Archive for the 'Systems Management' Category

Join the Foglight beta

If you read the prior post, a Q&A with our VP of Monitoring, Steve Rosenberg, and want to know more, or would just like to try our future Foglight app monitoring solution, it’s now available in beta here.

Dell Software VP: lightweight app monitoring is, well, just too lightweight – CWDN

A good interview with Steve Rosenberg on our app monitoring strategy and approach.

Dell Software VP: lightweight app monitoring is, well, just too lightweight – CWDN.

Response time monitoring for AJAX and Javascript

[Updated 10/31, 7:50pm central] John Newsom, VP of our APM (Application Performance Monitoring) team, has had a great overview of the issues and challenges around Web 2.0 monitoring published in The DataCenter Journal. He discusses the three main issues:

  • Inadequate code-level analysis
  • Incorrect page response times
  • Insufficient context

and the key ways you can address application monitoring:

  • Capturing functional issues and establishing context
  • Capturing and troubleshooting JavaScript errors
  • Looking for detailed insight into page load times
  • Isolating problems to individual page elements

Overall it’s a great read and served as a great refresher for a couple of issues I’m currently looking at in one of my projects. CTR (Computer Technology Review) has a good fly-by of Foglight APM. You can read it here. Foglight can help you monitor and manage your applications, middleware and systems.

More on the Dell PowerEdge VRTX

My blog is called “Adventures in SystemsLand”, and while I’ve diverted off to another of those occasional career tracks that has me working in a non-systems area, systems management remains something I will continue to post on.

Tomorrow, the Dell TechCenter are having one of their regular Dell TechChats, this one on the systems management features of VRTX. It starts at 3pm Central time.

You’ve seen the announcements of the new VRTX product launch, heard the VRTX Systems Management Overview by Kevin Noreen, and seen the videos, so take it one step deeper into feature details with Roger Foreman, Product Manager for the Chassis Management Controller.

Dell TechCenter page – Del.ly/VRTX

Introducing PowerEdge VRTX – Direct2Dell Blog

VRTX Product Page – http://www.dell.com/us/business/p/poweredge-vrtx/pd

I’ve put it in my calendar and will be listening in; join me.

Dell Software – Accelerating Results

Today was a major day for the Dell Software Group. Out in San Francisco, many of our team, and some great customers, were talking about real Dell Software products. Why was this major?

Because it wasn’t about strategy, and it wasn’t about an acquisition; it was about real problems and the Dell Software products customers are using to address those problems. There were some great customer speakers, as well as keynotes and breakout panels. The whole thing was streamed live via Livestream, and recordings are already up and available.

Big up also to the marketing team; I must admit Dell puts together some great infographics, and this one was one of the best.

[Update: A couple of emails came in. Here is a useful written summary page with links in a Press Release.]

New Servers, New Software and more

Dell announced our Dell PowerEdge 12th Generation Servers on Monday and, as always, the hardware garnered much of the interest; it’s tangible and you can see it, as in this picture of my boss and Dell VP/GM of Server Solutions, Forrest Norrod, holding up our new 4-up M420 blade server. Alongside the hardware, however, were a ton of announced and unannounced new features.

iDRAC7

The first worth a mention comes from our team: out-of-band management for updating the BIOS and firmware and managing hardware settings, independent of the OS or hypervisor, throughout a server’s life cycle, plus initial deployment of an OS for a physical server or a hypervisor for virtual machines. That function is delivered by the Integrated Dell Remote Access Controller 7 with Lifecycle Controller (iDRAC7).

It is an all-in-one, out-of-band systems management option for remotely managing Dell PowerEdge servers. In iDRAC7, we have combined the hardware enablement capabilities into a single, embedded controller that includes its own processor, power, and network connection, so it works without OS agents, even when the OS or hypervisor isn’t booted. The iDRAC7 architects have worked with marketing to pull together a useful summary of the capabilities; it can be found here.

OpenManage Essentials

The next software initiative announced was the 1.0.1 release of OpenManage Essentials (OME). We listened to customers when it came to management consoles, and while a lot of companies liked what we’d been doing and our partnership with Symantec for Dell Management Console, many of our smaller customers, and a few bigger ones, wanted a simpler console for monitoring that was quicker and easier to deploy. OME is it. There is a full OME wiki page here, and development lead Rob Cox has summarised the 1.0.1 update here.

OpenManage Power Center

OpenManage Power Center was not formally announced, but it was covered in slides and some presentations because it’s linked to some of the advanced power management of our servers. The Fresh Air initiative, Energy Smart design and the introduction of OpenManage Power Center in our 12th generation servers have the potential to change the way you power servers and manage power distribution across servers, racks and more.

Dell Virtual Network Architecture

There is a new wiki covering the announcement of the Dell Virtual Network Architecture, which has at its foundation: high-performance switching systems for campus and data centers; virtualized Layer 4-7 services; comprehensive automation and orchestration software; and open workload/hypervisor interfaces. Our VNA framework aims to extend our current networking and virtualization capabilities across branch, campus and data center environments with an open networking framework for efficient IT infrastructure and workload intelligence. Shane Schick over on IT World Canada has a good summary.

Oh yeah, there was hardware too… Timothy Prickett Morgan has a useful summary over at Vulture Central, and the Dell summary page is here.

Simplicity – It’s a confidence trick

My friend, foil and friendly adversary James Governor posted a blog entry today entitled “What if IBM Software Got Simple?”

It’s an interesting and appealing topic. It was in some respects what got in our way last year; it was also what was behind the 1999 IBM Autonomic Computing initiative: let’s just make things that work. It’s simple to blame the architects and engineers for complexity, and James is bang-on when he says “When I have spoken to IBM Distinguished Engineers and senior managers in the past they have tended to believe that complexity could be abstracted”.

There are two things at play here, both apply equally to many companies, especially in the systems management space, but also in the established software marketplace. I’m sure James knows this, or at least had it explained. If not, let me have a go.

On Complexity

Yes, in the past software had to be complex. It was widely used, installed on hundreds of thousands of computers, often as much as ten years older than the current range of hardware. It was used by customers who had grown up over decades with specific needs, specific tools and specific ways of doing things. Software had to be upgraded pretty much non-disruptively; even at release and version boundaries you pretty much had to continue to support most, if not all, of the old interfaces, applications, internal data formats and APIs.

If you didn’t, you had a revolt on your hands in your own customer base. I can cite a few outstanding examples of where the software provider misunderstood this, and an important lesson was learned each time; I would also go as far as to suggest that in each case the product release marked the beginning of the end: VM/SP R5, where IBM introduced a new, non-compatible, non-customer-led UI; VM/XA Migration Aid, where IBM introduced a new, non-compatible, lightweight CMS; and of course, from the x86 world, Microsoft Vista.

For those products a decision was taken at some point in the design to be non-compatible, to drop old interfaces or to deliberately break them to support the new function or architecture. This is one example of where change brings complexity; the other is where you choose to remain compatible and carry the old interfaces and APIs. This means that everything from the programming interface to the tools, compilers, debuggers etc. now has to support either two versions of the same thing, or one version that performs differently.

Either way, when asked to solve a problem introduced by these changes over a number of years, the only real option is to abstract. As I’ve said here many times, automating complexity doesn’t make things simple; it simply makes them more complex.

On Simplicity

Simplicity is easy when you have nothing. Get two sticks, rub them together and you have a fire. It’s not so easy when you’ve spent 25 years designing and building a nuclear power station. What do I need to start a fire?

Simplicity is a confidence trick. Know your customers, know your market, ask what it will take to satisfy both, and stick to it. The less confident you are about either, the more scope creep you’ll get, and the less specific you’ll be about pretty much every phase of the architecture, the design and ultimately the product. In the cloud software business this is less of an issue; you don’t have releases per se. You roll out function, and even if you are not in “Google perpetual beta” mode, you don’t really have customers on back releases of your product, and you are mostly not waiting for them to upgrade.

If you have a public API you have to protect and migrate it, but otherwise you take care of the customers’ data and, as you push out new function, they come with you. Since they don’t have to do anything, and for many of the Web 2.0 sites we’ve all become used to they don’t have any choice or advance notice, it’s mostly no big deal. However, there is still a requirement for someone who knows the customer and knows what they want. In the Web 2.0 world that’s still the purview of a small cadre of top talent: Zuckerberg, Jobs, Williams, Page, Schmidt, Brin et al.

The same isn’t true for those old-world companies, mine included. There are powerful groups and executives who have a vested interest in what products are designed, architected and delivered, and how. They know their customers, their markets and what it will take to serve them. This is how old-school software was envisaged: a legacy, a profit line, even a control point.

The alternative to complexity is to stop and either start over or, at least over multiple product cycles, go back and take out all the complexity. This brings with it multi-year technical debt, and often a negative op-ex that most businesses and product managers are not prepared to carry. It’s simpler, easier and often quicker to acquire and abandon. In with the new, out with the old.

Happy New Year! I Need…


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and member of the IBM Academy of Technology. I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
