Archive for the 'Systems Management' Category

IBM 3090 Training

Between 2001 and 2004, I had an office in the home of the mainframes, IBM Poughkeepsie, in Building 705. As a Brit, it wasn’t my natural home; and since I wasn’t a developer or a designer, but a software architect focusing on software and application architectures, it never quite felt like home.

IBM Library number ZZ25-6897.

One day, on my way to lunch at the in-house cafeteria, I walked by a room whose door was always closed. This time the door was open, and there was a buzz of people coming from it. A sign outside said “Library closing. Take anything you can use!”

I have some great books, a few of which I plan to scan and donate the output to either the Computer History Museum, or to the Internet Archive.

One of the more fun things I grabbed was a few IBM training laserdiscs. I had no idea what I’d do with them; I had never owned a laserdisc player. I just thought they’d look good sitting on my bookshelf, especially since they are the same physical size as vinyl albums.

Now, 16 years on, I’ve spent the last four years digitising my entire vinyl collection, in total some 2,700 albums. One of my main focus areas has been the music of jazz producer Creed Taylor. One side effect is that I’ve created a new website, ctproduced.com. In record collecting circles, I’m apparently a completionist: I try to buy everything.

And so it was that I started acquiring laserdiscs by Creed Taylor. It took a while, and I’m still missing Blues At Bradleys by Charles Fambrough. While I’ve not got around to writing about them in any detail, you can find them at the bottom of the entry here.

What I had left were the IBM laserdiscs. On Monday I popped the first one in; it was for the IBM 3090 Processor Complex. It was a fascinating throwback for me. I’d worked with IBM Kingston on a number of firmware and software availability issues, both as a customer and later as an IBM Senior Software Engineer.

I hope you find the video fascinating. The IBM 3090 Processor was, to the best of my knowledge, the last of the real “mainframes”. Sure, we still have IBM processor architecture machines that are compatible with the 3090 and earlier architectures. However, the new systems, more powerful and more efficient, are typically single-frame systems. And while a Parallel Sysplex can support multiple mainframes, it doesn’t require them. Enjoy!

Join the Foglight beta

If you read the prior post, a Q&A with our VP of Monitoring, Steve Rosenberg, and want to know more, or would just like to try our future Foglight app monitoring solution out, it’s now available in beta here.

Dell Software VP: lightweight app monitoring is, well, just too lightweight – CWDN

A good interview with Steve Rosenberg on our app monitoring strategy and approach.

Dell Software VP: lightweight app monitoring is, well, just too lightweight – CWDN.

Response time monitoring for AJAX and Javascript

[Updated 10/31, 7:50pm central] John Newsom, VP of our APM (Application Performance Monitoring) team, has had a great overview of the issues and challenges around Web 2.0 monitoring published in The DataCenter Journal. He discusses the three main issues:

  • Inadequate code-level analysis
  • Incorrect page response times
  • Insufficient context

and the key ways you can address application monitoring, including: 1. capturing functional issues and establishing context; 2. capturing and troubleshooting JavaScript errors; 3. looking for detailed insight into page load times; and finally, 4. isolating problems to individual page elements.

Overall it’s a great read and served as a great refresher for a couple of issues I’m currently looking at in one of my projects. CTR (Computer Technology Review) has a good fly-by of Foglight APM. You can read it here. Foglight can help you monitor and manage your applications, middleware and systems.

More on the Dell PowerEdge VRTX

My blog is called “Adventures in SystemsLand”, and while I’ve diverted off onto another one of those occasional career tracks that has me working in a non-systems area, systems management remains something I will continue to post on.

Tomorrow, the Dell Tech Center are having one of their regular Dell TechChats, this one on the systems management features of VRTX. It starts at 3pm Central time.

You’ve seen the announcements of the new VRTX product launch, heard the VRTX Systems Management Overview by Kevin Noreen, and seen the videos, so take it one step deeper into the feature details with Roger Foreman, Product Manager for the Chassis Management Controller.

Dell TechCenter page – Del.ly/VRTX

Introducing PowerEdge VRTX – Direct2Dell Blog

VRTX Product Page – http://www.dell.com/us/business/p/poweredge-vrtx/pd

I’ve put it in my calendar and will be listening in; join me.

Dell Software – Accelerating Results

John Swainson, Dell Software Group

Today was a major day for the Dell Software group. Out in San Francisco, many of our team and some great customers were talking about real Dell Software products. Why was this major?

Dell Software BYOD Reality

Because it wasn’t about strategy, and it wasn’t about an acquisition; it was about real problems and the Dell Software products that customers are using to address those problems. There were some great customer speakers, as well as keynotes and breakout panels. The whole thing was streamed live via Livestream; recordings are already up and available.

Infographic

Big up also to the marketing team; I must admit Dell puts together some great infographics, and this one was one of the best.

[Update: A couple of emails came in. Here is a useful written summary page with links in a Press Release.]

New Servers, New Software and more

Dell announced our Dell PowerEdge 12th Generation Servers on Monday, and as always, the hardware garnered much of the interest; it’s tangible and you can see it, as in this picture of my boss, Dell VP/GM of Server Solutions Forrest Norrod, holding up our new 4-up M420 blade server. However, alongside the hardware were a ton of announced and unannounced new software features.

iDRAC7

The first worth a mention comes from our team: out-of-band management for updating the BIOS and firmware and managing hardware settings, independent of the OS or hypervisor, throughout a server’s life cycle, plus initial deployment of an OS for a physical server or a hypervisor for virtual machines. That function is delivered by the Integrated Dell Remote Access Controller 7 with Lifecycle Controller (iDRAC7).

It is an all-in-one, out-of-band systems management option to remotely manage Dell PowerEdge servers. In iDRAC7, we have combined hardware enablement capabilities into a single, embedded controller that includes its own processor, power, and network connection, and works without OS agents, even when the OS or hypervisor isn’t booted. The iDRAC7 architects have worked with marketing to pull together a useful summary of the capabilities, which can be found here.

OpenManage Essentials

The next software initiative announced was the 1.0.1 release of OpenManage Essentials (OME). We listened to customers when it came to management consoles, and while a lot of companies liked what we’d been doing and our partnership with Symantec for Dell Management Console, many of our smaller customers, and a few bigger ones, wanted a simpler monitoring console that was quicker and easier to deploy. OME is it. There is a full OME wiki page here, and development lead Rob Cox has summarised the 1.0.1 update here.

OpenManage Power Center

OpenManage Power Center wasn’t formally announced, but it was covered in slides and some presentations, because it’s linked to some of the advanced power management features of our servers. The Fresh Air Initiative, Energy Smart design, and the introduction of OpenManage Power Center in our 12th generation servers have the potential to change the way you power and manage power distribution across servers, racks and more.

Dell Virtual Network Architecture

There is a new wiki covering the announcement of the Dell Virtual Network Architecture, which has at its foundation high-performance switching systems for campus and data centers; virtualized Layer 4-7 services; comprehensive automation and orchestration software; and open workload/hypervisor interfaces. Our VNA framework aims to extend our current networking and virtualization capabilities across branch, campus and data center environments with an open networking framework for efficient IT infrastructure and workload intelligence. Shane Schick over on IT World Canada has a good summary.

Oh yeah, there was hardware too… Timothy Prickett Morgan has a useful summary over at vulture central, and the Dell summary page is here.

Simplicity – It’s a confidence trick

My friend, foil and friendly adversary James Governor posted a blog entry today entitled “What if IBM Software Got Simple?”

It’s an interesting and appealing topic. It was in some respects what got in our way last year, and it was also what was behind the 1999 IBM Autonomic Computing initiative: let’s just make things that work. It’s simple to blame the architects and engineers for complexity, and James is bang-on when he says “When I have spoken to IBM Distinguished Engineers and senior managers in the past they have tended to believe that complexity could be abstracted”.

There are two things at play here, both apply equally to many companies, especially in the systems management space, but also in the established software marketplace. I’m sure James knows this, or at least had it explained. If not, let me have a go.

On Complexity

Yes, in the past software had to be complex. It was widely used and installed on hundreds of thousands of computers, often as much as ten years older than the current range of hardware. It was used by customers who had grown up over decades with specific needs, specific tools and specific ways of doing things. Software had to be upgraded pretty much non-disruptively; even at release and version boundaries you pretty much had to continue to support most, if not all, of the old interfaces, applications, internal data formats and APIs.

If you didn’t, you had a revolt on your hands in your own customer base. I can cite a few outstanding examples where the software provider misunderstood this and learned an important lesson each time; I would also go as far as to suggest that each such product release marked the beginning of the end: VM/SP R5, where IBM introduced a new, non-compatible, non-customer-led UI; VM/XA Migration Aid, where IBM introduced a new, non-compatible, lightweight CMS VM OS; and of course, from the x86 world, Microsoft Vista.

For those products a decision was taken at some point in the design to be non-compatible: drop old interfaces, or deliberately break them to support the new function or architecture. This is one example where change brings complexity. The other is where you choose to remain compatible and carry the old interfaces and APIs. This means that everything from the programming interface to the tools, compilers, debuggers etc. now has to support either two versions of the same thing, or one version that behaves differently.
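To make that cost concrete, here is a hedged sketch (the function names and parameters are invented for illustration, not drawn from any of the products above) of what carrying “two versions of the same thing” looks like at the API level:

```python
import warnings

# New-style API: keyword arguments with clearer names.
def start_vm(name: str, *, cpus: int = 1, memory_mb: int = 512) -> dict:
    return {"name": name, "cpus": cpus, "memory_mb": memory_mb, "state": "running"}

# Old-style entry point kept alive as a shim so existing callers keep working.
# From now on it must be documented, tested and supported alongside the new one.
def startvm(name, cpus=1, mem=512):
    warnings.warn("startvm() is deprecated; use start_vm()", DeprecationWarning)
    return start_vm(name, cpus=cpus, memory_mb=mem)

# Both spellings behave identically -- which is exactly the point, and the cost.
assert startvm("db01", mem=1024) == start_vm("db01", memory_mb=1024)
```

Every such shim doubles the surface that tools, debuggers and documentation have to cover, which is how compatibility quietly accumulates into complexity.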

Either way, when asked to solve a problem introduced by these changes over a number of years, the only real option is to abstract. As I’ve said here many times, automating complexity doesn’t make things simple, it simply makes them more complex.

On Simplicity

Simplicity is easy when you have nothing. Get two sticks, rub them together and you have a fire. It’s not so easy when you’ve spent 25 years designing and building a nuclear power station. What do I need to start a fire?

Simplicity is a confidence trick. Know your customers, know your market, ask what it will take to satisfy both, and stick to that. The less confident you are about either, the more scope creep you’ll get, and the less specific you’ll be about pretty much every phase of the architecture, the design and ultimately the product. In the cloud software business this is less of an issue; you don’t have releases per se. You roll out function, and even if you are not in “Google perpetual beta mode” you don’t really have customers on back releases of your product, and you are mostly not waiting for them to upgrade.

If you have a public API you have to protect and migrate it, but otherwise you take care of the customers’ data, and as you push out new function, they come with you. Since they don’t have to do anything, and for many of the Web 2.0 sites we’ve all become used to, don’t have any choice or advance notice, it’s mostly no big deal. However, there is still a requirement for someone who knows the customer and knows what they want. In the Web 2.0 world that’s still the purview of a small cadre of top talent: Zuckerberg, Jobs, Williams, Page, Schmidt, Brin et al.

The same isn’t true for those old world companies, mine included. There are powerful groups and executives who have a vested interest in what products are built and how they are designed, architected and delivered. They know their customers, their markets and what it will take to satisfy them. This is how old school software was envisaged: a legacy, a profit line, even a control point.

The alternative to complexity is to stop and either start over, or at least go back over multiple product cycles and take out all the complexity. This brings with it a multi-year technical debt, and often a negative op-ex, that most businesses and product managers are not prepared to carry. It’s simpler, easier and often quicker to acquire and abandon. In with the new, out with the old.

Happy New Year! I Need…

Simplicity versus, well, non-simplicity

I’ve had an interesting week. Last Friday my corporate Blackberry Torch, only two months old, was put in a ziploc bag with my name on it, and I was given a Dell Venue Pro phone with Windows Phone 7 in its place. I’ve written a detailed breakdown of what I liked and didn’t like. The phone itself is pretty rock solid: well designed, nice size and weight, and a great screen. Here is a video review which captures my views on the phone itself, a great piece of work from Dell.

What is interesting though is the Windows Phone software. Microsoft have obviously put a lot of time and effort into the user interface and design experience. Although it features the usual finger touch actions we’ve come to expect, the UI itself, and the features it exposes, have been carefully designed to make it simple to do simple things. There really are very few things you can change or alter: almost no settings, only very minimal menu choices, and so on.

What makes this interesting for me is that this is exactly the approach we’ve taken with our UI. When trying to take 79 steps, involving seven different products, and simplify and automate them, it would be easy to make every step really complicated and just reduce the number of steps. However, all that does is increase the chance of getting something wrong at each step. My experience with this type of design is that not only is the human operator more likely to make a mistake, but the number of options, configurations and choices drives up the complexity, testing costs become prohibitive, and eventually mistakes are made: unexpected combinations go untested, and tests are run only in orthogonal configurations.
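That testing cost is easy to quantify. As a sketch (the step and option counts below are invented for illustration, not from our actual product), the number of distinct configurations to test grows multiplicatively with the choices exposed at each step:

```python
from itertools import product
from math import prod

# Hypothetical workflow: each step exposes some number of user-visible options.
options_per_step = [3, 2, 4, 2, 3]  # invented values for illustration

# Every combination of choices is, in principle, a distinct configuration to test.
total_configs = prod(options_per_step)
print(total_configs)  # 3*2*4*2*3 = 144

# Removing choices, rather than steps, is what shrinks the test matrix:
simplified = [1, 1, 4, 1, 3]  # same steps, most options fixed to sane defaults
print(prod(simplified))  # 12

# Exhaustive testing really does mean enumerating every combination.
all_combos = list(product(*[range(n) for n in options_per_step]))
assert len(all_combos) == total_configs
```

Which is why simplifying the choices at each step, rather than merely reducing the step count, is what actually keeps the testing tractable.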

Back when the Autonomic Computing initiative was launched some ten years ago at IBM, there seemed to be two diametrically opposed desires. One was to simplify technology; the other was to make systems self-managing. The problem with self-managing is that it introduces an additional layer, in many cases, to automate and manage the existing complexity. To make this automation more flexible and more adaptable, it was made more sophisticated and thus more complex. The IBM Autonomic Computing website still exists, and while I’m sure the research has moved on, as have the products, the mission and objectives are the same.

Our Virtual Integrated System work isn’t anywhere near as grandiose. Yet, in a small way, it attempts to address what’s at the core of IBM’s Autonomic Computing: how to change the way we do things, how to be more efficient and effective with what we have. And that takes me back to Windows Phone 7. It’s great at what it does, but as a power user, it doesn’t do enough for me. I guess what I’m hoping at this point is that we’ll create a new category of system: neither simple nor complex, it does what you want, the way you want it, but with flexibility. We’ll see.

What’s on your glass?

James Governor, @monkchips, makes some great points about UI design in his latest blog post. James discusses how Adobe is changing its toolchain to better support and endorse HTML5, and how open is a growth accelerator, not just a philosophical perspective. He gets a useful plug in for the Dell Streak, and its role as a piece of glass too 😉

I’ve alluded to it here before: we are heading in the same direction, both for our PowerEdge 12g Lifecycle Controller and iDRAC UI for one-to-one management of our servers, and for the simplified UI for the Virtual Integrated System, aka VIS. Flash/Flex/Silverlight had their time; they solved problems that at the time couldn’t be solved any other way. However, it was clear to me, and I suspect to all those involved in the HTML5 standards efforts, that we were headed down a dead end of walled gardens. What put this in perspective for me wasn’t James’ post, but one from fellow Redmonk analyst Cote, last year, in which he discussed the web UI landscape.

Web UI Landscape by Cote of Redmonk

The details actually were not important. Cote, ostensibly discussing Apache Pivot, summarizes by saying “Closed source GUI frameworks have a tough time at it now-a-days, where-as open source ones by virtue of being free and open, potentially have an easier time to dig into the minds of Java developers.”

But really, it was the diagram that accompanied the article that did it for me. It laid out the options as a flower, and as we know, flowers are grown in gardens; in this case, each was being cultivated in its own walled garden.

I cancelled the Flash/WSMAN[1] proof of concept we’d built for the gen-next UI, and decided the right move was to adopt a more traditional MVC-like approach using open standards for our UI strategy.

We don’t have a commitment yet to deliver or exploit HTML5, but we’ve already adopted a REST style using HTTP for browser and HTML clients to interact with a number of our products, using JavaScript and JSON, and building towards having a foundation of reusable UI artifacts. Off the back of this we’ve already seen some useful Android pilots.
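As a minimal sketch of that REST-plus-JSON style (the resource path and JSON fields below are hypothetical, not our actual product API), a browser, a script or a mobile client can all fetch the same JSON representation over plain HTTP:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical resource: a server's state, exposed at a REST-style path.
SERVERS = {"rack1-blade3": {"power": "on", "health": "ok", "firmware": "1.0.1"}}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # /servers/<name> returns one resource as JSON; anything else is 404.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "servers" and parts[1] in SERVERS:
            body = json.dumps(SERVERS[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

# Bind an ephemeral local port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/servers/rack1-blade3"
with urllib.request.urlopen(url) as resp:
    state = json.load(resp)
print(state["health"])  # ok
server.shutdown()
```

Because the representation is plain JSON over HTTP, the same endpoint can serve an HTML/JavaScript console, an Android pilot, or a script equally well, which is the point of the approach.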

Which takes us back to James’ post. He summarizes with “If the world of the API-driven web has taught us anything its that you can’t second guess User Interfaces. Someone else will always build one better. If you’re only allowing for deployment on one platform that cuts you off from innovation.” Right on the money.

DISCLOSURE:
Redmonk are providing technology analysis for Dell’s Virtual Integrated System; James and I have had professional contact since 1996.

NOTES:
[1] WSMAN remains our key technology implementation for external partners and consoles to use to get information from the servers, to send updates, etc.

Senior Architect – Enterprise Systems Management and more

With things really rolling here at Dell on the software front, we are still in the process of hiring, and are looking for some key people to fit into, or lead, teams working on current and future software projects. Currently these are based with our team here in Round Rock, TX. However, I’d also like to hear from you if you’d be interested in joining our Dell west coast software labs in Sunnyvale and Palo Alto.

Here are a few of the current vacancies:

Senior SOA Architect – Enterprise Systems Management
Performance Engineer – SOA Infrastructure Management
Senior Java Developer – Systems Management
Senior Software Engineer – Systems Management-10069ZNS

Depending on how you count, there are over 100 of us now working on the VIS and AIM products, with a whole lot more to come in 2011. Come join me, help make a fundamental change at Dell, and be in on the beginning of something big!

Dell’s Virtual Integrated System

Open, Capable, Affordable - Dell VIS

Travel is always interesting; you learn so many new things. And so it was today. We arrived in Bangalore yesterday to bring two of the sprint teams in our “Maverick” design and development effort up to speed.

In an overview of the “product” and its packaging, we briefly discussed naming. I was under the impression that we’d not started publicly discussing Dell’s Virtual Integrated System (VIS); well, I was wrong, as one of the team pointed out.

It turns out a Dell.com web site already has overview descriptions of three of the core VIS offerings: VIS Integration Suite, VIS Delivery Center, and VIS Deploy infrastructure. You can read the descriptions here.

Essentially, Maverick is a services oriented infrastructure (SOI), built from modular services, pluggable components, transports and protocols, that will allow us to build various product implementations and solutions from a common management architecture. It’s an exciting departure from traditional monolithic systems management products, or the typically un-integrated products which use different consoles and different terms for the same things, and which, to get the best use out of them, require complex and often long services projects, or require you to change your business to match their management model.
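As a loose illustration of what modular, pluggable components can mean here (the names and interfaces below are invented for the example, not Maverick’s actual design), a common management core can dispatch through whichever transport a given product plugs in:

```python
from typing import Callable, Dict

# Registry mapping a transport name to a send function: pluggable components
# sitting behind one common interface.
TRANSPORTS: Dict[str, Callable[[str, dict], str]] = {}

def transport(name: str):
    """Decorator registering a transport implementation under a name."""
    def register(fn):
        TRANSPORTS[name] = fn
        return fn
    return register

@transport("inproc")
def send_inproc(target: str, message: dict) -> str:
    # A stand-in for a real bus (JMS, HTTP, ...): just echo an acknowledgement.
    return f"delivered {message['op']} to {target} via inproc"

def dispatch(transport_name: str, target: str, message: dict) -> str:
    # The common core doesn't care which transport a product plugged in.
    return TRANSPORTS[transport_name](target, message)

print(dispatch("inproc", "node-7", {"op": "inventory"}))
```

The same core logic then ships in different products simply by registering different transports, which is the appeal of an SOI over a monolithic console.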

Blades a go-go in Austin

We’ve been working on some interesting technology prototypes of our common software architecture. It forms the core of the “Maverick” virtualization solution, the orchestrator for the Dell Virtual Integrated System (VIS). [More on this in a follow-on post.]

We have a far reaching outlook for the common software architecture including embedded systems. One thing I’ve been looking at is creating a top-of-rack switch, with an embedded management server. We demonstrated it to Michael Dell and the Executive Leadership Team on Monday to show them where we are with software.

The same stack and applications will serve the next generation Blade Chassis Management Controller (CMC). For VIS, we are building a set of “adjacency” services so that it can scale to thousands of physical servers. So it was with some interest that I saw this piece in the Austin American-Statesman, our “local” paper. It covers the new $9 million supercomputer at the J.J. Pickle Research Campus of the University of Texas, to be installed next year.

The newest “Lonestar” system will be built and deployed by the Texas Advanced Computing Center; it’s expected to be operational by February 2011 and will include 1,888 M610 PowerEdge blade servers from Dell Inc., each with two six-core Intel X5600 Westmere processors.

Our VP of Global higher education, John Mullen, was quoted as saying “The system will be built on open-system architecture, which means it can be expanded as needed, that’s a cost-effective switch from proprietary systems of the past.”

Another coincidence for me: the entrance to the J.J. Pickle campus is right opposite the entrance to my old IBM office on Braker Lane, proving once again that old adage, as one door closes, another opens.

Got ServiceMix?

If you’ve been keeping an eye on the news and job listings at Dell, you’ll have seen a number of positions open up over the last three months for Java and service bus developers, not to mention our completed acquisition of Scalent. We are busy working on the first release of the Dell “soup to nuts” virtualization management, orchestration and deployment software, one of the core technologies of which is Apache ServiceMix.

One of the open positions we’ve got is for a Senior Software Engineer with solid ServiceMix skills from a programming perspective. This job listing is for that position; the job description and skills will be updated over the next few days, but if you’d like to join the team architecting, designing and programming Dell’s first real software product, one that’s aiming at making the virtual data center easy to use, as well as open, capable and affordable to run, go ahead and apply now.

If you make it through the HR process, I’ll see you at the interview…

WSMAN for the masses

Well, sort of. We are starting to hear a lot of questions and interest in our implementation of WSMAN in the Dell PowerEdge 11g management products. Chris Poblete, a development engineer on our team, has started the first of a series of simple-ish how-tos on using WSMAN.

You can find Chris’s entries over on the formal Dell TechCenter blogs; the first entry serves as some simple background info, and he gets into code in the second and subsequent posts.
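As background for those posts, it helps to remember that WS-Management is SOAP over HTTP(S) underneath. The sketch below builds the standard WS-Management Identify request, the spec’s “hello” operation; the endpoint URL, host name and authentication details in the trailing comment are placeholders, so treat this as an illustration rather than a recipe:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://www.w3.org/2003/05/soap-envelope"
WSMID_NS = "http://schemas.dmtf.org/wbem/wsman/identify/1/wsmanidentity.xsd"

def build_identify_envelope() -> bytes:
    """Build the DMTF WS-Management Identify request.

    Identify asks the service for its protocol version and vendor,
    which makes it a safe first smoke test against a management endpoint.
    """
    ET.register_namespace("s", SOAP_NS)
    ET.register_namespace("wsmid", WSMID_NS)
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    ET.SubElement(env, f"{{{SOAP_NS}}}Header")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    ET.SubElement(body, f"{{{WSMID_NS}}}Identify")
    return ET.tostring(env, encoding="utf-8", xml_declaration=True)

envelope = build_identify_envelope()
print(b"Identify" in envelope)  # True

# Sending it (not executed here; host and credentials are placeholders):
#   import urllib.request
#   req = urllib.request.Request(
#       "https://idrac.example.com/wsman", data=envelope,
#       headers={"Content-Type": "application/soap+xml;charset=UTF-8"})
#   # ...add basic auth and TLS handling, then urllib.request.urlopen(req)
```

From there, Chris’s posts cover the real operations (enumerate, get, invoke) against the actual management classes.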


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and member of the IBM Academy of Technology. I am a Fellow of the British Computer Society (bcs.org). I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
