Archive for January, 2009

Dell OpenManage and Dell Management Console

Not sure how many Dell customers I have for my blog yet since it’s early days, but I just learned from Scott Hanson’s SystemEdge blog on Direct2Dell that tomorrow, Thursday Jan. 29th at 11am (Central), our Executive Briefing Center is going to do an hour-long demo of the Dell OpenManage platform management and the short-term roadmap, as well as the upcoming Dell Management Console (DMC).

The demo will be rerun a number of times this year, so although I know it’s late notice for Thursday, you can go here to sign up for any of the events. The web conferences will use Microsoft Live Meeting.

I’d be really interested in any feedback on OpenManage and DMC. While one of my short-term focus items is platform and firmware management and structure for the 11g server platform, I’m also looking at the longer-term 12g platforms, especially their integration with other Dell platforms like EqualLogic, but increasingly with partner management platforms from EMC, Microsoft, Cisco, etc.

If you have any colleagues who work in operations, networking or storage in the x86 space, please pass the link along, and ask them if they have time to leave comments here. Thanks!

IBM et clouds

I note from Coté’s People Over Process RedMonk blog that Sam Palmisano, CEO at IBM, has given Erich Clementi, Vice President, Strategy and General Manager of Enterprise Initiatives, the mission of sorting out IBM’s disparate cloud initiatives.

I worked for Erich for three years; he was a great second-line manager, with good vision and great business acumen, and as a former Systems Engineer he understood technology better than I think a lot of people would give him credit for.

If I were still at IBM, I’d have loved to work with Erich on this and helped him carve through the silos, the marketing treacle, the services dilemma and the hosting potential. I only hope he doesn’t allow himself to get tied up in knots by people trying to define global architectures and claiming to lead by creating all-encompassing standards.

Reuse, recycle, repair – Oral-B disaster

Some time ago I commented on the repair status of iPhones. Like my no-longer-used Palm Treo, which was subsequently the target of a class action lawsuit over breakdowns, the iPhone has some expensive repair options, and I said at the time: “Is it unreasonable to expect the designers of one of the best gadgets of the last few years to think about how they are serviced, refurbished and disposed of? I think not.

We simply can’t go on forever buying stuff and dumping the old, unwanted broken stuff without regard.”

Oral-B Pulsar vibrating toothbrush. Picture: Attribution-NonCommercial-ShareAlike, some rights reserved by Inju.

And so it was that last Friday evening I was making my usual dash up and down the aisles at the grocery store. I don’t make a list; since I live alone, I can mostly look at an aisle and decide if I need to go down it.

I knew my toothbrush had reached the “sorry” stage and needed to be replaced. I’d owned one of those electric ones with the big handle that took two AA batteries and had two distinct heads, one that rotated and one that moved up and down. But the head wasn’t really big enough for me, so I’d stopped using it.

As I scanned the racks and racks of toothbrushes, I spotted one by Oral-B that looked like it had a good-sized head, picked it up and threw it in my shopping cart.

Yesterday morning I read this excellent post by Adobe all-around good guy Duane Nickull on how to improve Vancouver. Duane lists a staggering number of good, simple steps, including “2. Immediately ban the sale of the following items from store shelves within Vancouver:”, followed by a number of common-sense things.

I’d like to add at least one, Duane: this Oral-B Pulsar toothbrush. This is outrageous. When I opened it, rather than having bought a regular toothbrush, I found I’d bought an electric one. Worse than that, the toothbrush couldn’t be opened, and the packet specifically said that the battery was not designed to be replaced.

Given the brush part isn’t going to last more than 6–8 weeks of brushing twice per day for a reasonable amount of time, that means I’d be wasting seven entirely good electric motors per year; worse still, I’d be deliberately sending seven AA batteries to landfill per year, with all the environmental impact that has. It’s not unreasonable to assume that Procter and Gamble will sell at least half a million of these each year; the landfill consequences of dumping those batteries are unforgivable.

I’ve written to Oral-B telling them that I’m boycotting their toothbrushes until they withdraw this product, or at least modify it so the battery can be removed and the instructions tell you to remove it before discarding the brush. Please do the same. Their contact details are here.

[Update: My email submission was assigned ‘090127-000612’.]

What’s up with industry standard servers? – The IBM View

I finally had time to read through the IBM 4Q ’08 results yesterday evening. It is good to see that Power Systems saw revenue growth for the 10th straight quarter, and that virtualization and high utilization rates are driving sales of both mainframe and Power servers.

I was somewhat surprised, though, to see the significant decline (32%) in x86 server sales, System x in IBM nomenclature, put down to strong demand for “virtualizing and consolidating workloads into more efficient platforms such as POWER and mainframe”.

I certainly didn’t see any significant spike in interest in Lx86 in the latter part of my time with IBM, and as far as I know, IBM still doesn’t have many reference customers for it, despite a lot of good technical work going into it. The focus from sales just wasn’t there. So that means customers were porting, rewriting or buying new applications, not something that would usually show up in quarterly sales swings, but rather as a long-term trend.

Seems to me the more likely reason behind IBM’s decline in x86 was simply, as Bob Moffat [IBM Senior Vice President and Group Executive, Systems & Technology Group] put it in his December ’08 interview with CRN’s ChannelWeb, referring to claims by HP’s Mark Hurd: “The stuff that Mr. Hurd said was going away kicked his ass: Z Series [mainframe hardware] outgrew anything that he sells. [IBM] Power [servers] outgrew anything that he sells. So he didn’t gain share despite the fact that we screwed up execution in [x86 Intel-based server] X Series.”

Moffat is quoted as saying IBM screwed up x86 execution multiple times, so one assumes at least Moffat thinks it’s true. And yes, as I said on Twitter, yesterday was a brutal day in the tech industry: the Intel and Microsoft layoffs, the poor AMD results, the IBM screw-up in sales, and Sun starting previously announced layoffs all show, as the IBM results say, that industry standard hardware is susceptible to the economic downturn. I’d disagree with the IBM results statement, though, that industry standard hardware is “clearly more susceptible”.

My thoughts and best wishes go out to all those who found out yesterday that their jobs were riffed, surplused or rebalanced. Many of those, including ten people I know personally, did not work in the x86, or as IBM would have it, “industry standard”, hardware business.

The Windows Legacy

My good friend and fellow Brit Nigel Dessau posted his thoughts, and to some degree his frustrations, with Windows Vista and potentially Windows 7 today on his personal blog, here.

The problem, of course, is that they are stuck in their own legacy. If I were Microsoft, I’d declare that Windows 8 would only support Windows 7 and earlier apps and drivers in a virtual machine.

They’d declare a bunch of their lower-level interfaces deprecated in Windows 7, accessible in Windows 8 only from within a Windows 7 VM.

Then they’d make their Windows virtual machine technology abstract all physical devices, so that Windows could handle them how they thought best, and wouldn’t let applications talk to devices directly, only via the abstraction. They would have generic storage, generic network, and generic graphics interfaces that applications could write to and Microsoft would deal with everything else.

This would initially limit the number of devices that would be supported, but that’s really the status quo anyway. They would declare how devices that want to play in the Windows space should behave and publish the specs, and Microsoft would own the testing and, to a degree, validation of almost all drivers; or they could farm this out to a separate organization that would independently certify the device, not write the code. Once they stabilised the generic interfaces, though, the whole Windows system itself would become more stable.

This would be a big step for Microsoft. When you look at the Windows ecosystem, there are hundreds of thousands of Windows applications and utilities. Way too many of them, though, exist to deal with the inadequacies of Windows itself, or missing function. Cut out the ability to write these sorts of applications and there will be at least an infrastructure-developer backlash. It might even provoke more antitrust claims. While I know nothing about the iPhone, this would likely put Windows 8 in a similar position with respect to developers.

For all I know, this could be what they have in mind; it’s an area I need to get up to speed on with them, along with the processor roadmaps for AMD and Intel, as well as understanding where Linux is headed.

Oh, now it’s legacy IT that’s dead. Huh?

I got a pingback from Dana Gardner’s ZDNet blog for my “Is SOA dead?” post. Dana, rather than addressing the issue I raised yesterday, just moved the goalposts, claiming “Legacy IT is dead“.

I agree with many of his comments; my earlier post “Life is visceral“ is something Dana so ably goes on to prove with his. I liked some of the fine flowing language, some of it almost prosaic, especially this: “We need to stop thinking of IT as an attached appendage of each and every specific and isolated enterprise. Yep, 2000 fully operational and massive appendages for the Global 2000. All costly, hugely redundant, unique largely only in how complex and costly they are all on their own.” Whatever that means?

However, here’s a reasonable challenge for anyone considering jumping to a complete services or cloud/services model: not migrating, not having a roadmap or architecture to get there, but, as Dana suggests, grasping the nettle and just doing it.

One of the simplest and easiest examples I’ve given before for why what Dana would have as “legacy systems” exist is that there are some problems that just can NOT be split apart a thousand ways, whose data can NOT be replicated into a million pieces.

Let’s agree: Google handles millions of queries per second, as do eBay and Amazon, well, probably. However, in the case of the odd Google query not returning anything, as opposed to returning no results, no one really cares or understands; they just click the browser refresh button and wait. Pretty much the same for Amazon: the product is there, you click buy. If every now and again there is one item of a product left at an Amazon storefront and someone else has bought it between the time you looked for it and the time you decided to buy, you just suffer through the email saying the item will be back in stock in five days; after all, it would take longer than that to track down someone to discuss it with.

If you ran your banking or credit card systems this way, no one would much care when it came to queries. Sure: your partner is out shopping, you are home working on your investments. Your partner goes to get some cash, checks the balance, and the money is there. You want to transfer a large amount of money into a money market account; you check and the amount is there, you’ll transfer some more into the account overnight from your savings and loan, and you know your partner only ever uses credit, right? You both proceed. A real transactional system lets one of you proceed and the other fail, even if there is only a second, and possibly less, between your transactions coming in.
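That serialization requirement can be sketched with a toy example. This is purely a hypothetical illustration (a real bank serializes through database transactions across many machines, not an in-process lock), but the principle of atomically checking and debiting the balance is the same:

```python
import threading

class Account:
    """Toy account: a sketch of serialized debits, not a real banking system."""

    def __init__(self, balance):
        self._balance = balance
        self._lock = threading.Lock()

    def withdraw(self, amount):
        # The balance check and the debit happen atomically under the lock,
        # so two callers can never both "see" the same funds and proceed.
        with self._lock:
            if amount > self._balance:
                return False   # transaction rejected: insufficient funds
            self._balance -= amount
            return True        # transaction committed

account = Account(1000)
you = account.withdraw(800)      # succeeds: the funds were there
partner = account.withdraw(800)  # fails: the money is already gone
```

Run both withdrawals against a shared balance of 1,000 and exactly one succeeds, no matter how close together they arrive; an eventually consistent query layer, by contrast, would happily let both callers see 1,000 and proceed.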

In the Google model this doesn’t matter; it’s all only queries. If your partner does a balance check a second or more after you’ve done a transfer and sees the wrong balance, it will only matter when they are embarrassed 20 seconds later trying to use a balance that isn’t there anymore.

Of course, you can argue banks don’t work like that; they reconcile balances at the end of the day. You will care, though, when that exception balance charge kicks in because both transactions worked. Most banks’ systems are legacy systems from a different perspective, and should be dead. We, as customers, have been pushing for straight-through processing for years; why should I wait three days for a check to clear?

So you can’t have it both ways. Out of genuine professional understanding and interest, I’d like to see any genuine transaction-based systems that are largely or wholly services-based or that run in the cloud.

In order to do what Dana advocates, move off ALL legacy systems, the replacement transaction systems need to cope with 1,000, and up to 2,000, transactions per second. Oh yeah, it’s not just banks that use “legacy IT”; there are airlines, travel companies, anywhere there is finite product and an almost infinite number of customers.

Remember, Amazon, eBay and PayPal don’t do their own credit card processing as far as I’m aware; they are just merchants who pass the transaction on to a, err, legacy system.

Some background reading should include a paper that I used early in my career, around the time I was advocating moving Chemical Bank NY’s larger transaction systems to virtual machines (which we did). I was attending VM Internals education at Amdahl in Columbia, MD, and one of the instructors thought I might find the paper useful.

It was written in 1984 by a team at Tandem Computers and Amdahl, including the late, great Jim Gray. Early on in the paper they describe environments that supported 800 transactions per second, in 1984. Yes, 1984. These days, even in the current economic environment, 1,000 tps is common and 2,000 tps is table stakes.

Their paper is preserved and online here.

And finally, since I’m all about context: I’m an employee of Dell; I started work there today. What is written here is my opinion, based on 34 years of IT experience, much of it garnered at the sharp end, designing an I/O subsystem to support a large NY bank’s transactional, inter-bank transfer system, as well as being responsible for the world’s first virtualized credit card authorization system, etc., but I didn’t work for Dell, or for that matter IBM, then.

Speakers’ Corner, anyone?

Life is visceral

After I posted my “Is SOA Dead?” entry, Joel Zimmerman, aka Deadmau5, reminded me of this quote from a speech Spiro Agnew gave just down the road in Houston, Texas, on 22 May 1970, where VP Agnew said in response to the Vietnam War riots: “Subtlety is lost, and fine distinctions based on acute reasoning are carelessly ignored in a headlong jump to a predetermined conclusion.”

“Life is visceral rather than intellectual. And the most visceral practitioners of life are those who characterize themselves as intellectuals. Truth is to them revealed rather than logically proved. And the principal infatuations of today revolve around the social sciences, those subjects which can accommodate any opinion, and about which the most reckless conjecture cannot be discredited. Education is being redefined at the demand of the uneducated to suit the ideas of the uneducated.”

You can read the full text here, or you can listen to it here. Why don’t people have diction like that anymore? Why have I taken to spelling everything the American way? And why are there no riots anymore over outrageous government actions?

About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and a member of the IBM Academy of Technology. I am a Fellow of the British Computer Society. I'm an information technology optimist.

I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
