Archive for the 'powervm' Category

Appliances – Good, bad or virtual?

So, in another prime example of “Why do analysts’ blogs make it so hard to have a conversation?”, Gordon Haff of Illuminata today tweeted a link to a new blog post of his on appliances. No comments allowed, no trackbacks provided.

He takes Chuck Hollis’s (EMC) post and offers various opinions on it. It’s not clear what the notion of “big appliance” is as Chuck uses it. Personally, I think he’s talking about solutions. Yes, I know it’s a fine line, but a large all-purpose data mining solution with its own storage, own server, own console, etc. is no more an appliance than a kitchen is. The kitchen will contain appliances, but it is not one itself. If that’s not what Chuck is describing, then his post is confusing: very few organizations will have a large number of these “solutions”.

On the generally accepted view of appliances, I think both Gordon and Chuck are being a little naive when they think that all compute appliances can be made virtual and run on shared resource machines.

While at IBM I spent a lot of time on, and learned some valuable lessons about, appliances. I was looking at the potential of the first generation of IBM-designed WebSphere DataPower appliances. At first, even three years ago, it seemed to me that turning them into a virtual appliance would be a good idea. However, I’d made the same mistake that Hollis and Haff make: assuming that the type of processing done in an appliance can be transparently replaced by the onward march of Moore’s Law on Intel and IBM Power processors.

The same can be said for most appliances I’ve looked at. They have unique hardware designs, which often include numerous specialized processing functions, such as encryption, key management and even environmental monitoring. The real value-add of appliances, though, is that they are designed with a very specific market opportunity in mind. That design requires complex workload analysis, reviewing the balance between general-purpose compute, graphics, security, I/O and much more, and producing a balanced design and, most importantly, a complete user experience to support it. That’s often the key.

Some appliances offer the sort of hardware based security and tamper protection that can never be replaced by general purpose machines.

Yes, Hollis and Haff make a fair point that these appliances need separate management, but the real point is that many of these appliances need NO management at all. You set them up, then run them. Because the workload is tested and integrated, the software rarely, if ever, fails. Since the hardware isn’t generally extensible (or, as Chuck would have it, you are locked into what you buy), updating drivers and introducing incompatibility isn’t an issue as it is with most general-purpose servers.

As for trading one headache for another, while it’s a valid point, my experience so far with live migration and pools of virtual servers, network switches, SAN setup etc. is that you are once again trading one headache for another. In a limited fashion it’s fairly straightforward to do live migration of a virtual workload from one system to another; doing it at scale, which is what is required if you’ve reached the “headache” point that Chuck is positing, is far from simple.

Chuck closes his blog entry with:

Will we see a best-of-both-worlds approach in the future?

Well, I’d say that’s more than likely; in fact it’s happening and has been for a while. The beauty of an appliance is that the end user is not exposed to the internal workings. They don’t have to worry about most configuration options and setup, management is often minimised or eliminated, and many appliances today offer “phone home” like features for upgrade and maintenance. I know, because we build many of them here at Dell for our customers, including EMC, Google etc.

One direction we are likely to see is that, in the same current form factor, an appliance will become fault tolerant by replicating key parts of the hardware, virtualizing the appliance and running multiple copies of the appliance workload within a single physical appliance, all once again delivering those workload- and deployment-specific features and functions. This in turn reduces the number of physical appliances a customer will need. So, the best of both worlds, although I suspect that’s not what Chuck was hinting at.

While there is definitely a market for virtual software stacks (complete application and OS instances), presuming that you can move all hardware appliances to this model misses the point.

Let’s not forget that SANs are often just another form of appliance, as are ToR/EoR network switches and things like the Cisco Nexus. Haff says that appliances have been around since the late 1990s; as far as I can recall, in the category of “big appliances”, the IBM Parallel Query Server, which ran a customized mainframe DB2 workload and attached to an IBM S/390 Enterprise Server, was around in the early 1990s.

Before that, many devices were in fact sold as appliances; they were just not called that, but by today’s definition that’s exactly what they were. My all-time favorite was the IBM 3704, part of the IBM 3705 communications controller family. The 3704 was all about integrated function and a unique user experience, with, at the time (1976), an almost space-age touch-panel user interface.

IBM Big Box quandary

In another follow-up from EMC World, the last session I went to was “EMC System z, z/OS, z/Linux and z/VM”. I thought it might be useful to hear what people were doing in the mainframe space, although it is largely unrelated to my current job. It was almost 10 years to the day since I was at IBM, writing the z/Linux strategy, hearing about early successes etc., and, strangely, current EMC CTO Jeff Nick and I were engaged in vigorous debate about implementation details of z/Linux the night before we went and told SAP about IBM’s plans.

The EMC World session demonstrated that, as much as things change, they stay the same. It also reminded me how borked the IT industry is, in that we mostly force customers to choose by pricing rather than function. 10-12 years ago z/Linux on the mainframe was all about giving customers new function, a new way to exploit the technology they’d already invested in. It was of course also intended to further establish the mainframe’s role as a server consolidation platform through virtualization and high levels of utilization.(1)

What I heard were two conflicting and confusing stories, at least they should be for IBM. The first was a customer who was moving all his Oracle workloads from a large IBM Power Systems server to z/Linux on the mainframe. Why? Because the licensing on the IBM Power server was too expensive. Using z/Linux and the Integrated Facility for Linux (IFL) allows organizations to do a cost-avoidance exercise: processor capacity on the IFL doesn’t count towards the total installed general processor capacity, and hence doesn’t bump up the overall software licensing costs for all the other users. It’s a complex discussion and that wasn’t the purpose of this post, so I’ll leave it at that.

This might be considered a win for IBM, but actually it was a loss. It’s also a loss for the customer. IBM lost because the processing was being moved from its growth platform, IBM Power Systems, to the legacy System z. It’s good for z since it consolidates its hold in that organization, or probably does. Once the customer has done the migration and conversion, it will be interesting to see how they feel the performance compares. IBM often refers to the IFL and its close relatives, the zIIP and zAAP, as specialty engines, giving the impression that they perform faster than the normal System z processors. That’s largely an urban myth, though: these “specialty” engines really only deliver the same performance; they are just measured, monitored and priced differently.

The customer lost because they’ve spent time and effort to move from one architecture to another, really only to avoid software and server pricing issues. While the System z folks will argue the benefits of their platform, and I’m not about to “dis” them, the IBM Power server can pretty much deliver a good enough implementation to make the difference largely irrelevant.

The second conflicting story I heard was from EMC themselves. The second main topic of the session was a discussion about moving some of the EMC Symmetrix products off the mainframe, as customers have reported that they use too much mainframe capacity to run. The guys from EMC were thinking of moving the function of those products to commodity x86 processors and then linking them via high-speed networking into the mainframe. This would move the function out of band and save mainframe processor cycles, which in turn would avoid an upgrade, which in turn would avoid bumping up the software costs for all users.

I was surprised how quickly I interjected and started talking about WLM SRM Enclaves and moving the EMC apps to run on z/Linux etc. This surely makes much more sense.

I was left, though, with a definite impression that there are still hard times ahead for IBM in large non-x86 virtualized servers. Not that they are not great pieces of engineering; they are. But getting to grips with software pricing once and for all should really be IBM’s prime focus, not a secondary or tertiary one. We were working towards pay-per-use once before; time to revisit, methinks.

(1) Spot the irony of this statement given the preceding “Nano, Nano” post!

Whither IBM, Sun and Sparc?

So the twitterati and blog space are alight with discussion that IBM is to buy Sun for $6.25 billion. The only way we’ll know if there is any truth to it is if it goes ahead; these rumors are never denied.

Everyone is of course focused on the big questions, which are mostly around hardware synergies (servers, chips, storage) and Java. Since I don’t work at IBM I have no idea what’s going on or if there is any truth to this. But there are more interesting technical discussions to be had than those offered by people who generally think they have an informed opinion.

IBM bought Transitive in 2008. Transitive has some innovative emulation software, called QuickTransit, which allows binaries created and compiled on one platform to be run on another hardware platform without change or recompilation. There were some deficiencies, and you can read more about this in my terse summary blog post at the time of the acquisition announcement. Prior to the acquisition, QuickTransit supported a number of platforms, including SPARC and PowerMac, and had been licensed by a number of companies, including IBM.

I assume IBM is in the midst of its classic “blue rinse” process, and that this explains the almost complete elimination of the Transitive web site(1); it’s nothing more sinister than getting ready to re-launch under the IBM branding umbrella of PowerVM or some such.

Now, one could speculate that by acquiring Sun, IBM would achieve three things that would enhance its PowerVM strategy and build on the Transitive acquisition. First, it could reduce the platforms supported by QuickTransit and, over time, not renegotiate the licensing agreements with third parties. This would give IBM “leverage” in offering binary emulation for the architectures previously supported on, say, only the Power and mainframe processor ranges.

Also, by further enhancing QuickTransit and driving it into the IBM microcode/firmware layer, making it more reliable and providing higher performance by reducing duplicate instruction handling, IBM could effectively eliminate future SPARC-based hardware by using UNIX-based Power hardware and PowerVM virtualization in its place. This would also have the effect of taking this level of emulation mainstream and negating much of the transient (pun intended) nature typically associated with this sort of technology.

Finally, by acquiring Sun, IBM would eliminate any IP barriers that might arise from the nature of the implementation of the SPARC instruction set.

That’s not to say there are no problems to overcome. First, as it currently stands, the emulation tends to map calls from one OS into another, rather than operating at a pure architecture level. Pushing some of the emulation down into the firmware/microcode layer wouldn’t help emulate “CALL SOLARIS API with X, Y”, even if it would emulate the machine architecture instructions executed to do this. So, is IBM really committed to becoming a first-class Solaris provider? I don’t see any proof of that since the earlier announcement; Solaris on Power is pretty non-existent. The alternative, which is much more likely, is that IBM would use Transitive technology to map these calls into AIX.
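To make that distinction concrete, here is a toy sketch in C. It is purely illustrative and in no way QuickTransit’s actual design: a dispatch loop handles the architecture-level work (turning guest “instructions” into host operations), while guest system calls have to be remapped onto whatever the host OS provides, through a separate table. Every opcode and call number below is invented for the example.

    /*
     * Toy illustration of dynamic binary translation, NOT QuickTransit's design.
     * Architecture-level work is handled in the dispatch loop; guest system
     * calls are remapped onto host equivalents through a separate path.
     * All opcodes and call numbers are made up.
     */
    #include <stdio.h>

    enum guest_op { G_ADD, G_SYSCALL, G_HALT };

    struct guest_insn { enum guest_op op; int a, b; };

    /* Host-side stand-ins for whatever the host OS actually provides. */
    static void host_write(int val) { printf("host write: %d\n", val); }
    static void host_exit(int val)  { printf("host exit: %d\n", val);  }

    /* The OS-interface part of the problem: remap the guest ABI onto the host. */
    static void dispatch_guest_syscall(int guest_callno, int arg)
    {
        switch (guest_callno) {
        case 4:  host_write(arg); break;   /* invented guest "write" number */
        case 1:  host_exit(arg);  break;   /* invented guest "exit" number  */
        default: printf("unmapped guest call %d\n", guest_callno); break;
        }
    }

    int main(void)
    {
        /* A three-"instruction" guest program: add, make a call, stop. */
        struct guest_insn prog[] = {
            { G_ADD, 2, 3 }, { G_SYSCALL, 4, 0 }, { G_HALT, 0, 0 }
        };
        int acc = 0;

        for (struct guest_insn *ip = prog; ; ip++) {
            switch (ip->op) {
            case G_ADD:     acc = ip->a + ip->b; break;                /* architecture-level work */
            case G_SYSCALL: dispatch_guest_syscall(ip->a, acc); break; /* OS-interface mapping */
            case G_HALT:    return 0;
            }
        }
    }

Faster hardware or firmware support mostly helps the first case; the second still depends on having a host OS, AIX or Linux in the speculation above, whose services the guest’s calls can sensibly be mapped onto.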

In economic downturns, big, cash-rich companies are kings. Looking back over the last 150 years, there are plenty of examples of big companies buying competitors and emerging from the downturn even more powerful. Ultimately I believe that the proprietary chip business is dead; it’s just a question of how long it takes to die, and whether regulators feel that allowing mergers and acquisitions in this space is good or bad for the economy and the economic recovery.

So, there’s a thought. As I said, I don’t work at IBM.

(1) It is mildly amusing to see that one of the few pages left extols the virtues of the Transitive technology, courtesy of one Mendel Rosenblum, formerly Chief Scientist and co-founder of VMware.

What’s up with industry standard servers? – The IBM View

I finally had time to read through the IBM 4Q ’08 results yesterday evening. It is good to see that Power Systems saw revenue growth for the 10th straight quarter, and that virtualization and high utilization rates are driving sales of both mainframe and Power servers.

I was somewhat surprised, though, to see the significant decline (32%) in x86 server sales, System x in IBM nomenclature, put down to strong demand for “virtualizing and consolidating workloads into more efficient platforms such as POWER and mainframe”.

I certainly didn’t see any significant spike in interest in Lx86 in the latter part of my time with IBM, and as far as I know IBM still doesn’t have many references for it, despite a lot of good technical work going into it. The focus from sales just wasn’t there. So that means customers were porting, rewriting or buying new applications, not something that would usually show up in quarterly sales swings, but rather as a long-term trend.

It seems to me the more likely reason behind IBM’s decline in x86 was simply as Bob Moffat [IBM Senior Vice President and Group Executive, Systems & Technology Group] put it in his December ’08 interview with CRN’s ChannelWeb, when referring to claims by HP’s Mark Hurd: “The stuff that Mr. Hurd said was going away kicked his ass: Z Series [mainframe hardware] outgrew anything that he sells. [IBM] Power [servers] outgrew anything that he sells. So he didn’t gain share despite the fact that we screwed up execution in [x86 Intel-based server] X Series.”

Moffat is quoted as saying IBM screwed up x86 execution multiple times, so one assumes at least Moffat thinks it’s true. And yes, as I said on Twitter, yesterday was a brutal day in the tech industry; with the Intel and Microsoft layoffs, the poor AMD results, the IBM sales screw-up and Sun starting previously announced layoffs, industry standard hardware is, as the IBM results say, susceptible to the economic downturn. I’d disagree with the IBM results statement, though, that industry standard hardware is “clearly more susceptible”.

My thoughts and best wishes go out to all those who found out yesterday that their jobs were riffed, surplused or rebalanced. Many of those, including 10 people I know personally, did not work in the x86 or, as IBM would have it, “industry standard” hardware business.

IBM Announces Plans to acquire Transitive

As is the way with these things, public comment is full of legal trip-wires, none of which I propose to activate. Suffice to say that today IBM announced plans to acquire Transitive, who provide the core technology for PowerVM Lx86.

We’ve also done due diligence on the patents and copyrights for the Intel SSE instruction set and will be looking at how we can upgrade the level of Intel support provided in Lx86.

Lx86 on Power update

I had an interesting discussion with an IBM Client IT Architect earlier today; his customer wants to run Windows on his IBM Power Systems server. It wasn’t a new discussion; I’d had it numerous times over the past 10 years or so, only in the old days the target platform was System z, aka the mainframe. Let the record show we even had formal meetings with Microsoft back in the late ’90s about porting their then HAL and Win32. Lots of reasons why it didn’t work out.

Only these days we think it’s a much more interesting proposition. Given that the drive to virtualize x86 servers, to consolidate from a management and energy-efficiency perspective, is now all the rage with many clients, the story doesn’t have to be sold; you just have to explain how much better at it IBM Power servers are. Now, of course, we don’t run Windows, and that’s where this conversation got interesting.

His client wanted to virtualize. They’d got caught up in some of the early gold rush to Linux and had replaced a bunch of Windows print and low-access file servers with Linux running on the same hardware; it worked well, job done. Roll forward three years and now the hardware is creaking at best. The client hadn’t moved any other apps to Linux and was centralizing around larger, virtualized x86 servers to save license costs for Windows.

I’ve no idea what they’ll do next, but my point was: it’s not Windows you need, it’s Linux. And, if you want to centralise around a large virtualized server, it’s not x86 but Power. You can either port the apps to Linux on Power, or, if as they say they don’t want to or can’t port, it’s more than likely they can run the apps with Lx86.

The latest release of PowerVM Lx86 is V1.3, and it is available now. We’ve added support for some new instructions and improved the performance of processing others. We also provide support for additional Linux operating systems:

  • SUSE Linux Enterprise Server 10 Service Pack 2 for Power
  • Red Hat Enterprise Linux 4 update 7 for Power

and we have simplified a number of installation-related activities, for example by embedding the PowerVM Lx86 installation in the IBM Installation Toolkit for Linux v3.1. Also:

  • Archiving a previously installed environment for backup or migration to other systems.
  • Automated, non-interactive installation, and installation from an archive.
  • SELinux is supported by PowerVM Lx86 when running on RHEL.

PowerVM Lx86 is provided with the PowerVM Express, Standard, and Enterprise Editions.

And so back to the question at hand: why not Windows? Technically there is no real reason. Yes, there are some minor architecture differences, but these can be handled via traps and then fixed in software or firmware. The real issue from my perspective is support. If your vendor/ISV won’t support their software running on Windows on the server, or at a minimum requires you to recreate the problem in a supported environment (and we all know how hard that can be), why would you do it?

This has always been the biggest problem when introducing any new emulated/virtualized environment. It’s not at all clear that this is resolved yet, even in x86 virtualized environments. Then there are those pesky license agreements you either sign or agree to by “clicking”; these normally restrict the environments you can run the software on. Legally, we are also restricted in what we can emulate; patent and copyright laws apply to hardware too. “Just Do It” might be a slogan that went a long way for Nike marketing, but it’s not something I’ve heard a lawyer advise.
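Going back to the earlier point that minor architecture differences can be handled via traps and then fixed in software or firmware, here is a minimal conceptual sketch in C. It assumes nothing about how Power firmware or any real hypervisor does this: raise(SIGILL) simply stands in for executing an instruction the hardware doesn’t support, and the handler hands control to a software fallback.

    /*
     * Minimal sketch of trap-and-emulate, not how any real firmware or
     * hypervisor implements it. raise(SIGILL) stands in for executing an
     * unsupported instruction; the handler unwinds to a software fallback.
     */
    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>

    static sigjmp_buf recover;

    static void on_sigill(int sig)
    {
        (void)sig;
        siglongjmp(recover, 1);   /* transfer control to the software fallback */
    }

    int main(void)
    {
        signal(SIGILL, on_sigill);

        if (sigsetjmp(recover, 1) == 0) {
            /* "Fast path": pretend to execute an instruction the host lacks. */
            raise(SIGILL);
            puts("executed natively");        /* never reached in this sketch */
        } else {
            /* Trap taken: provide the result in software instead. */
            puts("trapped; emulated in software");
        }
        return 0;
    }

Real trap handling has to decode the faulting instruction, emulate its effect and resume at the next one; this only shows the shape of the idea.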

Virtualization, The Recession and The Mainframe

Robin Bloor has posted an interesting entry on his “Have mac will blog” blog on the above subject. He got a few small things wrong; well, mostly he got all the facts wrong, but right from a populist historical-rewrite perspective. Of course I posted a comment, but as always made a few typos that I now cannot correct, so here is the corrected version (feel free to delete the original comments, Robin… or just make fun of me for the mistakes, but know that I was typing outdoors at the South Austin Trailer Park and Eatery with open-toe sandals on, and it’s cold tonight in Austin; geeky, I know!).

What do they say about a person who is always looking back to their successes? Well, in my case, it’s only because I can’t post about my future successes; they are considered too confidential for me even to leave slides with customers when I visit… 

VM revisited, enjoy:

 

Mark Cathcart (Should have) said,
on October 23rd, 2008 at 8:16 pm

Actually, Robin, while it’s true that the S/360 operating systems, and much of the 370 operating systems, were written in assembler, PL/S was already in use for some of the large and complex components.

It is also widely known that virtualization, as you know it on the mainframe today, was first introduced on the S/360 model-67. This was a “bastard child” of the S/360 processors that had virtual memory extensions. At that point, the precursor to VM/370 used on the S/360-67 was CP-67.

I think you’ll also find that IBM never demonstrated 40,000 Linux virtual machines on a single VM system; it was David Boyes of Sine Nomine, who also recently ported OpenSolaris to VM.

Also, there’s no such thing as pSeries Unix in the marketing nomenclature any more; it’s now Power Systems, whose virtualization supports AIX (aka IBM “Unix”), System i (or IBM i, to use the modern vernacular) and Linux on Power.

Wikipedia is a pretty decent source of information on mainframe virtualization, right up until VM/XA, where there are some things that need correcting; I just have not had the time yet.

Oh yeah, by the way: while 2TB of memory on a mainframe gives pretty impressive virtualization capabilities, my favorite anecdote, and it’s true because I did it, is from 1983, at Chemical Bank in New York. We virtualized a complete, production, high-availability, online credit card authorization system by adding just 4MB of memory, boosting the total system memory to a whopping 12MB! Try running any Intel hypervisor or operating system in just 12MB of memory these days; it’s a great example of how efficient mainframe virtualization is!

 


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and a member of the IBM Academy of Technology. I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
