Archive for the 'powervm' Category

Appliances – Good, bad or virtual?

So, in another prime example of “Why do analysts’ blogs make it so hard to have a conversation?”, Gordon Haff of Illuminata today tweeted a link to a new blog post of his on appliances. No comments allowed, no trackbacks provided.

He takes Chuck Hollis’s (EMC) post and opines on various positions in it. It’s not clear what the notion of “big appliance” is as Chuck uses it. Personally, I think he’s talking about solutions. Yes, I know it’s a fine line, but a large all-purpose data mining solution with its own storage, own server, own console, etc. is no more an appliance than a kitchen is. The kitchen will contain appliances but it is not one itself. If that’s not what Chuck is describing, then his post has some confusion; very few organizations will have a large number of these “solutions”.

On the generally accepted view of appliances, I think both Gordon and Chuck are being a little naive in thinking that all compute appliances can be made virtual and run on shared-resource machines.

While at IBM I spent a lot of time on, and learned some valuable lessons about, appliances. I was looking at the potential for the first generation of IBM-designed WebSphere DataPower appliances. At first it seemed to me, even 3 years ago, that turning them into a virtual appliance would be a good idea. However, I’d made the same mistake that Hollis and Haff make: assuming that the type of processing done in an appliance can be transparently replaced by the onward march of Moore’s Law on Intel and IBM Power processors.

The same can be said for most appliances I’ve looked at. They have a unique hardware design, which often includes numerous specialized processing functions, such as encryption, key management and even environmental monitoring. An appliance’s real value-add, though, is that it is designed with a very specific market opportunity in mind. That design will require complex workload analysis, reviewing the balance between general purpose compute, graphics, security, I/O and much more, and producing a balanced design and, most importantly, a complete user experience to support it. That’s often the key.

Some appliances offer the sort of hardware-based security and tamper protection that can never be replaced by general purpose machines.

Yes, Hollis and Haff make a fair point that these appliances need separate management, but the real point is that many of these appliances need NO management at all. You set them up, then run them. Because the workload is tested and integrated, the software rarely, if ever, fails. Since the hardware isn’t generally extensible (aka, as Chuck would have it, you are locked into what you buy), updating drivers and introducing incompatibility isn’t an issue as it is with most general purpose servers.

As for trading one headache for another, while it’s a valid point, my experience so far with live migration and pools of virtual servers, network switches, SAN setup etc. is that you are once again trading one headache for another. While in a limited fashion it’s fairly straightforward to do live migration of a virtual workload from one system to another, doing it at scale, which is what is required if you’ve reached the “headache” point that Chuck is positing, is far from simple.

Chuck closes his blog entry with:

Will we see a best-of-both-worlds approach in the future?

Well, I’d say that was more than likely; in fact it’s happening and has been for a while. The beauty of an appliance is that the end user is not exposed to the internal workings. They don’t have to worry about most configuration options and setup, management is often minimised or eliminated, and many appliances today offer “phone home” like features for upgrade and maintenance. I know; we build many of them here at Dell for our customers, including EMC, Google etc.

One direction that we are likely to see is that, in the same current form factor of an appliance, it will become a fault tolerant appliance by replicating key parts of the h/w, virtualizing the appliance and running multiple copies of the appliance workload within a single physical appliance, all once again delivering those workload- and deployment-specific features and functions. This in turn reduces the number of physical appliances a customer will need. So, the best of both worlds, although I suspect that’s not what Chuck was hinting at.

While there is definitely a market for virtual software stacks, complete application and OS instances, presuming that you can move all h/w appliances to this model is missing the point.

Let’s not forget, SANs are often just another form of appliance, as are TOR/EOR network switches and things like the Cisco Nexus. Haff says that appliances have been around since the late 1990s; well, at least as far as I can recall, in the category of “big appliances”, the IBM Parallel Query Server, which ran a customized mainframe DB2 workload and attached to an IBM S/390 Enterprise Server, was around in the early 1990s.

Before that many devices were in fact sold as appliances; they were just not called that, but by today’s definition that’s exactly what they were. My all-time favorite was the IBM 3704, part of the IBM 3705 communications controller family. The 3704 was all about integrated function and a unique user experience, with, at the time (1976), an almost space-age touch panel user interface.

IBM Big Box quandary

In another follow-up from EMC World, the last session I went to was “EMC System z, z/OS, z/Linux and z/VM”. I thought it might be useful to hear what people were doing in the mainframe space, although largely unrelated to my current job. It was almost 10 years to the day since I was at IBM writing the z/Linux strategy, hearing about early successes etc., and strangely, current EMC CTO Jeff Nick and I were engaged in vigorous debate about implementation details of z/Linux the night before we went and told SAP about IBM’s plans.

The EMC World session demonstrated that as much as things change, they stay the same. It also reminded me how borked the IT industry is, in that we mostly force customers to choose by pricing rather than function. 10-12 years ago z/Linux on the mainframe was all about giving customers new function, a new way to exploit the technology that they’d already invested in. It was of course also to further establish the mainframe’s role as a server consolidation platform through virtualization and high levels of utilization.(1)

What I heard were two conflicting and confusing stories, at least they should be for IBM. The first was a customer who was moving all his Oracle workloads from a large IBM Power Systems server to z/Linux on the mainframe. Why? Because the licensing on the IBM Power server was too expensive. Using z/Linux and the Integrated Facility for Linux (IFL) allows organizations to do a cost avoidance exercise. Processor capacity on the IFL doesn’t count towards the total installed general processor capacity, and hence doesn’t bump up the overall software licensing costs for all the other users. It’s a complex discussion and that wasn’t the purpose of this post, so I’ll leave it at that.
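Still, for a flavor of the arithmetic, here is a toy sketch of that cost-avoidance argument in Python. Every number, and the flat per-processor pricing model, is invented for illustration; real mainframe software pricing (MLC, MSU tiers and so on) is far more involved than this.

    # Toy model: software on the box is licensed per installed
    # general-purpose processor (CP); IFL capacity is excluded.
    GENERAL_CPS = 8         # general purpose processors installed
    COST_PER_CP = 100_000   # assumed annual cost per CP, per product
    PRODUCTS = 5            # products priced on total general capacity

    def license_cost(general_cps: int) -> int:
        """Cost of software priced on total general-purpose capacity."""
        return general_cps * COST_PER_CP * PRODUCTS

    # Option A: grow the Linux workload on 2 extra general CPs;
    # every licensed product is now priced on 10 CPs, not 8.
    print(f"Grow on general CPs: ${license_cost(GENERAL_CPS + 2):,}")  # $5,000,000

    # Option B: put the same workload on 2 IFLs. IFL capacity doesn't
    # count toward general capacity, so existing licenses are untouched.
    print(f"Grow on IFLs:        ${license_cost(GENERAL_CPS):,}")      # $4,000,000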

This might be considered a win for IBM, but actually it was a loss. It’s also a loss for the customer. IBM lost because the processing was being moved from its growth platform, IBM Power Systems, to the legacy System z. It’s good for z since it consolidates its hold in that organization, or probably does. Once the customer has done the migration and conversion, it will be interesting to see how they feel the performance compares. IBM often refers to the IFL and its close relatives, the zIIP and zAAP, as specialty engines, giving the impression that they perform faster than the normal System z processors. It’s largely an urban myth though, since these “specialty” engines really only deliver the same performance; they are just measured, monitored and priced differently.

The customer lost because they’ve spent time and effort to move from one architecture to another, really only to avoid software and server pricing issues. While the System z folks will argue the benefits of their platform, and I’m not about to “dis” them, the IBM Power server can pretty much deliver a good enough implementation as to make the difference largely irrelevant.

The second conflicting story I heard was from EMC themselves. The second main topic of the session was a discussion about moving some of the EMC Symmetrix products off the mainframe, as customers have reported that they are using too much mainframe capacity to run. The guys from EMC were thinking of moving the function of the products to commodity x86 processors and then linking those via high speed networking into the mainframe. This would move the function out of band and save mainframe processor cycles, which in turn would avoid an upgrade, which in turn would avoid bumping the software costs up for all users.

I was surprised how quickly I interjected and started talking about WLM SRM Enclaves and moving the EMC apps to run on z/Linux etc. This surely makes much more sense.

I was left, though, with a definite impression that there are still hard times ahead for IBM in large non-x86 virtualized servers. Not that they are not great pieces of engineering, they are. But getting to grips with software pricing once and for all should really be their prime focus, not a secondary or tertiary one. We were working towards pay-per-use once before; time to revisit, methinks.

(1) Spot the irony of this statement given the preceding “Nano, Nano” post!

Whither IBM, Sun and SPARC?

So the twitterati and blog space are alight with discussion that IBM is to buy Sun for $6.25 billion. The only way we’ll know if there is any truth to it is if it goes ahead; these rumors are never denied.

Everyone is of course focussed on the big questions, which are mostly around hardware synergies (servers, chips, storage) and Java. Since I don’t work at IBM I have no idea what’s going on or if there is any truth to this. There are some more interesting technical discussions to be had than those offered by people who generally think they have an informed opinion.

IBM bought Transitive in 2008; Transitive has some innovative emulation software, called QuickTransit. It allows binaries created and compiled on one platform to be run on another hardware platform without change or recompilation. There were some deficiencies, and you can read more in my terse summary blog post at the time of the acquisition announcement. Prior to acquisition, QuickTransit supported a number of platforms including SPARC and PowerMac, and had been licensed by a number of companies, including IBM.

I assume IBM is in the midst of their classic “blue rinse” process, and this explains the almost complete elimination of the Transitive web site(1); it’s nothing more sinister than that they are getting ready to re-launch under the IBM branding umbrella of PowerVM or some such.

Now, one could speculate that by acquiring Sun, IBM would achieve three things that would enhance their PowerVM strategy and build on their Transitive acquisition. First, they could reduce the platforms supported by QuickTransit and, over time, not renegotiate their licensing agreements with 3rd parties. This would give IBM “leverage” in offering binary emulation for the architectures previously supported on, say, only the Power and mainframe processor ranges.

Also, by further enhancing QuickTransit and driving it into the IBM microcode/firmware layer, thus making it more reliable and providing higher performance by reducing duplicate instruction handling, they could effectively eliminate future SPARC-based hardware, utilising the UNIX-based Power hardware and PowerVM virtualization. This would also have the effect of taking this level of emulation mainstream and negating much of the transient (pun intended) nature typically associated with this sort of technology.
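For those who haven’t met the technique, here is a deliberately tiny sketch of the core idea behind dynamic binary translation, QuickTransit-style: decode a guest instruction once, map it to an equivalent host operation, and cache the translation so hot code only pays the decode cost on first execution. The two-instruction toy “ISA” below is invented purely to show the shape of the loop; real translators work on basic blocks of actual machine code.

    translation_cache = {}  # guest instruction -> host-level callable

    def translate(insn: str):
        """Decode one guest instruction and cache its host equivalent."""
        if insn in translation_cache:
            return translation_cache[insn]
        op, dst, src = insn.split()
        if op == "LOAD":     # LOAD r1 5  ->  r1 = 5
            host = lambda regs: regs.update({dst: int(src)})
        elif op == "ADD":    # ADD r1 r2  ->  r1 = r1 + r2
            host = lambda regs: regs.update({dst: regs[dst] + regs[src]})
        else:
            raise ValueError(f"unsupported guest instruction: {op}")
        translation_cache[insn] = host
        return host

    def run(guest_program, regs):
        for insn in guest_program:
            translate(insn)(regs)   # cache hit after first execution
        return regs

    print(run(["LOAD r1 5", "LOAD r2 7", "ADD r1 r2"], {}))  # {'r1': 12, 'r2': 7}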

Finally, by acquiring Sun, IBM would eliminate any IP barriers that might occur due to the nature of the implementation of the SPARC instruction set.

That’s not to say that there are not any problems to overcome. First, as it currently stands the emulation tends to map calls from one OS into another, rather than operating at a pure architecture level. Pushing some of the emulation down into the firmware/microcode layer wouldn’t help emulation of CALL SOLARIS API with X, Y, even if it would emulate the machine architecture instructions that execute to do this. So, is IBM really committed to becoming a first class Solaris provider? I don’t see any proof of this since the earlier announcement; Solaris on Power is pretty non-existent. The alternative is that IBM is to use Transitive technology to map these calls into AIX, which is much more likely.

In economic downturns, big, cash-rich companies are kings. Looking back over the last 150 years there are plenty of examples of the big buying competitors and emerging from the downturn even more powerful. Ultimately I believe that the proprietary chip business is dead; it’s just a question of how long it takes to die, and whether regulators feel that allowing mergers and acquisitions in this space is good or bad for the economy and the economic recovery.

So, there’s a thought. As I said, I don’t work at IBM.

(1) It is mildly amusing to see that one of the few pages left extols the virtues of the Transitive technology by one Mendel Rosenblum, formerly Chief Scientist and co-founder of VMware.

What’s up with industry standard servers? – The IBM View

I finally had time to read through the IBM 4Q ’08 results yesterday evening. It is good to see that Power Systems saw revenue growth for the 10th straight quarter, and that virtualization and high utilization rates are driving sales of both mainframe and Power servers.

I was somewhat surprised though to see the significant decline (32%) in x86 server sales, System x in IBM nomenclature, put down to the strong demand for “virtualizing and consolidating workloads into more efficient platforms such as POWER and mainframe”.

I certainly didn’t see any significant spike in interest in Lx86 in the latter part of my time with IBM and, as far as I know, IBM still doesn’t have many reference customers for it, despite a lot of good technical work going into it. The focus from sales just wasn’t there. So that means customers were porting, rewriting or buying new applications, not something that would usually show up in quarterly sales swings, more as long term trends.

Seems to me the more likely reason behind IBM’s decline in x86 was simply, as Bob Moffat [IBM Senior Vice President and Group Executive, Systems & Technology Group] put it in his December ’08 interview with CRN’s ChannelWeb when referring to claims by HP’s Mark Hurd: “The stuff that Mr. Hurd said was going away kicked his ass: Z Series [mainframe hardware] outgrew anything that he sells. [IBM] Power [servers] outgrew anything that he sells. So he didn’t gain share despite the fact that we screwed up execution in [x86 Intel-based server] X Series.”

Moffat is quoted as saying IBM screwed up x86 execution multiple times, so one assumes at least Moffat thinks it’s true. And yes, as I said on twitter, yesterday was a brutal day in the tech industry; with the Intel and Microsoft layoffs, the poor AMD results, the IBM screw-up in sales and Sun starting previously announced layoffs, industry standard hardware is, as the IBM results say, susceptible to the economic downturn. I’d disagree with the IBM results statement, though, that industry standard hardware is “clearly more susceptible”.

My thoughts and best wishes go out to all those who found out yesterday that their jobs were riffed, surplused or rebalanced; many of those, including 10 people I know personally, did not work in the x86, or as IBM would have it “industry standard”, hardware business.

IBM Announces Plans to acquire Transitive

As is the way with these things, public comment is full of legal trip-wires, none of which I propose to activate. Suffice to say that today IBM announced plans to acquire Transitive, who provide the core technology for PowerVM Lx86.

We’ve also done due diligence on the patents and copyrights for the Intel SSE instruction set and will be looking at how we can upgrade the level of Intel support provided in Lx86.

Lx86 on Power update

I had an interesting discussion with an IBM Client IT Architect earlier today; his customer wants to run Windows on his IBM Power Systems server. It wasn’t a new discussion, I’d had it numerous times over the past 10 years or so, only in the old days the target platform was System z, aka the mainframe. Let the record show we even had formal meetings with Microsoft back in the late 90’s about porting their then HAL and WIN32. Lots of reasons why it didn’t work out.

Only these days we think it’s a much more interesting proposition. Given that the drive to virtualize x86 servers, to consolidate from a management and energy efficiency perspective, is now all the rage with many clients, the story doesn’t have to be sold; you just have to explain how much better at it IBM Power Servers are. Now of course we don’t run Windows, and that’s where this conversation got interesting.

His client wanted to virtualize. They’d got caught up in some of the early gold rush to Linux and had replaced a bunch of Windows print and low-access file servers with Linux running on the same hardware; it worked well, job done. Roll forward 3 years and now the hardware is creaking at best. The client hadn’t moved any other apps to Linux and was centralizing around larger, virtualized x86 servers to save license costs for Windows.

I’ve no idea what they’ll do next, but my point was, it’s not Windows you need, it’s Linux. And, if you want to centralise around a large virtualized server, it’s not x86 but Power. You can either port the apps to Linux on Power or, if as you say they don’t want to/can’t port, it’s more than likely they can run the apps with Lx86.

The latest release of PowerVM Lx86 is V1.3, and it is available now. We’ve added support for some new instructions and improved the performance in processing other instructions. We provide support for additional Linux operating systems:

  • SUSE Linux Enterprise Server 10 Service Pack 2 for Power
  • Red Hat Enterprise Linux 4 update 7 for Power

and have simplified a number of installation-related activities, for example embedding the PowerVM Lx86 installation with the IBM Installation Toolkit for Linux v3.1. Also:

  • Archiving a previously installed environment for backup or migration to other systems.
  • Automated installation for non-interactive installation and installation from an archive.
  • SELinux is supported by PowerVM Lx86 when running on RHEL.

PowerVM Lx86 is supplied with the PowerVM Express, Standard, and Enterprise Editions.

And so back to the question in hand: why not Windows? Technically there is no real reason. Yes, there are some minor architecture differences, but these can be handled via traps and then fixed up in software or firmware. The real issue from my perspective is support. If your vendor/ISV won’t support their software running on Windows on Power, or at a minimum requires you to recreate the problem in a supported environment (and we all know how hard that can be), why would you do it?
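To make that trap-and-fix-up point concrete, here is a conceptual sketch. On real hardware, the unsupported instruction raises an illegal-instruction trap and a firmware or hypervisor handler emulates it before resuming; in this toy, a Python exception plays the role of the trap, and every name is invented for illustration.

    class IllegalInstruction(Exception):
        """Stands in for the hardware trap on an unknown opcode."""
        def __init__(self, insn):
            self.insn = insn

    def execute_native(insn, state):
        """Pretend CPU: only knows ADD, traps on anything else."""
        if insn[0] != "ADD":
            raise IllegalInstruction(insn)
        state["acc"] += insn[1]

    def emulate(insn, state):
        """Software/firmware fix-up for instructions the 'CPU' lacks."""
        op, operand = insn
        if op != "MUL":
            raise NotImplementedError(op)
        state["acc"] *= operand

    state = {"acc": 2}
    for insn in [("ADD", 3), ("MUL", 4)]:
        try:
            execute_native(insn, state)
        except IllegalInstruction as trap:  # the "trap" path
            emulate(trap.insn, state)
    print(state["acc"])  # (2 + 3) * 4 = 20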

This has always been the biggest problem when introducing any new emulated/virtualized environment. It’s not at all clear that this is resolved yet even in x86 virtualized environments. Then there are those pesky license agreements you either sign, or agree to by “clicking”. These normally restrict the environments that you run the software on. Legally, we are also restricted in what we can emulate; patents and copyright laws apply across hardware too. “Just Do It” might be a slogan that went a long way for Nike marketing, but it’s not something I’ve heard a lawyer advise.

Virtualization, The Recession and The Mainframe

Robin Bloor has posted an interesting entry on his “Have mac will blog” blog on the above subject. He got a few small things wrong; well, mostly he got all the facts wrong, but right from a populist historical rewrite perspective. Of course I posted a comment, but as always made a few typos that I now cannot correct, so here is the corrected version (feel free to delete the original comments Robin… or just make fun of me for the mistakes, but know I was typing outdoors at the South Austin Trailer Park and Eatery, with open-toe sandals on, and it’s cold tonight in Austin; geeky I know!)

What do they say about a person who is always looking back to their successes? Well, in my case, it’s only because I can’t post on my future successes; they are considered too confidential for me to even leave slides with customers when I visit…

VM revisited, enjoy:

 

Mark Cathcart (Should have) said,
on October 23rd, 2008 at 8:16 pm

Actually Robin, while it’s true that the S/360 operating systems were written in Assembler, and much of the 370 operating systems, PL/S was already in use for some of the large and complex components.

It is also widely known that virtualization, as you know it on the mainframe today, was first introduced on the S/360 model-67. This was a “bastard child” of the S/360 processors that had virtual memory extensions. At that point, the precursor to VM/370 used on the S/360-67 was CP-67.

I think you’ll also find that IBM never demonstrated 40,000 Linux virtual machines on a single VM system, it was David Boyes of Sine Nomine, who also recently ported Open Solaris to VM.

Also, there’s no such thing as pSeries Unix in the marketing nomenclature any more; it’s now Power Systems, whose virtualization now supports AIX aka IBM “Unix”, System i or IBM i to use the modern vernacular, and Linux on Power.

Wikipedia is a pretty decent source for information on mainframe virtualization, right up until VM/XA, where there are some things that need correcting; I just have not had the time yet.

Oh yeah, by the way. While 2TB of memory on a mainframe gives pretty impressive virtualization capabilities, my favorite anecdote, and it’s true because I did it, was back in 1983 at Chemical Bank in New York. We virtualized a complete, production, high availability, online credit card authorization system by adding just 4MB of memory, boosting the total system memory to a whopping 12MB! Try running any Intel hypervisor or operating system in just 12MB of memory these days; a great example of how efficient mainframe virtualization is!

 

2008 IBM Power Systems Technical University featuring AIX and Linux

Yep, it’s a mouthful. I’ve just been booking some events and presentations for later in the year, and this one, which I had initially hoped to attend, clashes with another, so now I can’t.

However, in case the snappy new title passed you by, it is still the excellent IBM technical conference it used to be when it was the IBM System p, AIX and Linux Technical University. It runs 4.5 days, from 8-12 September in Chicago, and offers an agenda that includes more than 150 knowledge-packed sessions and hands-on training delivered by top IBM developers and Power Systems experts.

Since the “IBM i” conference is running alongside, you can choose to attend sessions in either event. Sadly I couldn’t find a link for the conference abstracts, but there is more detail online here.

RedMonk IT Management PodCast #10 thoughts

I’ve been working on slides this afternoon for a couple of projects, and wondering why producing slides hasn’t really gotten any easier in the 20 years since Freelance under DOS. Why is it I’ve got a 22" flatscreen monitor as an extended desktop, and I’m using a trackpoint and mouse to move things around, waiting for Windows to move pixel by pixel…

Anyway, I clicked on the LibSyn link for the RedMonk IT Management Podcast #10 from back in April for some background noise. In the first 20 minutes or so, Cote and John get into some interesting discussion about Power Systems, especially in relation to some projects John’s working on. As they joke and laugh their way through an easy discussion, they get a bit confused about naming and training.

First, the servers are called IBM Power Systems, or Power. The servers span from blades to high-end scalable monster servers. They all use RISC chips based on the PowerPC architecture and instruction set. Formerly there had been two versions of the same servers, System p and System i.

Three operating systems can run natively on Power Systems: AIX, IBM i (formerly i5/OS and OS/400) and Linux. You can run these concurrently in any combination using the native virtualization, PowerVM. Amongst the features of PowerVM is the ability to create Logical Partitions. These are a hardware-implemented and hardware-protected Type-1 hypervisor. So, it’s like VMware, but not at all. You can get more on this in this white paper. For a longer read, see the IBM Systems Software Information Center.

John then discussed the need for training and the complexity of setting up a Power System. Sure, if you want to run a highly flexible, dynamically configurable, highly virtualized server, then you need to do training. Look at the massive market for Microsoft Windows, VMware and Cisco Networking certifications. Is there any question that running complex systems would require similar skills and training?

Of course, John would say that though, as someone who makes a living doing training and consulting, and obviously has a great deal of experience monitoring and managing systems.

However, many of our customers don’t have such a need; they do trust the tools and will configure and run systems without 4-6 months of training. Our autonomic computing may not have achieved everything we envisaged, but it has made a significant difference. You can use the System Config tool at order time, either alone or with your business partner or IBMer, do the definition for the system, and have it installed, provisioned and up and running within half a day.

When I first started in Power Systems, I didn’t take any classes, and was not proficient in AIX or anything else Power related. I was able to get a server up and running from scratch and get WebSphere running business applications having read a couple of redbooks. Monitoring and debugging would have taken more time, another book. Clearly becoming an expert always takes longer; see the Wikipedia definition of expert.

ps. John, if you drop out of the sky from 25k ft, it doesn’t matter if the flight was a mile or a thousand miles… you’ll hit the ground at the same speed 😉

pps. Cote I assume your exciting editing session on episode 11 wasn’t so exciting…

ppps. 15 minutes on travel on Episode #11; time for a RedMonk Travel Podcast

On Power Systems and Security

One of the topics I’m trying to close on at the moment is Power Systems security. I have my views on where I think we need to be, where the emerging technology challenges are, what the industry drivers are (yours and ours), and the competitive pressures.

If you want to comment or email me with your thoughts on Power Systems security, I’d like to hear them. What’s important, what’s not? Of course I’m interested in OS-related issues: AIX, i, or Linux on Power. I’m also interested in requirements that span all three, that need to apply across hardware and PowerVM.

Interested in mobility? Want your keys to move between systems with you? Not much good if you move the system but can’t read the data because you don’t have key authority. Is encryption in your Power Systems future? Is it OK to have it in software only, or as an offload engine, or does it need to run faster via acceleration? Do you have numbers, calculations on how many, what key sizes etc.?

Let’s be clear though, we have plans and implementations in all these areas. What I’m interested in are your thoughts and requirements.

Appliances, Stacks and software virtual machines

A couple of things from the “Monkmaster” this morning piqued my interest and deserved a post rather than a comment. First up was James’s post on “your Sons IBM“. James discusses a recent theme of his around stackless stacks, and simplicity. Next up came a tweet link on CohesiveFT and their elastic server on demand.

These are very timely. I’ve been working on an effort here in Power Systems for the past couple of months, with my ATSM, Meghna Paruthi, on our appliance strategy. These are, as always with me, one layer lower than the stuff James blogs on; I deal with plumbing. It’s a theme and topic I’ll return to a few times in the coming weeks, as I’m just about to wrap up the effort. We are currently looking for some Independent Software Vendors (ISVs) who already package their offerings in VMware or Microsoft virtual appliance formats and either would like to do something similar for Power Systems, or alternatively have tried it and don’t think it would work for Power Systems.

Simple, easy to use software appliances which can be quickly and easily deployed into PowerVM Logical Partitions have a lot of promise. I’d like to have a marketplace of stackless, semi-or-total black box systems that can be deployed easily and quickly into a partition and use existing capacity, or dynamic capacity upgrade on demand, to get the equivalent of cloud computing within a Power System. Given we can already run circa 200 logical partitions on a single machine, and are planning something in the region of 4x that for the p7-based servers with PowerVM, we need to do something about the infrastructure for creating, packaging, servicing, updating and managing them.

We’ve currently got six-sorta-appliance projects in flight, one related to future datacenters, one with WebSphere XD, one with DB2, a couple around security and some ideas on entry level soft appliances.

So far, OVF wrappers around the Network Installation Manager, aka NIM, look like the way to go for AIX-based appliances, with similar processes for i5/OS and Linux on Power appliances; a sketch of the idea follows below. However, there are a number of related issues about packaging, licensing and inter- and intra-appliance communication that I’m looking for some input on. So, if you are an ISV, or a startup, or even an independent contractor who is looking at how to package software for Power Systems, please feel free to post here, or email; I’d love to engage.
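To give a feel for the OVF-around-NIM idea, here is a minimal sketch of a descriptor generator: it points at the backup image NIM would deploy and records basic resource requirements for the target LPAR. This is an illustrative subset of a real OVF envelope (no namespaces, no schema validation), and the element, file and appliance names are all invented.

    import xml.etree.ElementTree as ET

    def build_ovf_descriptor(image_file: str, name: str, cpus: int, mem_mb: int) -> str:
        """Build a toy OVF-style descriptor wrapping a NIM install image."""
        envelope = ET.Element("Envelope")
        refs = ET.SubElement(envelope, "References")
        ET.SubElement(refs, "File", {"id": "nim-image", "href": image_file})
        vs = ET.SubElement(envelope, "VirtualSystem", {"id": name})
        hw = ET.SubElement(vs, "VirtualHardwareSection")
        ET.SubElement(hw, "Item", {"resourceType": "Processor", "quantity": str(cpus)})
        ET.SubElement(hw, "Item", {"resourceType": "Memory", "quantity": str(mem_mb), "units": "MB"})
        return ET.tostring(envelope, encoding="unicode")

    # A hypothetical AIX appliance built from a NIM mksysb backup image:
    print(build_ovf_descriptor("db2-appliance.mksysb", "db2-appliance", cpus=2, mem_mb=4096))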

Redbooks on PowerVM and PowerVM Lx86

New Redbooks covering some of the key announcements from this week:

  1. PowerVM Virtualization on IBM System p Introduction and Configuration Fourth Edition – Draft (thanks to Monte and Scott for fixing up the title 🙂 ).
  2. PowerVM Virtualization on IBM System p Managing and Monitoring – currently a draft.
  3. Getting started with PowerVM Lx86
  4. i5/OS Program Conversion: Getting Ready for i5/OS V6R1 – draft

On PowerVM, Lx86 and virtualization of Windows

Yesterday saw the announcement of a re-packaging, re-branding and new technology drive for POWER™ Virtualization, now PowerVM™. You can see the full announcement here. It is good to be back working on VM, sorta.

Over on virtualization.info, Alessandro Perilli says we are “missing the market in any case because its platform is unable to virtualize Windows operating systems”. I say not.

POWER isn’t Windows, and it’s not x86 hardware. We scale much, much higher, perform much better, and generally offer high availability features and function, as standard or as an add-on, way ahead of Windows. Running Windows on PowerVM and Power hardware would pick up some of the reliability features of the hardware transparently, and the workload consolidation potential would be very attractive. What it comes down to, though, is what it would take to virtualize Windows on PowerVM.

We could do it. We could add either hardware simulation or emulation, or more likely translation, that would allow the x86 architecture or Windows itself to be supported on PowerVM. There would be ongoing issues with the wide variety of h/w drivers and related issues, but let’s put those aside for now.

We could have gone down a similar route to the old Bristol Technologies WIND/U WIN32 licensing and technology route, porting and running a subset of WIN32, or even via Mono or .NET. We might even call it PowerVM Wx86. But just reverse engineering MS technology is not the right idea from either a technology or a business perspective.

So technically it could be done, one way or another. The real question though is the same as in the discussion about supporting Solaris on Power. Yes, it would be great to have the mother of all binary or source compatibility virtualization platforms. However, as always the real issue is not whether it could be done, but how you would support your applications. After all, isn’t it about “applications, applications, applications“?

And there’s the rub. If you wanted to run middleware and x86 binary applications on POWER hardware, then you’d need support for the binaries. For middleware, most of the industry’s leading middleware is already available on at least one of AIX, i5/OS or Linux on Power; some is available on all three. What would software vendors prefer to do in this case? Would they be asked to support an existing binary stack on Windows on PowerVM, or would they prefer to just continue to support the native middleware stacks that benefit directly from the Power features?

Most would rather go with the native software and not incur the complexities and additional support costs of running in an emulated or simulated environment. The same is true of most customer applications, especially Windows applications for which the customer doesn’t have easy or ready access to the source code.

In the x86 market, the same isn’t true; there’s less risk supporting virtualization such as Xen or VMware.

The same isn’t true with PowerVM Lx86 applications. First, because of the close affinity between Linux and Linux on POWER: there are already existing Linux on Power distributions, the source code is available, and most system calls are transparent and can be easily mapped into Linux on POWER. Second, drivers, device support etc. are handled natively within either the POWER hardware, PowerVM, or the Linux operating system running in the PowerVM partition. Thirdly, IBM has worked with SuSE and Red Hat to make the x86 runtime libraries available on Linux on POWER. Finally, many middleware packages already run on Linux on POWER, or are available as open source and can be compiled to run on Linux on POWER.
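A toy sketch of that syscall-mapping claim: under something like Lx86, most x86 Linux system calls have a direct Linux-on-Power counterpart, so the translation layer largely renames or passes them straight through to the native kernel, with occasional argument or structure fix-ups. The tables and names below are invented for illustration; real syscall tables and ABI differences are messier.

    # Native Linux-on-Power "kernel" services (stand-ins).
    NATIVE = {
        "read":  lambda fd, n: f"read {n} bytes from fd {fd}",
        "write": lambda fd, buf: f"wrote {len(buf)} bytes to fd {fd}",
        "stat":  lambda path: f"stat of {path}",
    }

    # x86 guest syscall -> (native syscall, argument fix-up).
    SYSCALL_MAP = {
        "read":   ("read",  lambda args: args),  # pass-through
        "write":  ("write", lambda args: args),  # pass-through
        "stat64": ("stat",  lambda args: args),  # renamed; real fix-ups adjust structs
    }

    def lx86_syscall(name, args):
        """Forward a guest syscall to its native equivalent."""
        native_name, fixup = SYSCALL_MAP[name]
        return NATIVE[native_name](*fixup(args))

    print(lx86_syscall("read", (0, 512)))  # read 512 bytes from fd 0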

All of which makes it a very different value proposition. Using NAS or SAN storage, it is perfectly possible to run the same binaries on x86 and PowerVM, currently or as needed. The complications of doing this for Windows, the software stack required, as well as the legal conditions for running Windows binaries, just don’t make it worth the effort.

Although not identical, many of the same issues arise running Solaris, either Solaris x86 or the OpenSolaris PowerPC port. So, that’s a wrap for now; still many interesting things going on here in Austin. I really will get back to the topic of Amazon, EC2 and cloud computing; memo to self.


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and member of the IBM Academy of Technology. I am a Fellow of the British Computer Society (bcs.org). I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
