Archive for the 'lop' Category

Lx86 on Power update

I had an interesting discussion with an IBM Client IT Architect earlier today; his customer wants to run Windows on his IBM Power Systems server. It wasn’t a new discussion; I’d had it numerous times over the past 10 years or so, only in the old days the target platform was System z, aka the mainframe. Let the record show we even had formal meetings with Microsoft back in the late ’90s about porting their then-HAL and Win32. There were lots of reasons why it didn’t work out.

Only these days we think it’s a much more interesting proposition. Given that the drive to virtualize x86 servers, to consolidate from a management and energy-efficiency perspective, is now all the rage with many clients, the story doesn’t have to be sold; you just have to explain how much better IBM Power servers are at it. Now of course we don’t run Windows, and that’s where this conversation got interesting.

His client wanted to virtualize. They’d got caught up in the early gold rush to Linux and had replaced a bunch of Windows print and low-access file servers with Linux running on the same hardware. It worked well, job done. Roll forward three years and now the hardware is creaking at best. The client hadn’t moved any other apps to Linux and was centralizing around larger, virtualized x86 servers to save Windows license costs.

I’ve no idea what they’ll do next, but my point was, it’s not Windows you need, it’s Linux. And, if you want to centralise around a large virtualized server, it’s not x86 but Power. You can either port the apps to Linux on Power, or if, as they say, they don’t want to or can’t port, it’s more than likely they can run the apps with Lx86.

The latest release of PowerVM Lx86 is V1.3, and it is available now. We’ve added support for some new instructions, improved the performance of processing others, and added support for additional Linux operating systems:

  • SUSE Linux Enterprise Server 10 Service Pack 2 for Power
  • Red Hat Enterprise Linux 4 update 7 for Power

and have simplified a number of installation-related activities, for example embedding the PowerVM Lx86 installation in the IBM Installation Toolkit for Linux v3.1. Also:

  • Archiving a previously installed environment for backup or migration to other systems.
  • Automated installation, for non-interactive installs and installs from an archive.
  • SELinux support when running on RHEL.

PowerVM Lx86 is provided with PowerVM Express, Standard, and Enterprise Editions.

And so back to the question in hand: why not Windows? Technically there is no real reason. Yes, there are some minor architecture differences, but these can be handled via traps and then fixed in software or firmware. The real issue from my perspective is support. If your vendor/ISV won’t support their software running on Windows on a Power server, or at a minimum requires you to recreate the problem in a supported environment (and we all know how hard that can be), why would you do it?

This has always been the biggest problem when introducing any new emulated/virtualized environment. It’s not at all clear that this is resolved yet even on x86 virtualized environments. Then there are those pesky license agreements you either sign, or agree to by “clicking”. These normally restrict the environments that you run the software on. Legally, we are also restricted in what we can emulate, patents and copyright laws apply across hardware too. Just Do It – might be a slogan that went a long way for Nike Marketing, but that’s not something I’ve heard a lawyer advise.

Virtualization, The Recession and The Mainframe

Robin Bloor has posted an interesting entry on his “Have mac will blog” blog on the above subject. He got a few small things wrong; well, mostly, he got all the facts wrong, but right from a populist historical-rewrite perspective. Of course I posted a comment, but as always made a few typos that I now cannot correct, so here is the corrected version (feel free to delete the original comments Robin… or just make fun of me for the mistakes, but know I was typing outdoors at the South Austin Trailer Park and Eatery, with open-toe sandals on, and it’s cold tonight in Austin. Geeky, I know!)

What do they say about a person who is always looking back to their successes? Well, in my case, it’s only because I can’t post on my future successes; they are considered too confidential for me to even leave slides with customers when I visit…

VM revisited, enjoy:


Mark Cathcart (Should have) said,
on October 23rd, 2008 at 8:16 pm

Actually Robin, while it’s true that the S/360 operating systems were written in Assembler, and much of the 370 operating systems, PL/S was already in use for some of the large and complex components.

It is also widely known that virtualization, as you know it on the mainframe today, was first introduced on the S/360 model-67. This was a “bastard child” of the S/360 processors that had virtual memory extensions. At that point, the precursor to VM/370 used on the S/360-67 was CP-67.

I think you’ll also find that IBM never demonstrated 40,000 Linux virtual machines on a single VM system, it was David Boyes of Sine Nomine, who also recently ported Open Solaris to VM.

Also, there’s no such thing as pSeries Unix in the marketing nomenclature any more; it’s now Power Systems, whose virtualization now supports AIX aka IBM “Unix”, System i or IBM i to use the modern vernacular, and Linux on Power.

Wikipedia is a pretty decent source for information on mainframe virtualization, right up until VM/XA where there are some things that need correcting, I just have not had the time yet.

Oh yeah, by the way. While 2TB of memory on a mainframe gives pretty impressive virtualization capabilities, my favorite anecdote, and it’s true because I did it, was back in 1983, at Chemical Bank in New York. We virtualized a complete, production, high-availability, online credit card authorization system by adding just 4MB of memory, boosting the total system memory to a whopping 12MB! Try running any Intel hypervisor or operating system on just 12MB of memory these days; a great example of how efficient mainframe virtualization is!


2008 IBM Power Systems Technical University featuring AIX and Linux

Yep, it’s a mouthful. I’ve just been booking some events and presentations for later in the year, and this one, which I had initially hoped to attend, clashes with another, so now I can’t.

However, in case the snappy new title passed you by, it is still the excellent IBM technical conference it used to be when it was the IBM System p, AIX and Linux Technical University. It runs 4.5 days, from 8 – 12 September in Chicago, and offers an agenda that includes more than 150 knowledge-packed sessions and hands-on training delivered by top IBM developers and Power Systems experts.

Since the “IBM i” conference is running alongside, you can choose to attend sessions in either event. Sadly I couldn’t find a link for the conference abstracts, but there is more detail online here.

On Power Systems and Security

One of the topics I’m trying to close on at the moment is Power Systems Security. I have my views on where I think we need to be, where the emerging technology challenges are, what the industry drivers are (yours and ours), and the competitive pressures.

If you want to comment or email me with your thoughts on Power Systems security, I’d like to hear. What’s important, what’s not?  Of course I’m interested in OS related issues, AIX, i, or Linux on Power. I’m also interested in requirements that span all three, that need to apply across hardware and PowerVM.

Interested in mobility? Want your keys to move between systems with you? Not much good if you move the system but can’t read the data because you don’t have key authority. Is encryption in your Power Systems future? Is it OK to have it in software only, to have it as an offload engine, or does it need to run faster via acceleration? Do you have numbers or calculations on how many, what key sizes, etc.?

Let’s be clear though, we have plans and implementations in all these areas. What I’m interested in are your thoughts and requirements.

IBM Software and Power Systems Roadshow

In September and October 2007, the IBM Software Group Competitive Project Office put on a short series of roadshows in North America and India to show some of the best aspects of IBM middleware running on Power Systems. It’s not an out-and-out marketing event, but one designed and presented by some solid technical folks.

They’ve announced the first set of dates for 2008, and the events start next week. Strangely the workshop is listed on the Software/Linux web page but definitely covers AIX and Linux implementations. Here are the dates and locations, hope some of you new to Power or interested in IBM Middleware exploitation on Power can make it along.

Tampa, FL February 21, 2008
Charlotte, NC February 26, 2008
Philadelphia, PA February 28, 2008
Mohegan Sun, CT March 6, 2008
Hazelwood, MO March 11, 2008
Minneapolis, MN March 13, 2008

On PowerVM, Lx86 and virtualization of Windows

Yesterday saw the announcement of a re-packaging, re-branding and new technology drive for POWER™ Virtualization, now PowerVM™. You can see the full announcement here. It is good to be back working on VM, sorta.

Over on his blog, Alessandro Perilli says we are “missing the market in any case because its platform is unable to virtualize Windows operating systems”. I say not.

POWER isn’t Windows; it’s not x86 hardware. We scale much, much higher, perform much better, and generally offer high-availability features and function, as standard or as add-ons, way ahead of Windows. Running Windows on PowerVM and Power hardware would pick up some of the reliability features of the hardware transparently, and the workload consolidation potential would be very attractive. What it comes down to, though, is what it would take to virtualize Windows on PowerVM.

We could do it. We could add either hardware simulation or emulation, or more likely translation, that would allow the x86 architecture or Windows itself to be supported on PowerVM. There would be ongoing issues with the wide variety of hardware drivers and related issues, but let’s put those aside for now.

We could have gone down a route similar to the old Bristol Technologies WIND/U WIN32 licensing and technology route, porting and running a subset of WIN32, or even gone via Mono or .NET. We might even call it PowerVM Wx86. But just reverse-engineering MS technology is not the right idea from either a technology or a business perspective.

So technically it could be done one way or another. The real question, though, is the same as in the discussion about supporting Solaris on Power. Yes, it would be great to have the mother of all binary or source compatibility virtualization platforms. However, as always, the real issue is not whether it could be done, but how you would support your applications. After all, isn’t it about “applications, applications, applications“?

And there’s the rub. If you wanted to run middleware and x86 binary applications on POWER hardware, then you’d need support for the binaries. For middleware, most of the industry’s leading middleware is already available on at least one of AIX, i5/OS or Linux on Power; some is available on all three. What would software vendors prefer to do in this case? Would they be asked to support an existing binary stack on Windows on PowerVM, or would they prefer to just continue to support the native middleware stacks that benefit directly from the Power features?

Most would rather go with the native software and not incur the complexities and additional support costs of running in an emulated or simulated environment. The same is true of most customer applications, especially those for which the customer doesn’t have easy or ready access to the source code for Windows applications.

In the x86 market the same isn’t true; there’s less risk supporting virtualization such as Xen or VMware.

The same isn’t true with PowerVM Lx86 applications. First, because of the close affinity between Linux and Linux on POWER: there are already existing Linux on Power distributions, the source code is available, and most system calls are transparent and can be easily mapped into Linux on POWER. Second, drivers, device support, etc. are handled natively within the POWER hardware, PowerVM, or the Linux operating system running in the PowerVM partition. Third, IBM has worked with SUSE and Red Hat to make the x86 runtime libraries available on Linux on POWER. Finally, much middleware already runs on Linux on POWER, or is available as open source and can be compiled to run on Linux on POWER.

All of which makes it a very different value proposition. Using NAS or SAN storage, it is perfectly possible to run the same binaries on x86 and PowerVM as needed. For Windows binaries, the complications of doing this, the software stack required, and the legal conditions just make it not worth the effort.

Although not identical, many of the same issues arise running Solaris, either Solaris x86 or the OpenSolaris PowerPC port. So, that’s a wrap for now; still many interesting things going on here in Austin. I really will get back to the topic of Amazon, EC2 and cloud computing. Memo to self.

Whither the Hardware Management Console

So, most larger IBM server users have a Hardware Management Console. The word console makes these boxes seem like they just provide a GUI into the inner workings of the IBM Servers, but actually they provide a huge amount of additional function and the systems wouldn’t be usable without them. More correctly they should have probably been called the IBM Server server.

As I’ve alluded to in a couple of prior posts, and on Twitter, I’ve been heavily involved in looking again at the role of Platform Management, that is, the configuration, deployment, operation and monitoring of one or more homogeneous System p servers running in blade or rack-mounted systems. Yes, I understand that most organisations have other servers and want to manage them as well, and the work we are doing will definitely allow the System p Platform Management to be extended and driven by external systems management tools such as IBM Systems Director, Tivoli Systems Management, BMC, Computer Associates, etc. This will be through both existing and emerging industry standards (see blog).

However, what I’m focussed on short term is the role of the various tools within System p and AIX, but also supporting Linux on Power, i5/OS, PAVE Linux x86 binaries, etc.

As part of that, it seems like re-missioning the HMC might be a good idea. On some of our systems we have a feature called the Integrated Virtualization Manager (IVM), which provides some of the function of the HMC but without the requirement to run an external “console”, aka the server server, as it runs in a logical partition on the server itself.

I’m interested in any observations and comments on these two things. Would you want to see more function in a re-missioned HMC, or does the function belong internally to the system, say running in a logical partition like the IVM? What do you see as the pros and cons of each?

Over the past 6-months I’ve had a lot of feedback on both of these, I’ll incorporate any comments with those and hopefully towards the end of July be able to publish at least an outline or high-level design of where our thinking is.

See you in about 500 miles of cycling and a long spa weekend’s time!

POWER6, Workload Partitions and Mobility

In the last month, I should have written a number of blog posts on our latest product announcements, instead I’ve been really busy. I have been spending most of my time on a root and branch review of what I’m calling platform management. I was also in Orlando for the IBM IMPACT 2007 conference briefing press and analysts on the announcements, but spent much of my time on a couple of key POWER7 topics.

More on that platform management later, but for the record here is a quick summary of some key parts of the announcement, and it’s a doozy(1).


This week saw the much-speculated ECLIPZ, or POWER6™, more correctly the IBM System p™ 570 server, based on innovative IBM POWER6™ processor technology. The “server” can go from a 2- to 16-core configuration, each core supporting two threads.

Interestingly one of the neat features is the ability to switch the processor into low power mode while still running. In most cases this will save up to 50% of the electricity it consumes with little effect or degradation seen by individual applications.

Decimal Floating Point

Much misunderstood: while the focus has been on the power, energy consumption, throughput and stunning benchmark results, one of the more interesting and valuable features is the addition of Decimal Floating Point instructions to the Power Architecture.

This is the first time IBM has put base-10 decimal floating-point instructions into hardware, and it is the result of probably four or five years’ concerted vision and effort by IBM Fellow, and fellow Brit, Mike Cowlishaw. Mike realised that as computer architectures moved to 64-bit and possibly larger, commercial business programs would potentially suffer inconsistent results and, at best, rounding errors in decimal arithmetic during the conversion from decimal to binary and back again.

This was especially true in business languages such as COBOL, his own procedures language REXX, and others. Mike was able to marshal a platform-neutral standard implementation through various standards organisations, and with the launch of POWER6 it is now in hardware; a number of compilers, middleware packages and databases have been updated to use these new instructions. Oh yes, because the new instructions are in pure hardware they also run faster and improve the performance of applications that use them, but that’s a secondary benefit!
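The rounding problem Mike identified is easy to demonstrate. Here’s a small Python sketch contrasting binary and decimal arithmetic; Python’s `decimal` module is a software implementation of Cowlishaw’s decimal arithmetic specification, so it shows the behaviour the new hardware instructions provide natively:

```python
from decimal import Decimal

# 0.10 has no exact binary representation, so repeated binary
# floating-point addition accumulates error -- a real problem
# for monetary arithmetic in COBOL-style business programs.
binary_total = sum([0.10] * 3)
decimal_total = sum([Decimal("0.10")] * 3)

print(binary_total)   # 0.30000000000000004
print(decimal_total)  # 0.30

assert binary_total != 0.30
assert decimal_total == Decimal("0.30")
```

The `decimal` version gives the exact base-10 answer every time; the hardware instructions simply make that correctness cheap.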


Live Partition Mobility will migrate a running partition and its workload to another partition on another IBM System p server. This support will include AIX and Linux, and will be extended to i5/OS in time. It has some great applications: not just as an availability feature for when you need to do hardware maintenance or restarts, but also potentially as a key feature in organisations that are heavily focussed on energy management.

Say you’ve got three Power servers; over the weekend and during holiday periods the workloads on two of them are very light. Instead of shutting the systems down, or building a complex cluster-based application where you can shut down nodes within the system, you simply use Live Partition Mobility to migrate all the workloads running on the two lightly used servers onto the third. Once complete, usually within 15 minutes, you can shut down and power off the two servers you are no longer using.

Before workload usage returns to normal, you just power on both servers and use Live Partition Mobility to move the workloads back. No system outages, no file system recoveries.
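The weekend scenario above boils down to a simple decision: if the combined load fits on the busiest server, evacuate the others and power them off. This is an illustrative Python sketch only, not an IBM tool; the server names, utilisation figures and the `plan_consolidation` helper are all hypothetical:

```python
def plan_consolidation(utilization, headroom=1.0):
    """Given {server: fraction of capacity in use}, decide whether the
    lightly used servers can be evacuated onto the busiest one.

    Returns (target, servers_to_evacuate); (None, []) if it won't fit."""
    target = max(utilization, key=utilization.get)
    others = [s for s in utilization if s != target]
    if sum(utilization.values()) <= headroom:  # combined load fits on one box
        return target, others                  # migrate 'others', power them off
    return None, []

# Hypothetical weekend utilisation for three System p 570s
target, evacuate = plan_consolidation(
    {"p570-a": 0.40, "p570-b": 0.10, "p570-c": 0.15}
)
assert target == "p570-a"
assert sorted(evacuate) == ["p570-b", "p570-c"]
```

In practice you would also need to check memory fit and partition profiles before migrating, but the energy saving falls out of exactly this kind of check.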

Workload Partitions

Part of an upcoming formal announcement of AIX 6 will be a feature called Workload Partitions. Yes, you might say this is similar to Sun’s Solaris Containers function, but I wouldn’t!

Workload Partitions are a key part of a development effort to overhaul AIX over the next few years and move it much more into a federated, services-based model. Workload Partitions allow you to quickly and easily start multiple instances of workloads within a single copy of the AIX operating system. Each Workload Partition has memory and file-space separation from the others, and they share common AIX libraries.

One reason why AIX Workload Partitions will become increasingly important is to absorb the complexity of running applications on multi-threaded, multi-core processors. Along with Logical Partitioning and Micro-Partitioning, they will allow users and programmers to relatively seamlessly use the ever-increasing power and throughput of these systems without resorting to complex multi-threaded programming, queue and lock management, etc.

Application Mobility

Also part of the AIX 6 announcement will be Application Mobility. This provides mobility, between AIX instances on the same or different physical Power-based servers, for work running in Workload Partitions. In the short term this will be useful for operational availability and energy management. Longer term it will become a key feature of an automated, federated, workload-managed server infrastructure for AIX.

So, that’s a wrap for now. I had hoped to write more, more timely, and in more detail, but work and an imminent vacation meant I was left frantically typing this a couple of hours before leaving for the airport.

(1) A unique or strikingly different one of its kind

PAVE open beta

Here is a new twist to the “old” server consolidation story, literally.

We’ve opened up the beta program for the IBM System p Application Virtual Environment, or p AVE. What p AVE does is allow the consolidation of x86 Linux workloads on System p servers.

You can take advantage of Advanced POWER Virtualization to move your Linux binaries from older Intel servers to just one (or more) Power-based servers.

One of the things we want to get out of the beta program is some real-world performance feedback. System p AVE will allow most x86 Linux binaries to run unmodified on Power-based servers, which offers not just the execution speed and throughput many Linux apps will experience, but also power, cooling and space savings from consolidating x86 server footprints onto System p and switching the old servers off.

From the beta announce:

“Applications should run, without any change to the application and without having to predefine that application to the Linux on POWER operating system with p AVE installed. The system will “just know” the application is a Linux x86 binary at runtime and run it automatically in a p AVE environment. Behind the scenes, p AVE creates a virtual x86 environment and file structure, and executes x86 Linux applications by dynamically translating and mapping x86 instructions and system calls to a POWER Architecture™ processor-based system. It uses caching to optimize performance, so an application’s performance can actually increase the longer it runs.”
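The translate-and-cache behaviour described in that quote also explains why performance can improve the longer an application runs: hot blocks of x86 code are translated once and then reused. Here is a deliberately toy Python sketch of the idea; `TranslationCache` and the stand-in translator are illustrative names of mine, not the actual p AVE internals:

```python
class TranslationCache:
    """Toy dynamic-binary-translation cache: translate an x86 code
    block to native form once, then reuse the cached translation."""

    def __init__(self, translate):
        self._translate = translate  # the expensive x86 -> Power step
        self._cache = {}
        self.misses = 0

    def run(self, x86_block):
        native = self._cache.get(x86_block)
        if native is None:           # first execution: translate and cache
            self.misses += 1
            native = self._translate(x86_block)
            self._cache[x86_block] = native
        return native                # later executions reuse the cache

# Stand-in "translator" just tags the block; real translation emits
# Power instructions and remaps system calls.
cache = TranslationCache(lambda blk: f"power({blk})")
cache.run("mov eax, 1")
cache.run("mov eax, 1")  # second run hits the cache
assert cache.misses == 1
```

A loop that executes the same blocks millions of times pays the translation cost only on the first pass, which is why long-running applications amortise it almost to nothing.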

There is more detail in a recent IBM p AVE Redpaper.

For those that don’t or can’t take part in the beta, IBM intends to make this capability generally available in second half of 2007.

In search of partitioning

In his Enterprise Architecture: Virtualization and Management by Magazine blog post, James McGovern muses on mainframe virtualization leadership and if the likes of James Governor and the 451 Group will start blogging about it. He also wonders if “IBM mainframes would make a better participant in a grid architecture than Sun, Dell or HP?”

It’s not clear where the link is to management by magazine, but the blogosphere is certainly a funny old world. Partitioning and virtualization are taking off in a big way, and a few short clicks later this is all back in focus. Continue reading ‘In search of partitioning’

Linux on Power (LoP)

I’m very interested in what’s going on with Linux on Power; lots of reading, PowerPoint presentations, emails and IBM-only Notes databases later, I’ve discovered that the best place for information, code, etc. is indeed the developerWorks web site.

About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and member of the IBM Academy of Technology. I am a Fellow of the British Computer Society. I'm an information technology optimist.

I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
