Archive for the 'partitions' Category

IBM 3090 Training

Between 2001 and 2004, I had an office in the home of the mainframes, IBM Poughkeepsie, in Building 705. As a Brit’ it wasn’t my natural home, and since I wasn’t a developer or a designer, but a software architect focused on software and application architectures, it never quite felt like home.

IBM Library number ZZ25-6897.

One day, on my way to lunch at the in-house cafeteria, I walked by a room whose door was always closed. This time the door was open, and there was a buzz of people coming from it. A sign outside said “Library closing. Take anything you can use!”

I came away with some great books, a few of which I plan to scan, donating the output to either the Computer History Museum or the Internet Archive.

Among the more fun things I grabbed were a few IBM training laserdiscs. I had no idea what I’d do with them; I had never owned a laserdisc player. I just thought they’d look good sitting on my bookshelf, especially since they are the same physical size as vinyl albums.

Now, 16 years on, I’ve spent the last four years digitising my entire vinyl collection, in total some 2,700 albums. One of my main focus areas has been the music of jazz producer Creed Taylor, and one of the side effects is that I’ve created a new website, ctproduced.com. In record collecting circles I’m apparently a completionist: I try to buy everything.

And so it was that I started acquiring laserdiscs by Creed Taylor. It took a while, and I’m still missing Blues At Bradley’s by Charles Fambrough. While I’ve not got around to writing about them in any detail, you can find them at the bottom of the entry here.

What I had left were the IBM laserdiscs. On Monday I popped the first one in; it was for the IBM 3090 Processor Complex. It was a fascinating throwback for me. I’d worked with IBM Kingston on a number of firmware and software availability issues, both as a customer and later as an IBM Senior Software Engineer.

I hope you find the video fascinating. The IBM 3090 Processor was, to the best of my knowledge, the last of the real “mainframes”. Sure, we still have IBM processor architecture machines that are compatible with the 3090 and earlier architectures. However, the new systems, more powerful and more efficient, are typically single-frame systems. And while a parallel sysplex can support multiple mainframes, it doesn’t require them. Enjoy!

Physicalization at work – software pricing at bay

This is an unashamed take from an Arstechnica.com article, and I certainly can’t take credit for the term. I’m just back from a week of touring around Silicon Valley talking about our thinking for Dell 12G servers, to Dell customers and especially to those who take our products and integrate them into their own product offerings. It was a great learning experience, and if you took time to see me and the team, thank you!

One of the more interesting discussions both amongst the Dell team, and with the customers and integrators, was around the concept of physicalization. Instead of building bigger and faster servers, based around more and more cores and sockets, why not have a general purpose, low power, low complexity physical server that is boxed up, aggregated and multiplexed into a physicalization offering?

For example, as discussed in the arstechnica article, use a very simplified, Atom-based server and eliminate many of the older software and hardware additions that make motherboards more complex and more expensive to build; that, combined with the reduced power and heat, makes them even more reliable. Putting twelve or more in a single 2U server makes a lot of sense.

They also typically don’t need a lot of complex virtualization software to make full use of the servers. That might sound like heresy in these days when virtualization is assumed and is the major driver behind much of the marketing spend, and much of the technology spend.

So what’s driving this? Mostly, if you think about it, the complexity needed in the x86 marketplace these days, and also in the mainframe and Power/UNIX marketplaces, comes through complex software and systems management. That complexity is driven by two needs.

  1. Server utilization – in order to use the increasing processor power, sockets and cores, you need to virtualize the server and split it into consumable, useful chunks. This would normally require a complex discussion about multi-threaded programming and its complexity, but I’ll ignore that this time. Net net, there are very few workloads and applications that can use the effective capacity offered by current top-end Intel and AMD x86 processors.
  2. Software pricing – since the hardware vendors, including Dell, sell these larger virtualized servers as great business opportunities to simplify IT and server deployment by consolidating disparate, and often distributed, server workloads onto a single, larger, more manageable server, the software vendors want in on the act. Otherwise they lose out on revenue as the customer deploys fewer and fewer servers. One ploy to combat this is to charge by core or socket. Ultimately their software does little, and sometimes nothing, to exploit these features; they charge, well, because they can. In a virtualized server environment, the same is true. The software vendors don’t exploit the virtualization layer; heck, in some cases they are even reluctant to support their software running in this environment and require customers to recreate any problems in a non-virtualized environment before looking at them.

And so it is that physicalization is starting to become attractive. I’ve discussed both the software pricing and virtualization topics many times in the past. In fact, I’ve expressed my frustration that software pricing still seems to drive our industry and, more importantly, our customers to do things that they otherwise wouldn’t. Does your company make radical changes to your IT infrastructure just to get around uncompetitive and often restrictive software pricing practices? Is physicalization interesting or just another dead-end IT trend?

Now here’s an interesting uptime challenge

I was reading through some Cisco blogs to catch up on what’s going on in their world for a current project of mine, when I saw today’s blog entry, “Beat this uptime”, from Omar Sultan at Cisco.

They have some servers with five, seven and even nine years of uptime. Great, except the utilization is so low as to not warrant the electricity they’ve used. Rather than an uptime boast, these systems seem like a great opportunity for a green datacenter consolidation to save the electricity!

Hmm, I love the smell of virtualization in the morning. I emailed Tim Sipples; it will be interesting to see what the mainframe blog makes of this. I know that when I was the new technology architect for System z, they had some customers with uptime in the 3-4 year range, running millions of transactions per day at 85%+ utilization. I never checked for Power Systems, but I suspect there are many similar systems out there.

[Update:] Actually, if you post stats on your system uptime directly to the Cisco blog, you can win a fleece. I missed that when I first read it!

2008 IBM Power Systems Technical University featuring AIX and Linux

Yep, it’s a mouthful. I’ve just been booking some events and presentations for later in the year, and this one, which I had initially hoped to attend, clashes with another, so now I can’t.

However, in case the snappy new title passed you by, it is still the excellent IBM technical conference it was when it was called the IBM System p, AIX and Linux Technical University. It runs 4.5 days, from 8 – 12 September in Chicago, and offers an agenda that includes more than 150 knowledge-packed sessions and hands-on training delivered by top IBM developers and Power Systems experts.

Since the “IBM i” conference is running alongside, you can choose to attend sessions in either event. Sadly I couldn’t find a link for the conference abstracts, but there is more detail online here.

Power Systems and SOA Synergy

One of the things I pushed for when I first joined Power Systems (then System p) was for the IBM Redbooks to focus more on software stacks, and to relate how the Power Systems hardware can be exploited to deliver a more extensive, easier to use and more efficient stack than many scale-out solutions.

Scott Vetter, the ITSO Austin project lead, who I first worked with back in probably 1992 in Poughkeepsie, and the Austin-based ITSO team, including Monte Poppe from our System Test team, who has recently been focusing on SAP configurations, have just published a new IBM Redbook.

The Redbook, Power Systems and SOA Synergy, SG24-7607, is available free for download from the redbooks abstract page here.

The book was written by systems people, and will be useful to systems people. It contains a useful summary and overview of SOA applications, ESBs, WebSphere and so on, as well as some examples of how and what you can use Power Systems for, including things like WPARs in AIX.

Power VM configurability, Virtual Service Partitions and I/O virtualization

I must admit I’ve been a bit too preoccupied lately to post much in the way of meaningful content. For a frame of reference, I’m off looking at I/O virtualization, NIC, FBA and switch integration and optimization, as well as next-generation data center fabrics. It’s a fascinating area, ripe for some invention, and there are some great ideas out there. Hopefully more on this later.

I’ve also been looking at why we’d want to create a set of extensible interfaces that would allow virtual partitions to be used to extend the Power platform function, and I have to say, the more I think about this, the more interesting it gets. I’d be interested in your feedback on the idea of creating a set of published interfaces to PowerVM that would let you add function running in a logical partition, or use a virtual service partition to add or replace function that we provide. So, for example, maybe you want to add a monitoring or accounting agent where we do not provide source code. We’d document the interface and provide a standard calling mechanism, a shared memory interface and so on. Then you’d implement your function in an LPAR, probably using Linux on Power, or any other way you want.

Then, when an event in an OS, middleware or business application running in an LPAR under AIX, IBM i or Linux on Power generates a call to the OS, hypervisor or VIOS, instead of us providing the function, the hypervisor or VIOS would check to see whether a Virtual Service Partition had been registered for that function; if so, the call and event handling would be directed there instead of to the normal destination.
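To make that flow a little more concrete, here’s a toy sketch of the registration and dispatch idea, written in Python purely for readability. None of these names exist in PowerVM or VIOS today; the registry, register_vsp and dispatch are all hypothetical, and the real mechanism would of course be a documented firmware-level calling convention and shared memory interface rather than a dictionary lookup.

```python
# Toy model of the Virtual Service Partition (VSP) idea -- purely illustrative,
# not PowerVM code. All function and variable names here are hypothetical.

# Registry mapping a platform function (e.g. "accounting") to the LPAR id of
# the Virtual Service Partition registered to provide it.
vsp_registry = {}

def register_vsp(function_name, lpar_id):
    """A partition registers itself as the provider of a platform function."""
    vsp_registry[function_name] = lpar_id

def built_in_handler(function_name, event):
    """Stands in for the function the hypervisor/VIOS provides today."""
    return f"hypervisor/VIOS handled {function_name} event: {event}"

def dispatch(function_name, event):
    """What the hypervisor/VIOS would conceptually do for each call or event."""
    lpar_id = vsp_registry.get(function_name)
    if lpar_id is not None:
        # In the real design this would be the documented calling mechanism
        # and shared-memory hand-off to the registered partition.
        return f"{function_name} event {event} redirected to VSP in LPAR {lpar_id}"
    return built_in_handler(function_name, event)

# Example: an accounting agent implemented in a Linux on Power LPAR (id 7).
register_vsp("accounting", lpar_id=7)
print(dispatch("accounting", {"cpu_seconds": 4.2}))  # routed to the VSP
print(dispatch("monitoring", {"fan_rpm": 5400}))     # falls back to the built-in path
```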

In this way we could also provide a structured way to extend the platform where we would like to provide function, or customers have asked for it, but it hasn’t made our development list. Any comments? Good idea, bad idea, something else?

RedMonk IT Management PodCast #10 thoughts

I’ve been working on slides this afternoon for a couple of projects, and wondering why producing slides hasn’t really gotten any easier in the 20 years since Freelance under DOS. Why is it I’ve got a 22-inch flatscreen monitor as an extended desktop, and I’m using a trackpoint and mouse to move things around, waiting for Windows to move pixel by pixel…

Anyway, I clicked on the Libsyn link for the RedMonk IT Management Podcast #10 from back in April for some background noise. In the first 20 minutes or so, Cote and John get into some interesting discussion about Power Systems, especially in relation to some projects John’s working on. As they joke and laugh their way through an easy discussion, they get a bit confused about naming and training.

First, the servers are called IBM Power Systems, or Power. The servers span from blades to high-end, scalable monster servers. They all use the PowerPC architecture and instruction set, a RISC chip. Formerly there were two versions of the same servers, System p and System i.

Three operating systems run natively on Power Systems: AIX, IBM i (formerly i5/OS and OS/400) and Linux. You can run these concurrently in any combination using the native virtualization, PowerVM. Amongst the features of PowerVM is the ability to create logical partitions. These are a hardware implementation, a hardware-protected Type-1 hypervisor. So, it’s like VMware, but not at all. You can get more on this in this white paper. For a longer read, see the IBM Systems Software Information Center.

John then discussed the need for training and the complexity of setting up a Power System. Sure, if you want to run a highly flexible, dynamically configurable, highly virtualized server, then you need to do training. Look at the massive market for Microsoft Windows, VMware and Cisco Networking certifications. Is there any question that running complex systems would require similar skills and training?

Of course, John would say that though, as someone who makes a living doing training and consulting, and obviously has a great deal of experience monitoring and managing systems.

However, many of our customers don’t have such a need; they do trust the tools and will configure and run systems without 4-6 months of training. Our autonomic computing may not have achieved everything we envisaged, but it has made a significant difference. You can use the system configuration tool at order time, either alone or with your business partner or IBMer, do the definition for the system, and have it installed, provisioned and up and running within half a day.

When I first started in Power Systems, I hadn’t taken any classes and was not proficient in AIX or anything else Power related. I was still able to get a server up and running from scratch and get WebSphere running business applications, having read a couple of Redbooks. Monitoring and debugging would have taken more time, another book. Clearly becoming an expert always takes longer; see the Wikipedia definition of expert.

ps. John, if you drop out of the sky from 25k ft, it doesn’t matter if the flight was a mile or a thousand miles… you’ll hit the ground at the same speed 😉

pps. Cote, I assume your exciting editing session on episode 11 wasn’t so exciting…

ppps. 15 minutes on travel in Episode #11; time for a RedMonk Travel Podcast

Appliances, Stacks and software virtual machines

A couple of things from the “Monkmaster” this morning piqued my interest and deserved a post rather than a comment. First up was James’ post on “your Sons IBM”. James discusses a recent theme of his around stackless stacks, and simplicity. Next up came a tweet link on CohesiveFT and their elastic server on demand.

These are very timely. I’ve been working on an effort here in Power Systems for the past couple of months with my ATSM, Meghna Paruthi, on our appliance strategy. These are, as always with me, one layer lower than the stuff James blogs on; I deal with plumbing. It’s a theme and topic I’ll return to a few times in the coming weeks, as I’m just about to wrap up the effort. We are currently looking for some Independent Software Vendors (ISVs) who already package their offerings in VMware or Microsoft virtual appliance formats and either would like to do something similar for Power Systems, or alternatively have tried it and don’t think it would work for Power Systems.

Simple, easy-to-use software appliances which can be quickly and easily deployed into PowerVM logical partitions have a lot of promise. I’d like to see a marketplace of stackless, semi- or total black box systems that can be deployed easily and quickly into a partition and use existing capacity, or dynamic capacity upgrade on demand, to get the equivalent of cloud computing within a Power System. Given we can already run circa 200 logical partitions on a single machine, and are planning something in the region of 4x that for the p7-based servers with PowerVM, we need to do something about the infrastructure for creating, packaging, servicing, updating and managing them.

We’ve currently got six sorta-appliance projects in flight: one related to future datacenters, one with WebSphere XD, one with DB2, a couple around security, and some ideas on entry-level soft appliances.

So far, OVF wrappers around the Network Installation Manager, aka NIM, look like the way to go for AIX-based appliances, with similar processes for i5/OS and Linux on Power appliances. However, there are a number of related issues around packaging, licensing, and inter- and intra-appliance communication that I’m looking for some input on. So, if you are an ISV, or a startup, or even an independent contractor who is looking at how to package software for Power Systems, please feel free to post here, or email; I’d love to engage.
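To give a flavour of what I mean by an OVF wrapper, here’s a minimal sketch in Python that emits a bare-bones, OVF-style descriptor around a NIM-built image. The image name, appliance name and the tiny element subset shown are all made up for illustration; a real descriptor would also need disk, network and virtual hardware sections, plus whatever AIX and NIM metadata we eventually settle on.

```python
# Minimal sketch of an OVF-style descriptor wrapping a NIM-built AIX image.
# The file name, appliance id and the very small element subset here are
# hypothetical -- a real OVF package needs DiskSection, NetworkSection and
# VirtualHardwareSection entries as well.
import xml.etree.ElementTree as ET

OVF = "http://schemas.dmtf.org/ovf/envelope/1"
ET.register_namespace("ovf", OVF)

def q(name):
    """Return a namespace-qualified tag or attribute name."""
    return f"{{{OVF}}}{name}"

envelope = ET.Element(q("Envelope"))

# Reference the payload file -- here, a hypothetical NIM mksysb backup image.
refs = ET.SubElement(envelope, q("References"))
ET.SubElement(refs, q("File"), {q("id"): "aix-image", q("href"): "appliance.mksysb"})

# Describe the virtual system the image should be deployed as (a PowerVM LPAR).
vsys = ET.SubElement(envelope, q("VirtualSystem"), {q("id"): "aix61-appliance"})
info = ET.SubElement(vsys, q("Info"))
info.text = "AIX software appliance packaged from a NIM mksysb image"

ET.ElementTree(envelope).write("appliance.ovf", xml_declaration=True, encoding="utf-8")
print(open("appliance.ovf").read())
```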

On PowerVM, Lx86 and virtualization of Windows

Yesterday saw the announcement of a re-packaging, re-branding and new technology drive for POWER™ Virtualization, now PowerVM™. You can see the full announcement here. It is good to be back working on VM, sorta.

Over on virtualization.info, Alessandro Perilli says we are “missing the market in any case because its platform is unable to virtualize Windows operating systems”. I say not.

POWER isn’t Windows; it’s not x86 hardware. We scale much, much higher, perform much better and generally offer high availability features and function, as standard or as an add-on, way ahead of Windows. Running Windows on PowerVM and Power hardware would pick up some of the reliability features of the hardware transparently, and the workload consolidation potential would be very attractive. What it comes down to, though, is what it would take to virtualize Windows on PowerVM.

We could do it. We could add either hardware simulation or emulation, or more likely translation, that would allow the x86 architecture or Windows itself to be supported on PowerVM. There would be ongoing issues with the wide variety of hardware drivers and related problems, but let’s put those aside for now.

We could have gone down a route similar to the old Bristol Technologies WIND/U Win32 licensing and technology route, porting and running a subset of Win32, or even going via Mono or .NET. We might even call it PowerVM Wx86. But just reverse engineering Microsoft technology isn’t the right idea from either a technology or a business perspective.

So technically it could be done, one way or another. The real question, though, is the same as in the discussion about supporting Solaris on Power. Yes, it would be great to have the mother of all binary or source compatibility virtualization platforms. However, as always, the real issue is not whether it could be done, but how you would support your applications. After all, isn’t it about “applications, applications, applications”?

And there’s the rub. If you wanted to run middleware and x86 binary applications on POWER hardware, then you’d need support for the binaries. For middleware, most of the industry’s leading offerings are already available on at least one of AIX, i5/OS or Linux on Power; some are available on all three. What would software vendors prefer to do in this case? Would they be asked to support an existing binary stack on Windows on PowerVM, or would they prefer to just continue to support the native middleware stacks that benefit directly from the Power features?

Most would rather go with the native software and not incur the complexities and additional support costs of running in an emulated or simulated environment. The same is true of most customer applications, especially those for which the customer doesn’t have easy or ready access to the source code for Windows applications.

In the x86 market the same isn’t true; there’s less risk in supporting virtualization such as Xen or VMware.

The same concerns don’t apply to PowerVM Lx86 applications. First, because of the close affinity between Linux and Linux on POWER: Linux on Power distributions already exist, the source code is available, and most system calls are transparent and can be easily mapped onto Linux on POWER. Second, drivers, device support and so on are handled natively, within the POWER hardware, PowerVM, or the Linux operating system running in the PowerVM partition. Third, IBM has worked with SuSe and RedHat to make the x86 runtime libraries available on Linux on POWER. Finally, many middleware packages already run on Linux on POWER, or are available as open source and can be compiled to run on Linux on POWER.

All of which makes it a very different value proposition. Using NAS or SAN storage, it is perfectly possible to run the same binaries, currently or as needed, on x86 and PowerVM. The complications of doing the same for Windows, the software stack required, as well as the legal conditions for running Windows binaries, just don’t make it worth the effort.

Although not identical, many of the same issues arise running Solaris, either Solaris x86 or an OpenSolaris PowerPC port. So, that’s a wrap for now; there are still many interesting things going on here in Austin. I really will get back to the topic of Amazon, EC2 and cloud computing, memo to self.

Catching up on IBM Redbooks

Trying to find a reference book on AIX 6, I looked at the latest list of Redbooks for Power Systems; these are the ones listed in the RSS feed since the start of October 2007.

Continue reading ‘Catching up on IBM Redbooks’

Last week’s announcement recap, Power6 Blades and AIX

Thanks to the folks over at the “Power Architecture zone editors’ notebook” blog, here is their summary of last week’s announcements.

Get yours today: Listen UNIX users — the newly available IBM BladeCenter JS22 with Power6 is what you’ve been waiting for. Couple the JS22’s Power6 processor technology with the built-in Advanced Power Virtualization and you’ve got a lot of Power concentrated in a compact container (which can also save you on space and energy costs). It comes loaded with two 4GHz dual-core processors, an optional hard drive, and an Ethernet controller; it supports as much as 32GB of memory; the first shipments are configured for the BladeCenter H and BladeCenter HT chassis. And its virtualization features make it really special (see following entry for more on this).

And what’s a new blade without a complementary OS: Targeted for Friday, November 9, 2007, the release of AIX 6 from the beta bin should provide users improved security management and virtualization features that take advantage of a hypervisor included in the Power6 processor so you can get 100 percent application up time. The Workload Partition Manager should let sysadmins create multiple partitions (each with customized memory settings for users and application workloads) and the Live Application Mobility feature can shift applications from one server to another on the fly (and they keep running while migrating). Then there’s the Security Expert which lets users control more than 300 security settings (role-based access to applications, user-based authentication, etc.). These OS utilities should work well with the Power6 Live Partition Mobility hypervisor which can move an entire OS (AIX, RHEL, and SLES) and its workloads from one server to another while they are running. (In fact, you can preview AIX 6 here if you can’t wait until Friday.)

More mobility, this time SAP

After my post back in August about the Partition Mobility video posted on YouTube, I got a few emails about the steps it actually took, what it looked like etc.

Walter Orb at the IBM SAP International Competence Center, along with Mattias Koechel, has produced an excellent, instructional and illustrative example of Power Live Partition Mobility. Unfortunately, it hasn’t been published publicly, because it isn’t a straight video but was created with a tool called Camtasia, and it is not a polished marketing piece. Neither was the YouTube video, but this one’s more educational.

It made some of the steps clearer for me, and it also shows that you can use Power6 Live Partition Mobility with SAP, rather than the Oracle workload shown in the earlier video. If you are interested in seeing it, including the relevant HMC screens, performance monitor and so on, comment here or email me and I’ll get you a copy for Windows.

[Update] I just got to read the press release accompanying this week’s announcements; it includes a great customer reference for Live Partition Mobility and SAP. The quote is from Informations-Technologie Austria (iT-Austria), the leading Austrian provider of IT services for the financial sector, and can be found here in the full press release.

[Update: 10/29/08] Almost a year later and I still get requests for this video. I’m delighted to say it’s online now and can be found on this ibm.com web page and is now available in Flash and Windows Media versions.

A trapped animal is always dangerous

I initially wrote a version of the following as a comment on John Meyers’ blog entry over on sun.com. Somewhere between starting the comment on his blog and finishing it, comments were closed and it didn’t get accepted.

A number of people over the past few weeks have been egging me on to respond to John’s blog entries comparing SUN and POWER offerings. It’s great being an evangelist, the ultimate believer in a product, technology, cleaning equipment or life-saving gadget: you can’t fail, the world is your oyster, your vision is world domination and your business allows you to do it; better still, they encourage you. I’m certainly not going to do a line-by-line analysis and deconstruction of his writings; it’s just unproductive. He has an opinion, and he is entitled to it.

Over the next few weeks though I will post some thoughts on the general assertions. Here though, is the response I originally wrote to this blog post.

John, it’s been fun reading your POWER and virtualization analysis; you are obviously passionate about your position and the technology at SUN. SUN have clearly done a good job of filling some gaps in their product portfolio over the past few years, some in response to competitive pressure from IBM and others, and some as industry leadership.

There is no doubt that SUN have done some things that have meant IBM has had to respond. However, what you seem to have glossed over, in a direct comparison with POWER Systems rather than IBM virtualization in general, is the real need for some of the features, and their real usability, rather than just the technical implementation. Hey, but that’s life through “rose-tinted spectacles”. Oh yes, this isn’t “hubris”: I didn’t create logical partitioning, but I did contribute to it, as well as to a number of other important virtualization technologies.

Matais, I assume you mean CTSS, which was developed at MIT to run on an IBM 709 computer between 1959 and 1961.

One of the programmers on that project, Bob Creasy, went on to become the project lead for CP-40, the first ever IBM virtualization implementation. CTSS was really more of a time-sharing system than “virtualization.”

Gene Amdahl, then Chief Architect for the S/360 product line at IBM, visited MIT a number of times and had meetings with the Professor and the CTSS team with a view to making enhancements to the hardware architecture. It is reported that they didn’t see eye-to-eye over a number of things.

There is a written history and more of this than you’d ever want to know at: http://www.princeton.edu/~melinda/25paper.pdf

The concept of “domains” and logical partitions isn’t covered in the above. It would not be correct, though, to state that Amdahl created LPARs. He actually led a company that created a firmware/hardware implementation of multiple domains. IBM’s implementation of logical partitions differed significantly, although it used a similar basic premise. Further discussion, with revisions, corrections and updates, probably belongs elsewhere, where it can be maintained, not in a reply to a blog post where it cannot.

There are a number of companion documents that show the role of other important users and customers who helped IBM improve its virtualization offerings.

Regards.

For the record, I also wrote comments on the Solaris/Linux/AIX conspiracy theory here.

AIX/6 and Power 6 Enhancements — Tools for the task

I’ve been catching up on some back issues of IBM Open Systems magazine. In the latest issue, August ’07, Ken Milberg provides a useful overview of the new AIX Workload Partitions and comes to the same conclusion I did: “I see the WPAR as a real complement to LPARs, not a replacement”.

Over on Julien Gabel’s “blog’o thnet”, he does a liberal comparison of the new AIX and Power6 features with some existing and many upcoming features promised for Solaris. It’s an interesting comparison insomuch as there has been much discussion over the similarity between AIX Workload Partitions and Solaris Containers. One of the reasons we introduced containers now was the linkage with, and exploitation of, Live Application Mobility. Julien draws a distinction, describing Solaris Containers as a utilization feature rather than a virtualization feature.

For my part I don’t see the difference, and the more you think about this, the more obvious it becomes. Continue reading ‘AIX/6 and Power 6 Enhancements — Tools for the task’

AIX, VIOS and HMC Facts & Features

There is an excellent document listing and comparing all the functions, features, etc. of AIX, VIOS and HMC. It’s in the techdocs library and was written and updated by Ravi Singh. You can get it here.

I’m about a month late with this; thanks to a post by Richard Brader, another Brit’ in Power-land, who updates the “IBM System p Expert Corner – for Business Partners” wiki.


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and a member of the IBM Academy of Technology. I am a Fellow of the British Computer Society (bcs.org). I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
