Archive for the 'partitions' Category

Physicalization at work – software pricing at bay

This is an unashamed take from an Arstechnica.com article, and I certainly can’t take credit for the term. I’m just back from a week of touring around Silicon Valley talking about our thinking for Dell 12G servers with Dell customers, and especially with those who take our products and integrate them into their own product offerings. It was a great learning experience, and if you took time to see me and the team, thank you!

One of the more interesting discussions both amongst the Dell team, and with the customers and integrators, was around the concept of physicalization. Instead of building bigger and faster servers, based around more and more cores and sockets, why not have a general purpose, low power, low complexity physical server that is boxed up, aggregated and multiplexed into a physicalization offering?

For example, as discussed in the Arstechnica article, a much simplified, Atom-based server eliminates many of the older software and hardware additions that make motherboards more complex and more expensive to build, which, combined with the reduced power and heat, also makes them more reliable. Putting twelve or more in a single 2U chassis makes a lot of sense.

They also typically don’t need a lot of complex virtualization software to make full use of the servers. That might sound like heresy in these days when virtualization is assumed, and is the major driver behind much of the marketing and technology spend.

So what’s driving this? Mostly, if you think about it, the complexity in the x86 marketplace these days, and in the mainframe and Power/UNIX marketplaces too, comes from complex software and systems management. That complexity is driven by two needs.

  1. Server utilization – in order to exploit the increasing processor power, sockets and cores, you need to virtualize the server and split it into consumable, useful chunks. That would normally require a complex discussion about multi-threaded programming and complexity, but I’ll ignore that this time. Net net, there are very few workloads and applications that can use the effective capacity offered by current top-end Intel and AMD x86 processors.
  2. Software Pricing – Since the hardware vendors, including Dell, sell these larger virtualized servers as a great opportunity to simplify IT and server deployment by consolidating disparate, and often distributed, server workloads onto a single, larger, more manageable server, the software vendors want in on the act. Otherwise they lose out on revenue as the customer deploys fewer and fewer servers. One ploy to combat this is to charge by core or socket. Yet their software often does little, and sometimes nothing, to exploit those cores and sockets; they just charge, well, because they can. In a virtualized server environment the same is true. The software vendors don’t exploit the virtualization layer; heck, in some cases they are even reluctant to support their software running in this environment, and require customers to recreate any problems in a non-virtualized environment before looking at them. (A toy example of the licensing arithmetic follows below.)
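To make the core-counting complaint concrete, here is a small Python sketch. Every number in it is invented purely for illustration; it is not any vendor’s real price list or license terms.

    # Toy illustration of why per-core pricing makes physicalization attractive.
    # All numbers are invented for illustration only, not a real price list.

    PRICE_PER_CORE = 2000           # hypothetical per-core license fee
    CORES_NEEDED_BY_WORKLOAD = 2    # the application itself only needs two cores

    # Case 1: the workload runs in a small partition of a consolidated server,
    # but the vendor charges for every physical core in the box.
    CORES_IN_CONSOLIDATED_SERVER = 32
    consolidated_cost = CORES_IN_CONSOLIDATED_SERVER * PRICE_PER_CORE

    # Case 2: the same workload runs on its own small, Atom-class server and
    # only the cores actually present are licensed.
    physicalized_cost = CORES_NEEDED_BY_WORKLOAD * PRICE_PER_CORE

    print(f"Consolidated, licensed per physical core: ${consolidated_cost:,}")
    print(f"Physicalized, two-core server:            ${physicalized_cost:,}")

The point isn’t the exact figures, it’s the shape of the incentive: when the license meter counts cores the software never exploits, the cheapest move is to run it on the smallest box you can.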

And so it is that physicalization is starting to become attractive. I’ve discussed both the software pricing and virtualization topics many times in the past. In fact, I’ve expressed my frustration that software pricing still seems to drive our industry and, more importantly, our customers to do things that they otherwise wouldn’t. Does your company make radical changes to your IT infrastructure just to get around uncompetitive and often restrictive software pricing practices? Is physicalization interesting or just another dead-end IT trend?

Now here’s an interesting uptime challenge

I was reading through some Cisco blogs to catch up on what’s going on in their world for a current project of mine, when I saw today’s blog entry called “Beat this uptime” from Omar Sultan at Cisco.

They have some servers with five, seven and even nine years of uptime. Great, except the utilization is so low that it doesn’t warrant the electricity they’ve used. Rather than an uptime boast, these systems look like a great opportunity for a green datacenter consolidation that would save the electricity!

Hmm, I love the smell of virtualization in the morning. I emailed Tim Sipples; it will be interesting to see what the mainframe blog makes of this. I know that when I was the new technology architect for System z, there were customers with uptime in the 3-4 year range, running millions of transactions per day at 85%+ utilization. I never checked for Power Systems, but I suspect there are many similar examples out there.

[Update:] Actually if you post with stats on your system uptime directly to the Cisco blog, you can win a fleece. I missed that when I first read it!

2008 IBM Power Systems Technical University featuring AIX and Linux

Yep, it’s a mouthful. I’ve just been booking some events and presentations for later in the year, and this one, which I had initially hoped to attend, clashes with another, so now I can’t.

However, in case the snappy new title passed you by, it is still the excellent IBM technical conference it used to be when it was the IBM System p, AIX and Linux Technical University. It runs 4.5 days, from 8 – 12 September in Chicago, and offers an agenda that includes more than 150 knowledge-packed sessions and hands-on training delivered by top IBM developers and Power Systems experts.

Since the “IBM i” conference is running alongside, you can choose to attend sessions in either event. Sadly I couldn’t find a link for the conference abstracts, but there is more detail online here.

Power Systems and SOA Synergy

One of the things I pushed for when I first joined Power Systems (then System p) was for the IBM Redbooks to focus more on software stacks, and to show how the Power Systems hardware can be exploited to deliver a more extensive, easier to use, and more efficient stack than many scale-out solutions.

Scott Vetter, ITSO Austin project lead, whom I first worked with back in probably 1992 in Poughkeepsie, and the Austin-based ITSO team, including Monte Poppe from our System Test team, who has recently been focusing on SAP configurations, have just published a new IBM Redbook.

The Redbook, Power Systems and SOA Synergy, SG24-7607, is available free for download from the redbooks abstract page here.

The book was written by systems people, and will be useful to systems people. It contains a useful summary and overview of SOA applications, ESBs, WebSphere and the like, as well as some examples of how and what you can use Power Systems for, including things like WPARs in AIX.

PowerVM configurability, Virtual Service Partitions and I/O virtualization

I must admit I’ve been a bit too pre-occupied lately to post much in the way of meaningful content. For a frame of reference, I’m off looking at I/O virtualization, NIC, HBA and switch integration and optimization, as well as next generation data center fabrics. It’s a fascinating area, ripe for some invention, and there are some great ideas out there. Hopefully more on this later.

I’ve also been looking at why we’d want to create a set of extensible interfaces that would allow virtual partitions to be used to extend the Power platform function, and I have to say, the more I think about this the more interesting it gets. I’d be interested in your feedback on the idea of a set of published interfaces to PowerVM that would allow you to add function running in a logical partition, or use a virtual service partition to add to or replace function that we provide. So, for example, maybe you want to add a monitoring or accounting agent where we do not provide source code. We’d document the interface, and provide a standard calling mechanism, a shared memory interface and so on. Then you’d implement your function in an LPAR, probably using Linux on Power, or any other way you want.

Then, when an event in an OS, middleware or business application running in an LPAR under AIX, IBM i or Linux on Power generates a call to the OS, hypervisor or VIOS, instead of us providing the function, the hypervisor or VIOS would check to see whether a Virtual Service Partition had been registered for that function. If so, the call and event handling would be directed there instead of to the normal destination.
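To make the dispatch idea easier to picture, here is a minimal sketch in Python of how such a registry might behave. All of the names in it (HypervisorDispatcher, register_service_partition, the accounting event) are hypothetical; no such published PowerVM interface exists today, which is exactly the point of the proposal.

    # Hypothetical model of the Virtual Service Partition dispatch idea.
    # None of these names exist in PowerVM; this only illustrates the flow
    # described above: register a handler LPAR for a function, then route
    # calls there instead of to the default implementation.

    class HypervisorDispatcher:
        def __init__(self):
            # function name -> LPAR id of the registered Virtual Service Partition
            self._service_partitions = {}

        def register_service_partition(self, function_name, lpar_id):
            """A Virtual Service Partition claims responsibility for a function."""
            self._service_partitions[function_name] = lpar_id

        def handle_call(self, function_name, payload):
            """Route a call from a client LPAR to the registered VSP, if any."""
            if function_name in self._service_partitions:
                lpar_id = self._service_partitions[function_name]
                # In a real system this would cross a documented shared-memory
                # or hypervisor-call boundary; here it is just a stand-in.
                return f"LPAR {lpar_id} handled {function_name}({payload})"
            return f"hypervisor default handled {function_name}({payload})"

    # Usage: a Linux on Power LPAR (id 7) registers an accounting agent, so
    # accounting events go to it while everything else takes the normal path.
    dispatcher = HypervisorDispatcher()
    dispatcher.register_service_partition("accounting_event", lpar_id=7)
    print(dispatcher.handle_call("accounting_event", {"cpu_seconds": 12}))
    print(dispatcher.handle_call("unhandled_event", {}))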

In this way we could also provide a structured way to extend the platform where we would like to provide function, or customers have asked for it, but it hasn’t made our development list. Any comments? Good idea, bad idea, something else?

RedMonk IT Management PodCast #10 thoughts

I’ve been working on slides this afternoon for a couple of projects, and wondering why producing slides hasn’t really gotten any easier in the 20 years since Freelance under DOS. Why is it that I’ve got a 22-inch flatscreen monitor as an extended desktop, and I’m still using a trackpoint and mouse to move things around, waiting for Windows to move things pixel by pixel…

Anyway, I clicked on the LIBSyn link for the RedMonk IT Management Podcast #10 from back in April for some background noise. In the first 20 minutes or so, Cote and John get into some interesting discussion about Power Systems, especially in relation to some projects John’s working on. As they joke and laugh their way through an easy discussion, they get a bit confused about naming and training.

First, the servers are called IBM Power Systems, or Power. The servers span from blades to high-end, scalable monster servers. They all use RISC chips based on the PowerPC architecture and instruction set. Formerly there had been two versions of the same servers, System p and System i.

Three operating systems can run natively on Power Systems: AIX, IBM i (formerly i5/OS and OS/400) and Linux. You can run these concurrently in any combination using the native virtualization, PowerVM. Amongst the features of PowerVM is the ability to create logical partitions; these are implemented and protected in hardware by a Type-1 hypervisor. So, it’s like VMware, but not at all. You can get more on this in this white paper. For a longer read, see the IBM Systems Software Information Center.

John then discussed the need for training and the complexity of setting up a Power System. Sure, if you want to run a highly flexible, dynamically configurable, highly virtualized server, then you need to do training. Look at the massive market for Microsoft Windows, VMware and Cisco Networking certifications. Is there any question that running complex systems would require similar skills and training?

Of course, John would say that though, as someone who makes a living doing training and consulting, and obviously has a great deal of experience monitoring and managing systems.

However, many of our customers don’t have such a need; they trust the tools and will configure and run systems without 4-6 months of training. Our autonomic computing may not have achieved everything we envisaged, but it has made a significant difference. You can use the System Config tool at order time, either alone or with your business partner or IBMer, do the definition for the system, and have it installed, provisioned and up and running within half a day.

When I first started in Power Systems, I didn’t take any classes and was not proficient in AIX or anything else Power related. I was still able to get a server up and running from scratch and get WebSphere running business applications, having read a couple of Redbooks. Monitoring and debugging would have taken more time, and another book. Clearly becoming an expert always takes longer; see the Wikipedia definition of expert.

ps. John, if you drop out of the sky from 25k ft, it doesn’t matter if the flight was a mile or a thousand miles… you’ll hit the ground at the same speed ;-)

pps. Cote, I assume your exciting editing session on episode 11 wasn’t so exciting…

ppps. 15 minutes on travel in Episode #11; time for a RedMonk Travel Podcast?

Appliances, Stacks and software virtual machines

A couple of things from the “Monkmaster” this morning piqued my interest and deserved a post rather than a comment. First up was James’s post on “your Sons IBM“. James discusses a recent theme of his around stackless stacks and simplicity. Next up came a tweet link on cohesiveFT and their elastic server on demand.

These are very timely. I’ve been working on an effort here in Power Systems for the past couple of months with my ATSM, Meghna Paruthi, on our appliance strategy. These are, as always with me, one layer lower than the stuff James blogs about; I deal with plumbing. It’s a theme and topic I’ll return to a few times in the coming weeks as I’m just about to wrap up the effort. We are currently looking for some Independent Software Vendors (ISVs) who already package their offerings in VMware or Microsoft virtual appliance formats and who either would like to do something similar for Power Systems, or alternatively have tried it and don’t think it would work for Power Systems.

Simple, easy to use software appliances which can be quickly and easily deployed into PowerVM logical partitions have a lot of promise. I’d like to have a marketplace of stackless, semi- or total black-box systems that can be deployed easily and quickly into a partition and use existing capacity, or dynamic capacity upgrade on demand, to get the equivalent of cloud computing within a Power System. Given we can already run circa 200 logical partitions on a single machine, and are planning something in the region of 4x that for the p7-based servers with PowerVM, we need to do something about the infrastructure for creating, packaging, servicing, updating and managing them.

We’ve currently got six sorta-appliance projects in flight: one related to future datacenters, one with WebSphere XD, one with DB2, a couple around security, and some ideas on entry-level soft appliances.

So far, OVF wrappers around the Network Installation Manager (aka NIM) look like the way to go for AIX-based appliances, with similar processes for i5/OS and Linux on Power appliances. However, there are a number of related issues around packaging, licensing, and inter- and intra-appliance communication that I’m looking for some input on. So, if you are an ISV, a startup, or even an independent contractor who is looking at how to package software for Power Systems, please feel free to post here or email; I’d love to engage.
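As a rough illustration of what such a wrapper could look like, here is a short Python sketch that writes a stripped-down OVF-style descriptor pointing at a NIM mksysb image. The appliance name, the image file name and the descriptor layout are assumptions for the sake of the example; it is not a complete, schema-valid OVF file, and certainly not a published Power Systems packaging format.

    # Sketch only: generate a minimal OVF-style descriptor for a hypothetical
    # AIX appliance image produced by NIM (a mksysb backup). File names and
    # sizes are made up for illustration.

    OVF_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
    <Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
              xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
      <References>
        <!-- The appliance payload: a hypothetical NIM mksysb image -->
        <File ovf:id="disk1" ovf:href="{image_file}" ovf:size="{image_size}"/>
      </References>
      <VirtualSystem ovf:id="{appliance_name}">
        <Info>{description}</Info>
      </VirtualSystem>
    </Envelope>
    """

    def write_ovf_descriptor(appliance_name, image_file, image_size, description, out_path):
        """Render the template and write the .ovf descriptor next to the image."""
        descriptor = OVF_TEMPLATE.format(
            appliance_name=appliance_name,
            image_file=image_file,
            image_size=image_size,
            description=description,
        )
        with open(out_path, "w", encoding="utf-8") as handle:
            handle.write(descriptor)

    # Example: wrap a hypothetical WebSphere-on-AIX image.
    write_ovf_descriptor(
        appliance_name="aix-websphere-appliance",   # made-up name
        image_file="websphere_appliance.mksysb",    # made-up NIM image
        image_size=4 * 1024**3,                     # 4 GB, illustrative
        description="Example AIX software appliance wrapped in OVF",
        out_path="websphere_appliance.ovf",
    )

The interesting part isn’t the XML itself, it’s the questions the wrapper surfaces: what metadata the descriptor has to carry for licensing, and how two appliances deployed into separate partitions would describe their communication with each other.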


About & Contact

I'm Mark Cathcart, Senior Distinguished Engineer in Dell's Software Group. I was formerly Director of Systems Engineering in the Enterprise Solutions Group at Dell, and an IBM Distinguished Engineer and member of the IBM Academy of Technology. I'm an information technology optimist.
