Archive for the 'aix' Category

Hot News: Paint dries

I’m guessing I’m not so different from most people: the first time someone explains Groundhog Day, you laugh, but don’t believe what you are seeing. It’s kinda “nah, you’re kidding, right?” but some take it seriously.

The same goes for the pronouncements IBM makes regularly about server migrations to the Power Systems platform and mainframes: you take a step back and say, seriously, you’re kidding, you’re taking this seriously?

And that was my reaction when I saw this week’s piece from Timothy Prickett Morgan at The Register, aka Vulture Central, under the tagline “IBM gloats over HP, Oracle takeouts”. Really, seriously, you’re kidding, right? Prickett Morgan covers IBM’s most recent claims that they migrated “286 customers, 182 were using Oracle (formerly Sun Microsystems) servers and 95 were using machines from Hewlett-Packard” from Unix to IBM’s AIX.

What surprises me is not that IBM made the claims (hey, paint dries), but that Prickett Morgan felt it worth writing up (this is The Register, tagline “Biting the hand that feeds IT”). Really, seriously?

AIX and Power Systems are great; it’s just not newsworthy at those minuscule rates compared to the inexorable rise of the x86 architecture in both private and cloud data centers. It really won’t be long before IBM can no longer afford to design and manufacture those systems. And there’s the clue to the migrations.

You stick your neck out and go with Sun, now Oracle, or HP Unix systems; it’s a battle, but you either genuinely believe you were right, or you were just hoodwinked or cajoled into doing it for one reason or another. So, now they are both in terminal decline, what’s a data center manager to do? Yep, the easiest thing is to claim you were right with the platform, and by doing so were part of a movement that forced IBM to lower its prices, and now the right thing to do is migrate to IBM as they have the best Unix solution. Phew, that’s alright, no one noticed and everyone goes on collecting their paychecks.

Prickett Morgan ends by wondering “why Oracle, HP, and Fujitsu don’t hit back every time IBM opens its mouth with takeout figures of their own to show they are getting traction against Big Blue with their iron.” Because, frankly, no one cares except IBM. Everyone else is too busy building resilient, innovative, and cost-effective solutions based on x86 Linux, either in their own data center or in the “cloud”.

Unix migrations and game changers

More product talk: much closer to home for me are this week’s new Dell PowerEdge servers, including the PowerEdge R910, which was specifically designed and configured for a market segment I know well, RISC server migration.

It’s well worth taking a look at this YouTube video from the R910 hardware design team; people just don’t realise how much clever design goes into the Dell PowerEdge servers. I think this, better than anything else I’ve seen, embodies the difference between people’s perception of what Dell server engineering does and what we actually do. I can honestly say that, even going back to my IBM mainframe days, I’ve never seen a better designed, more easily accessible, configurable and thought-out server.

In terms of configuration, the R910 is specifically aimed at those who are rethinking proprietary UNIX deployments on either Sun SPARC or POWER AIX. Based on industry standards and the x86 architecture, the R910 is an ideal platform for RISC/UNIX migrations, large database deployments and server virtualization implementations. It is a 4U rack server with four sockets for high-performance Intel Nehalem-EX processors, up to 64 DIMM slots for memory, redundant power supplies and a failsafe embedded hypervisor, resulting in the performance, reliability and memory scalability needed to run mission-critical applications. It also includes an option for 10Gb Ethernet right on the motherboard.

There are three other new servers this week, including the M910 blade server, the R810 virtualization and consolidation server, and the R815 for virtualization, high-performance computing, email messaging, and database workloads.

The PowerEdge R815 deserves its own “shout-out”. It comes with the same level of detail in hardware design as the R910, but is powered by the brand new 12-core AMD Opteron 6100 processors, with up to 32 DIMMs and up to 48 processor cores in a four-socket, 2U server. As my friend and former IBM colleague Nigel Dessau, now Chief Marketing Officer at AMD, put it, the new AMD processors are “game changers”.

All this week’s new servers include the iDRAC embedded management that our team works on, as well as the Dell Lifecycle Controller. Lifecycle Controller provides IT administrators with a single console view of their entire IT infrastructure while performing a complete set of provisioning functions, including system deployment, updates, hardware configuration and diagnostics.

For customers who are interested in migrating from proprietary UNIX environments, we are also now offering a set of migration services to an open server platform and an open OS.

IBM update on Power 7

For those interested, IBM has apparently revealed some details of the upcoming Power 7 processors. Analyst Gordon Haff has written two blog entries on aspects of the disclosure meeting: the first on size, capacity and performance, and the second on design, threading, cache etc. Nice to see Gordon picked up on x86 Transitive; no word on any new developments though.

I suspect that, given the state of the industry now, the Power Systems folks are feeling pretty pleased with the decisions we made on the threading design and processor threading requirements over two years ago; no point in chasing rocks if you have virtualization. Best not rest on your laurels though, guys. You’ve got some really significant software pricing issues to deal with, and it will be interesting to see if you took my advice on an intentional architecture for Power server platform management.

In an interesting, karmic sort of way, I’m doing an “Avoiding Accidental Architecture” pitch here at Dell this afternoon; I’ll be using the current Power 6 state of affairs as a good, or rather bad, example. Thanks as always to Tom Maguire of EMC, and Grady Booch at IBM, for the architecture meme.

IBM Big Box quandary

In another follow-up from EMC World, the last session I went to was “EMC System z, z/OS, z/Linux and z/VM”. I thought it might be useful to hear what people were doing in the mainframe space, although it’s largely unrelated to my current job. It was almost 10 years to the day since I was at IBM writing the z/Linux strategy, hearing about early successes etc., and strangely, current EMC CTO Jeff Nick and I were engaged in vigorous debate about implementation details of z/Linux the night before we went and told SAP about IBM’s plans.

The EMC World session demonstrated that, as much as things change, they stay the same. It also reminded me how borked the IT industry is, in that we mostly force customers to choose by pricing rather than function. 10-12 years ago, z/Linux on the mainframe was all about giving customers new function, a new way to exploit the technology that they’d already invested in. It was of course also there to further establish the mainframe’s role as a server consolidation platform through virtualization and high levels of utilization.(1)

What I heard were two conflicting and confusing stories; at least, they should be for IBM. The first was a customer who was moving all his Oracle workloads from a large IBM Power Systems server to z/Linux on the mainframe. Why? Because the licensing on the IBM Power server was too expensive. Using z/Linux and the Integrated Facility for Linux (IFL) allows organizations to do a cost-avoidance exercise: processor capacity on the IFL doesn’t count towards the total installed general processor capacity, and hence doesn’t bump up the overall software licensing costs for all the other users. It’s a complex discussion and that wasn’t the purpose of this post, so I’ll leave it at that.
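Well, almost. Here’s a crude sketch of the arithmetic with entirely hypothetical capacity units and prices (real mainframe pricing is MSU-based, tiered and per-product, so far more involved), just to show the shape of the cost avoidance:

```python
# Crude illustration of the IFL cost-avoidance arithmetic.
# All capacities and prices are hypothetical, purely to show the shape
# of the calculation; real mainframe software pricing is far messier.

GENERAL_CP_CAPACITY = 1000       # installed general-purpose capacity units
PRICE_PER_CAPACITY_UNIT = 500    # $ per unit, paid by *every* licensed product

def software_bill(general_capacity_units):
    """Software licensed against total installed general processor capacity."""
    return general_capacity_units * PRICE_PER_CAPACITY_UNIT

# Option A: grow the general processor pool by 200 units to host the new work.
bill_general = software_bill(GENERAL_CP_CAPACITY + 200)

# Option B: add the same 200 units as IFL capacity running z/Linux.
# IFL capacity is invisible to general-processor-based pricing, so the
# existing software bill is unchanged.
bill_ifl = software_bill(GENERAL_CP_CAPACITY)

print(f"Bill after general CP upgrade: ${bill_general:,}")
print(f"Bill after IFL upgrade:        ${bill_ifl:,}")
print(f"Avoided:                       ${bill_general - bill_ifl:,}")
```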

This might be considered a win for IBM, but actually it was a loss; it’s also a loss for the customer. IBM lost because the processing was being moved from its growth platform, IBM Power Systems, to the legacy System z. It’s good for z since it consolidates its hold in that organization, or probably does. Once the customer has done the migration and conversion, it will be interesting to see how they feel the performance compares. IBM often refers to the IFL and its close relatives, the zIIP and zAAP, as specialty engines, giving the impression that they perform faster than the normal System z processors. That’s largely an urban myth though; these “specialty” engines really only deliver the same performance, they are just measured, monitored and priced differently.

The customer lost because they’ve spent time and effort to move from one architecture to another, really only to avoid software and server pricing issues. While the System z folks will argue the benefits of their platform, and I’m not about to “dis” them, the IBM Power server can deliver a good enough implementation as to make the difference largely irrelevant.

The second conflicting story I heard was from EMC themselves. The second main topic of the session was a discussion about moving some of the EMC Symmetrix products off the mainframe, as customers have reported that they use too much mainframe capacity to run. The guys from EMC were thinking of moving the function of the products onto commodity x86 processors and then linking those via high-speed networking into the mainframe. This would move the function out of band and save mainframe processor cycles, which in turn would avoid an upgrade, which in turn would avoid bumping up the software costs for all users.

I was surprised how quickly I interjected and started talking about WLM SRM Enclaves and moving the EMC apps to run on z/Linux etc. This surely makes much more sense.

I was left, though, with a definite impression that there are still hard times ahead for IBM in large non-x86 virtualized servers. Not that they are not great pieces of engineering; they are. But getting to grips with software pricing once and for all should really be their prime focus, not a secondary or tertiary one. We were working towards pay-per-use once before; time to revisit, methinks.

(1) Spot the irony of this statement given the preceding “Nano, Nano” post!

Virtualization, The Recession and The Mainframe

Robin Bloor has posted an interesting entry on his “Have mac will blog” blog on the above subject. He got a few small things wrong; well, mostly he got all the facts wrong, but right from a populist, historical-rewrite perspective. Of course I posted a comment, but as always made a few typos that I now cannot correct, so here is the corrected version (feel free to delete the original comments Robin… or just make fun of me for the mistakes, but know I was typing outdoors at the South Austin Trailer Park and Eatery, with open-toe sandals on, and it’s cold tonight in Austin; geeky I know!)

What do they say about a person who is always looking back to their successes? Well, in my case, it’s only because I can’t post on my future successes; they are considered too confidential for me to even leave slides with customers when I visit…

VM revisited, enjoy:

 

Mark Cathcart (Should have) said,
on October 23rd, 2008 at 8:16 pm

Actually Robin, while it’s true that the S/360 operating systems were written in assembler, as was much of the 370 operating systems, PL/S was already in use for some of the large and complex components.

It is also widely known that virtualization, as you know it on the mainframe today, was first introduced on the S/360 Model 67. This was a “bastard child” of the S/360 processors that had virtual memory extensions. At that point, the precursor to VM/370 used on the S/360-67 was CP-67.

I think you’ll also find that IBM never demonstrated 40,000 Linux virtual machines on a single VM system; it was David Boyes of Sine Nomine, who also recently ported OpenSolaris to VM.

Also, there’s no such thing as pSeries Unix in the marketing nomenclature any more; it’s now Power Systems, whose virtualization now supports AIX aka IBM “Unix”, System i or IBM i to use the modern vernacular, and Linux on Power.

Wikipedia is a pretty decent source for information on mainframe virtualization, right up until VM/XA, where there are some things that need correcting; I just have not had the time yet.

Oh yeah, by the way: while 2TB of memory on a mainframe gives pretty impressive virtualization capabilities, my favorite anecdote, and it’s true because I did it, was back in 1983 at Chemical Bank in New York. We virtualized a complete, production, high-availability, online credit card authorization system by adding just 4MB of memory, boosting the total system memory to a whopping 12MB! Try running any Intel hypervisor or operating system in just 12MB of memory these days; a great example of how efficient mainframe virtualization is!

 

2008 IBM Power Systems Technical University featuring AIX and Linux

Yep, it’s a mouthful. I’ve just been booking some events and presentations for later in the year, and this one, which I had initially hoped to attend, clashes with another, so now I can’t.

However, in case the snappy new title passed you by, it is still the excellent IBM technical conference it used to be when it was the IBM System p, AIX and Linux Technical University. It runs 4.5 days, from 8 – 12 September in Chicago, and offers an agenda that includes more than 150 knowledge-packed sessions and hands-on training delivered by top IBM developers and Power Systems experts.

Since the “IBM i” conference is running alongside, you can choose to attend sessions in either event. Sadly I couldn’t find a link for the conference abstracts, but there is more detail online here.

Power Systems and SOA Synergy

One of the things I pushed for when I first joined Power Systems (then System p) was for the IBM Redbooks to focus more on software stacks, and to relate how the Power Systems hardware can be exploited to deliver a more extensive, easier-to-use and more efficient stack than many scale-out solutions.

Scott Vetter, ITSO Austin project lead, who I first worked with back in probably 1992 in Poughkeepsie, and the Austin based ITSO team, including Monte Poppe from our System Test team, who has recently been focusing on SAP configurations, have just published a new IBM Redbook.

The Redbook, Power Systems and SOA Synergy, SG24-7607, is available for free download from the Redbooks abstract page here.

The book was written by systems people, and will be useful to systems people. It contains a useful summary and overview of SOA applications, ESBs, WebSphere etc., as well as some examples of how and what you can use Power Systems for, including things like WPARs in AIX.

RedMonk IT Management Podcast #10 thoughts

I’ve been working on slides this afternoon for a couple of projects, and wondering why producing slides hasn’t really gotten any easier in the 20 years since Freelance under DOS. Why is it I’ve got a 22-inch flatscreen monitor as an extended desktop, and I’m using a TrackPoint and mouse to move things around, waiting for Windows to move pixel by pixel…

Anyway, I clicked on the Libsyn link for the RedMonk IT Management Podcast #10 from back in April for some background noise. In the first 20 minutes or so, Coté and John get into some interesting discussion about Power Systems, especially in relation to some projects John’s working on. As they joke and laugh their way through an easy discussion, they get a bit confused about naming and training.

First, the servers are called IBM Power Systems, or Power. The servers span from blades to high-end scalable monster servers. They all use RISC chips implementing the PowerPC instruction set architecture. Formerly there had been two versions of the same servers, System p and System i.

Three operating systems can run natively on Power Systems: AIX, IBM i (formerly i5/OS and OS/400) and Linux. You can run these concurrently in any combination using the native virtualization, PowerVM. Amongst the features of PowerVM is the ability to create logical partitions (LPARs). These are a hardware implementation, a hardware-protected Type-1 hypervisor. So, it’s like VMware, but not at all. You can get more on this in this white paper. For a longer read, see the IBM Systems Software Information Center.
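The plumbing is scriptable, too. Partitions are typically defined through the Hardware Management Console (HMC), whose command line can be driven over ssh. Here’s a minimal sketch; the HMC host name and managed system name are hypothetical, and the profile attribute names are from my memory of the HMC docs, so verify them against the command reference for your release:

```python
import subprocess

# Hypothetical HMC host and managed system names. The HMC exposes a
# restricted shell over ssh, so its lssyscfg/mksyscfg commands can be
# scripted remotely like this.
HMC = "hscroot@hmc.example.com"
MANAGED_SYSTEM = "Server-9117"

def hmc(command):
    """Run one command on the HMC CLI over ssh and return its output."""
    result = subprocess.run(["ssh", HMC, command],
                            capture_output=True, text=True, check=True)
    return result.stdout

# List the partitions currently defined on the managed system.
print(hmc(f"lssyscfg -r lpar -m {MANAGED_SYSTEM} -F name,state"))

# Define a new AIX/Linux partition with a dedicated-processor profile.
# Attribute names are from memory; check your HMC release's docs.
attrs = ("name=demo_lpar,profile_name=default,lpar_env=aixlinux,"
         "min_mem=1024,desired_mem=4096,max_mem=8192,"
         "min_procs=1,desired_procs=2,max_procs=4,proc_mode=ded")
hmc(f'mksyscfg -r lpar -m {MANAGED_SYSTEM} -i "{attrs}"')
```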

John then discussed the need for training and the complexity of setting up a Power System. Sure, if you want to run a highly flexible, dynamically configurable, highly virtualized server, then you need to do training. Look at the massive market for Microsoft Windows, VMware and Cisco Networking certifications. Is there any question that running complex systems would require similar skills and training?

Of course, John would say that though, as someone who makes a living doing training and consulting, and obviously has a great deal of experience monitoring and managing systems.

However, many of our customers don’t have such a need; they do trust the tools and will configure and run systems without 4-6 months of training. Our autonomic computing may not have achieved everything we envisaged, but it has made a significant difference. You can use the System Config tool at order time, either alone or with your business partner or IBMer, do the definition for the system, and have it installed, provisioned and up and running within half a day.

When I first started in Power Systems, I didn’t take any classes and was not proficient in AIX or anything else Power related. I was able to get a server up and running from scratch, and get WebSphere running business applications, having read a couple of Redbooks. Monitoring and debugging would have taken more time, another book. Clearly becoming an expert always takes longer; see the Wikipedia definition of expert.

ps. John, if you drop out of the sky from 25k ft, it doesn’t matter if the flight was a mile or a thousand miles… you’ll hit the ground at the same speed 😉

pps. Coté, I assume your exciting editing session on episode 11 wasn’t so exciting…

ppps. 15 minutes on travel in Episode #11; time for a RedMonk Travel Podcast

It takes a team – April Power Systems Announcements

I’ve had a few emails asking me if I was going to write a blog entry on this month’s announcements, and to be honest I wasn’t. They are an impressive list of product, branding and customer announcements. I didn’t have anything to do with them; given I’m no longer asked to do marketing/sales-type presentations, I picked that time to go do the Machu Picchu/Inca Trail trip in Peru.

The April announcements though were a credit to teamwork across the even more global IBM. Core processor and server development teams in Austin and Rochester worked with domain specialists in Poughkeepsie and Boeblingen. On top of this were the software development and test teams in India, China and an ever-increasing number of places.

The new UNIX enterprise server, the Power™ 595, is an impressive beast if the charts are anything to go by. I’m hoping to get Nancy to take me across the building to the test bring-up for an up-close and personal look sometime this week. The new POWER6 “Hydro-Cluster” supercomputer, the Power 575, is very impressive: a super-dense system with a unique in-rack water-cooling system and 448 processor cores per rack. Apparently it offers users nearly five times the performance and more than three times the energy efficiency of its POWER5+™-based predecessor, with the POWER6 processors running at an industry-busting clock speed of up to 5 GHz.

These super-dense systems are starting to become a really interesting value prop. On Friday I got a link to the IBM.COM public website that included a video on our iDataPlex offering. It was there Saturday and has gone today, but it was there, as this search in the current Google index shows. The video doesn’t show any technical details, but does give an interesting insight into this x86-based, super-dense, Internet-scale behemoth of a server. I was hoping there would be other public comments or blog entries I could leech off for discussion points, but the only search results return job postings 😉

Anyone go to the iDataPlex session at IMPACT 2008 and want to comment?

On Power Systems and Security

One of the topics I’m trying to close on at the moment is Power Systems security. I have my views on where I think we need to be, where the emerging technology challenges are, what the industry drivers are (yours and ours), and the competitive pressures.

If you want to comment or email me with your thoughts on Power Systems security, I’d like to hear them. What’s important, what’s not? Of course I’m interested in OS-related issues, whether AIX, i, or Linux on Power. I’m also interested in requirements that span all three, that need to apply across hardware and PowerVM.

Interested in mobility? Want your keys to move between systems with you? Not much good if you move the system but can’t read the data because you don’t have key authority. Is encryption in your Power Systems future? Is it OK to have it in software only, or as an offload engine, or does it need to run faster via acceleration? Do you have numbers, calculations on how many, what key sizes, etc.?

Let’s be clear though, we have plans and implementations in all these areas. What I’m interested in are your thoughts and requirements.

IBM Power p570 Datamation Enterprise Server of the Year 2008

On Feb. 12th, Datamation announced their product of the year awards; the IBM Power Systems p570 server won enterprise server of the year, up against the IBM System x3950 M2 server, the HP MediaSmart Server, and the Dell PowerEdge 2970.

Details on all the award winners are here.

Appliances, Stacks and software virtual machines

A couple of things from the “Monkmaster” this morning piqued my interest and deserved a post rather than a comment. First up was James’ post on “your Sons IBM“. James discusses a recent theme of his around stackless stacks and simplicity. Next up came a tweet link on CohesiveFT and their elastic server on demand.

These are very timely. I’ve been working on an effort here in Power Systems for the past couple of months with my ATSM, Meghna Paruthi, on our appliance strategy. These are, as always with me, one layer lower than the stuff James blogs on; I deal with plumbing. It’s a theme and topic I’ll return to a few times in the coming weeks as I’m just about to wrap up the effort. We are currently looking for some independent software vendors (ISVs) who already package their offerings in VMware or Microsoft virtual appliance formats and either would like to do something similar for Power Systems, or alternatively have tried it and don’t think it would work for Power Systems.

Simple, easy-to-use software appliances that can be quickly and easily deployed into PowerVM logical partitions have a lot of promise. I’d like to have a marketplace of stackless, semi-or-totally black-box systems that can be deployed easily and quickly into a partition and use existing capacity, or dynamic capacity upgrade on demand, to get the equivalent of cloud computing within a Power System. Given we can already run circa 200 logical partitions on a single machine, and are planning something in the region of 4x that for the p7-based servers with PowerVM, we need to do something about the infrastructure for creating, packaging, servicing, updating and managing them.

We’ve currently got six sorta-appliance projects in flight: one related to future datacenters, one with WebSphere XD, one with DB2, a couple around security, and some ideas on entry-level soft appliances.

So far, OVF wrappers around the Network Installation Manager, aka NIM, look like the way to go for AIX-based appliances, with similar processes for i5/OS and Linux on Power appliances. However, there are a number of related issues about packaging, licensing, and inter- and intra-appliance communication that I’m looking for some input on. So, if you are an ISV, or a startup, or even an independent contractor looking at how to package software for Power Systems, please feel free to post here, or email; I’d love to engage.
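For the non-NIM-literate, the OVF descriptor itself is just XML. Here’s a minimal sketch, in Python, of the kind of envelope I mean wrapping a NIM mksysb image; the file names are made up, a complete descriptor also needs DiskSection and VirtualHardwareSection entries, and exactly how a NIM resource would really be referenced is one of the open packaging questions mentioned above.

```python
from xml.sax.saxutils import escape

# Minimal sketch of an OVF-style descriptor for an AIX appliance image.
# The envelope shape follows the DMTF OVF 1.0 schema; names and sizes
# below are hypothetical.
def ovf_descriptor(appliance_name, image_file, image_size_bytes):
    name = escape(appliance_name)
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <References>
    <File ovf:id="file1" ovf:href="{escape(image_file)}"
          ovf:size="{image_size_bytes}"/>
  </References>
  <VirtualSystem ovf:id="{name}">
    <Info>AIX appliance packaged as a NIM mksysb image</Info>
    <Name>{name}</Name>
  </VirtualSystem>
</Envelope>
"""

if __name__ == "__main__":
    # Hypothetical DB2-on-AIX appliance with a 2GB mksysb image.
    print(ovf_descriptor("db2-appliance", "db2_aix61.mksysb", 2_147_483_648))
```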

IBM Software and Power Systems Roadshow

In September and October 2007, the IBM Software Group Competitive Project Office put on a short series of roadshows in North America and India to show some of the best aspects of IBM middleware running on Power Systems. It’s not an out-and-out marketing event, but one designed and presented by some solid technical folks.

They’ve announced the first set of dates for 2008, and the events start next week. Strangely, the workshop is listed on the Software/Linux web page, but it definitely covers AIX and Linux implementations. Here are the dates and locations; I hope some of you new to Power or interested in IBM middleware exploitation on Power can make it along.

Tampa, FL February 21, 2008
Charlotte, NC February 26, 2008
Philadelphia, PA February 28, 2008
Mohegan Sun, CT March 6, 2008
Hazelwood, MO March 11, 2008
Minneapolis, MN March 13, 2008

Catching up on IBM Redbooks

Trying to find a reference book on AIX 6, I looked at the latest list of Redbooks for Power Systems; these are the ones listed in the RSS feed since the start of October 2007.

Continue reading ‘Catching up on IBM Redbooks’

Last week’s announcement recap, Power6 blades and AIX

Thanks to the folks over at the “Power Architecture zone editors’ notebook” blog, here is their summary of last week’s announcements.

Get yours today: Listen UNIX users — the newly available IBM BladeCenter JS22 with Power6 is what you’ve been waiting for. Couple the JS22’s Power6 processor technology with the built-in Advanced Power Virtualization and you’ve got a lot of Power concentrated in a compact container (which can also save you on space and energy costs). It comes loaded with two 4GHz dual-core processors, an optional hard drive, and an Ethernet controller; it supports as much as 32GB of memory; the first shipments are configured for the BladeCenter H and BladeCenter HT chassis. And its virtualization features make it really special (see following entry for more on this).

And what’s a new blade without a complementary OS: Targeted for Friday, November 9, 2007, the release of AIX 6 from the beta bin should provide users improved security management and virtualization features that take advantage of a hypervisor included in the Power6 processor so you can get 100 percent application up time. The Workload Partition Manager should let sysadmins create multiple partitions (each with customized memory settings for users and application workloads) and the Live Application Mobility feature can shift applications from one server to another on the fly (and they keep running while migrating). Then there’s the Security Expert which lets users control more than 300 security settings (role-based access to applications, user-based authentication, etc.). These OS utilities should work well with the Power6 Live Partition Mobility hypervisor which can move an entire OS (AIX, RHEL, and SLES) and its workloads from one server to another while they are running. (In fact, you can preview AIX 6 here if you can’t wait until Friday.)
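If you want a feel for how lightweight those workload partitions are next to full logical partitions, this sketch shows the AIX 6 commands to create and start one, wrapped in Python purely for consistency with my other examples (on a real system you’d just type them at the shell); the WPAR name is hypothetical, and it must run as root on an AIX 6 host.

```python
import subprocess

# Sketch: create, start and inspect an AIX 6 system WPAR.
# Must run as root on an AIX 6 host; the WPAR name is hypothetical.
WPAR = "webtest"

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["mkwpar", "-n", WPAR])   # create a system WPAR: filesystems, users, network
run(["startwpar", WPAR])      # boot it; seconds rather than minutes
run(["lswpar"])               # list WPARs and their states
# For an interactive console into the WPAR, use: clogin webtest
```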


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and member of the IBM Academy of Technology. I am a Fellow of the British Computer Society (bcs.org). I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
