Archive for the 'zSeries' Category

I left IBM in 2008, last week I said goodbye

I decided to post this over on my main blog as it was more to do with the people and community than about IBM. It contains some great references and links to content.

IBM 3090 Training

Between 2001 and 2004, I had an office in the home of the mainframes, IBM Poughkeepsie, in Building 705. As a Brit’, it wasn’t my natural home; and since I wasn’t a developer or a designer, but a software architect focusing on software and application architectures, it never really felt like home.

IBM Library number ZZ25-6897.

One day, on my way to lunch at the in-house cafeteria, I walked by a room whose door was always closed. This day the door was open, and there was a buzz of people coming from it. A sign outside said “Library closing, Take anything you can use!”

I have some great books, a few of which I plan to scan, donating the output to either the Computer History Museum or the Internet Archive.

Among the more fun things I grabbed were a few IBM training laserdiscs. I had no idea what I’d do with them; I had never owned a laserdisc player. I just thought they’d look good sitting on my bookshelf, especially since they are the same physical size as vinyl albums.

Now 16 years on, I’ve spent the last 4 years digitising my entire vinyl collection, some 2,700 albums in total. One of my main focus areas has been the music of jazz producer Creed Taylor. One of the side effects is that I’ve created a new website, ctproduced.com. In record collecting circles, I’m apparently a completionist: I try to buy everything.

And so it was I started acquiring laserdiscs by Creed Taylor. It took a while, and I’m still missing Blues At Bradleys by Charles Fambrough. While I’ve not got around to writing about them in any detail, you can find them at the bottom of the entry here.

What I had left were the IBM laserdiscs. On Monday I popped the first laserdisc in; it was for the IBM 3090 Processor Complex. It was a fascinating throwback for me. I’d worked with IBM Kingston on a number of firmware and software availability issues, both as a customer and later as an IBM Senior Software Engineer.

I hope you find the video fascinating. The IBM 3090 Processor was, to the best of my knowledge, the last of the real “mainframes”. Sure, we still have IBM processor architecture machines that are compatible with the 3090 and earlier architectures. However, the newer systems, more powerful and more efficient, are typically single-frame systems. A parallel sysplex can support multiple mainframes, but it doesn’t require them. Enjoy!

The Zowe Open Source Project

This was announced today at SHARE St Louis. A great new effort and opportunity to integrate open source technologies and applications into the IBM z/OS operating system. Zowe, as the article says, is

a framework of software services that offers industry standard REST APIs, API catalog, extensible command line interface and web-based UI framework

They’ve also put together the zowe.org community for architects, developers and designers to share best practices. It’s not clear what the legal relationship is between the Open Mainframe Project and Zowe, but Zowe is listed as a project, so that’s great news in terms of strategy and direction. As of writing, the Open Mainframe Project Zowe web page has the best detail on the project.
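To make that concrete, here is a minimal sketch of what consuming one of those industry standard REST APIs might look like from Python. The gateway host, port and route below are hypothetical; on a real installation, the API catalog documents the actual endpoints.

```python
# A minimal sketch: querying a REST API exposed through a Zowe-style
# API gateway. The host name, port and path are hypothetical; check
# the API catalog on your own installation for the real routes.
import requests

GATEWAY = "https://mainframe.example.com:7554"  # hypothetical gateway

def list_jobs(owner: str, user: str, password: str):
    """Return the list of jobs for `owner` as parsed JSON."""
    resp = requests.get(
        f"{GATEWAY}/api/v1/jobs",      # hypothetical route
        params={"owner": owner},
        auth=(user, password),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for job in list_jobs("IBMUSER", "IBMUSER", "secret"):
        print(job.get("jobName"), job.get("status"))
```

The appeal of the framework is exactly this: a web developer with no ISPF or JCL background can drive z/OS services with the same tools they use everywhere else.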

Zowe appears to be a collaboration between IBM and a number of companies, including Rocket Software. Rocket has a broad portfolio of software and systems that integrate with IBM Systems; they also have my friend, former colleague and sparring partner at IBM, Jim Porell on staff.

Open Distributed Challenges – Words Matter

I had an interesting exchange with Dez Blanchfield from Australia on Twitter recently. At the time, based on his tweets, I assumed Dez was an IBM employee. He isn’t, and although our paths crossed briefly at the company in 2007, as far as I’m aware we never met.

The subject was open vs open source. Any longtime readers will know that’s part of what drove me to join IBM in 1986, to push back on the closing of doors, and help knock down walls in IBM openness.

At the end of our Twitter exchange (the first three tweets are included above), I promised to track down one of my earlier papers. As far as I recall, and without going through piles of hard copy paper in storage, this one was formally published by IBM US using a similar name and pretty much identical content, probably in the Spring of ’96.

It is still important to differentiate between de jure and de facto standards. Open source creates new de facto standards every day, through wide adoption and implementation of that open source. While systems move much more quickly these days, at Internet speed, there is still a robust need for de jure standards: those that are legally, internationally and commonly recognised, whether or not they were first implemented through open source. Most technology standards these days start out as open source implementations, as that’s the best way to get them through standards organizations.

The PDF presented here is original, unedited, just converted to PDF from Lotus Word Pro.

Lotus Word Pro, and its predecessor, Ami Pro, are great examples of de facto standards, especially inside IBM. Following the rise of Microsoft Word and MS Office, Lotus products on the desktop effectively disappeared. Since, even inside IBM, the Lotus source code was never available, not only were the products only a de facto standard, they were never open source. While in the post-Lotus desktop software period considerable effort has been put into reverse engineering the file formats, and some free and chargeable converters exist, almost all of them can recover the text, but most do a poor job of formatting.

For that reason, I bought a used IBM Thinkpad T42 with Windows XP and Lotus Smartsuite, and I still have a licensed copy of Adobe Acrobat to create PDFs. Words matter: open source, open, and open standards are all great. As always, understand the limitations of each.

There are a load of my newer white papers in the ‘wayback’ machine; if you have any problems finding them, let me know and I’ll jump-start the Thinkpad T42.

Mainframe Assembler Language 2.0

Those that still follow my blog from my days working in the IBM mainframe arena might be interested in the following.

One of the stalwarts of software at IBM, and self-described grand poobah of High Level Assembler, John R. Ehrman has a 1,300-page 2.0 version of his book “Assembler Language Programming for IBM System z™ Servers”, and it’s available in PDF form here. There is a wealth of other assembler resources that John has contributed here on ibm.com.

(My) Influential Women in Tech

Taking some time out of work in the technical, software, computer industry has been really helpful in giving my brain time to sift through the required, the necessary, the nice, and the pointless things that I’ve been involved in over 41 years in technology.

Given that today is International Women’s Day 2016, and numerous tweets have flown by celebrating women, many of them, given the people I follow, women in technology, I thought I’d take a minute to note some of the great women in tech I had the opportunity to work with.

I was fortunate in that I spent much of my career at IBM. There is no doubt that IBM was a progressive employer on all fronts: women, minorities, the physically challenged; and that continues today with their unrelenting endorsement of the LGBT community. I never personally met or worked with current IBM CEO Ginni Rometty; she, like many that I did have the opportunity to work with, started out in Systems Engineering and moved into management. Those that I worked with included Barbara McDuffie, Leslie Wilkes, Linda Sanford and many others.

Among those in management at IBM who were most influential was Anona Amis at IBM UK. Anona was my manager in 1989-1990, at a time when I was frustrated and lacking direction after joining IBM two years earlier with high hopes of doing important things. Anona, in the period of a year, taught me both how to value my contributions and how to make more valuable contributions. She was one of what I grew to learn was the backbone of IBM: professional managers.

My four women of tech may, at some time or other, have been managers. That, though, wasn’t why I was inspired by them.

Susan Malika: I met Sue initially through the CICS product group, when we were first looking at ways to interface a web server to the CICS Transaction Monitor. Sue and the team already had a prototype connector implemented as a CGI. Over the coming years, I was influenced by Sue in a number of fields, especially in data interchange and her work on XML. Sue is still active in tech.

Peggy Zagelow: I’d always been pretty dismissive of databases; apart from a brief period with SQL/DS, I’d always managed fine without one. Early on in the days of evangelizing Java, I was routed to the IBM Santa Teresa lab, on an ad hoc query from Peggy about using Java as a procedures language for DB2. Her enthusiasm and dogma about the structured, relational database, as well as her ability to code eloquently in Assembler, were an inspiration. We later wrote a paper together, still available online [here]. Peggy is also still active in the tech sector at IBM.

Donna Dillenberger: Sometime in 1999, Donna and the then President of the IBM Academy of Technology, Ian Brackenbury, came to the IBM Bedfont office to discuss some ideas I had on making the Java Virtual Machine viable on large scale mainframe servers. Donna translated a group of unconnected ideas and concepts I sketched out on a whiteboard into the “Scalable JVM”. That evolution of the JVM was a key stepping stone in the IBM evolution of Java. I’m pleased to see Donna was appointed an IBM Fellow in 2015. The paper on the JVM is here. (1)

Gerry Hackett: Finally, but most importantly, Geraldine, aka Gerry, Hackett. Gerry and I met when she was a first-line development manager in the IBM Virtual Machine development laboratory in Endicott, New York, sometime around 1985. While Gerry would normally fall into the category of management, she is most steadfastly still an amazing technologist. Some years later I had the [dubious] pleasure of “flipping slides” for her as Gerry presented IBM strategy. Aside: “Today’s generation will never understand the tension between a speaker and a slide turner.” Today, Gerry is a Vice President at Dell. She recruited me to work at Dell in 2009, and under her leadership the firmware and embedded management team have made steady progress and implemented some great ideas. Gerry has been a longtime advocate for women in technology, a career mentor, and a fantastic role model.

Importantly, what all these women demonstrated, by the “bucketload”, was quiet technological confidence: the ability to see, deliver and celebrate great ideas and great people. They were quiet, unlike their male peers, not in achievement but in approach. This is why we need more women in technology: not because they are women, but because technical companies, and their products, will not be as good without them.

(1). Edited to link to correct Dillenberger et al paper.

An old man and money

I was just sent a link to this ConnectedPlanet article by Susana Schwartz, and, given my background in mainframes and x86, asked what I thought of the central premise. The analogy that came to mind almost immediately was too good not to share.

The question the article was addressing was “will the IBM zEnterprise make mainframes sexy again?” My analogy: Hugh Hefner! Do you think Hugh Hefner is sexy? He has all the money, is a great revenue generator, and has some good products, but mostly, while they do the same stuff they’ve always done, they are looking a bit long in the tooth. What’s interesting is what surrounds Hugh. Same with the zEnterprise, only there are much better ways to get that smart technology.

After a few false starts with a Google search for “old man and young girls” – that will have set off some alarm bells in Dell IT – I set Google SafeSearch to strict and searched for “old man with young women”, and here we have it, my analogy for the IBM zEnterprise.

Image courtesy and copyright of thesun.co.uk

Do you want Hugh Hefner in the middle? He’s worth loads of money…

Any similarity between Hugh Hefner and an IBM mainframe is entirely coincidental; after all, we all know mainframes are older and come from New York. Hugh is from Chicago.

Feel free to use the analogy to argue either way… just be careful to keep the discussion work safe. I’ve still got that J3000 spoof press release somewhere as well.

Appliances – Good, bad or virtual ?

So, in another prime example of “Why do analysts’ blogs make it so hard to have a conversation?”, Gordon Haff of Illuminata today tweeted a link to a new blog post of his on appliances. No comments allowed, no trackbacks provided.

He takes Chuck Hollis’ (EMC) post and opines various positions on it. It’s not clear what the notion of “big appliance” is as Chuck uses it. Personally, I think he’s talking about solutions. Yes, I know it’s a fine line, but a large all-purpose data mining solution with its own storage, own server, own console, etc. is no more an appliance than a kitchen is. The kitchen will contain appliances, but it is not one itself. If that’s not what Chuck is describing, then his post has some confusion; very few organizations will have a large number of these “solutions”.

On the generally accepted view of appliances, I think both Gordon and Chuck are being a little naive when they think that all compute appliances can be made virtual and run on shared resource machines.

While at IBM I spent a lot of time on, and learned some valuable lessons about, appliances. I was looking at the potential for the first generation of IBM-designed WebSphere DataPower appliances. At first, it seemed to me, even 3 years ago, that turning them into a virtual appliance would be a good idea. However, I’d made the same mistake that Hollis and Haff make. They assume that the type of processing done in an appliance can be transparently replaced by the onward march of Moore’s Law on Intel and IBM Power processors.

The same can be said for most appliances I’ve looked at. They have unique hardware designs, which often include numerous specialized processing functions, such as encryption, key management and even environmental monitoring. The real value-add of appliances, though, is that they are designed with a very specific market opportunity in mind. That design requires complex workload analysis, reviewing the balance between general purpose compute, graphics, security, I/O and much more, and producing a balanced design and, most importantly, a complete user experience to support it. That’s often the key.

Some appliances offer the sort of hardware based security and tamper protection that can never be replaced by general purpose machines.

Yes, Hollis and Haff make a fair point that these appliances need separate management, but the real point is that many of these appliances need NO management at all. You set them up, then run them. Because the workload is tested and integrated, the software rarely, if ever, fails. Since the hardware isn’t generally extensible, or as Chuck would have it, you are locked into what you buy, updating drivers and introducing incompatibility isn’t an issue as it is with most general purpose servers.

As for trading one headache for another, while it’s a valid point, my experience so far with live migration and pools of virtual servers, network switches, SAN setup etc. is that there too you are trading one headache for another. While in a limited fashion it’s fairly straightforward to do live migration of a virtual workload from one system to another, doing it at scale, which is what is required if you’ve reached the “headache” point that Chuck is positing, is far from simple.

Chuck closes his blog entry with:

Will we see a best-of-both-worlds approach in the future?

Well, I’d say that was more than likely; in fact it’s happening and has been for a while. The beauty of an appliance is that the end user is not exposed to the internal workings. They don’t have to worry about most configuration options and setup, management is often minimised or eliminated, and many appliances today offer “phone home” like features for upgrade and maintenance. I know; we build many of them here at Dell for our customers, including EMC, Google etc.

One direction we are likely to see is that, in the same current form factor, an appliance will become fault tolerant by replicating key parts of the hardware, virtualizing the appliance, and running multiple copies of the appliance workload within a single physical appliance, all once again delivering those workload and deployment specific features and functions. This in turn reduces the number of physical appliances a customer will need. So, the best of both worlds, although I suspect that’s not what Chuck was hinting at.
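For what it’s worth, here is a toy Python sketch of the supervision pattern I’m describing: several virtualized copies of the appliance workload in one box, with a watchdog restarting any copy that dies. The workload binary path is invented, and a real appliance would manage VMs or containers rather than bare processes.

```python
# A minimal sketch of a fault-tolerant appliance supervisor: run N
# replicated copies of the workload and restart any that exit. The
# command below is hypothetical.
import subprocess
import time

WORKLOAD_CMD = ["/opt/appliance/bin/workload"]  # hypothetical binary
REPLICAS = 3

def start() -> subprocess.Popen:
    return subprocess.Popen(WORKLOAD_CMD)

replicas = [start() for _ in range(REPLICAS)]
while True:
    for i, proc in enumerate(replicas):
        if proc.poll() is not None:  # replica has exited
            print(f"replica {i} died (rc={proc.returncode}); restarting")
            replicas[i] = start()
    time.sleep(5)  # simple health-check interval
```

The point isn’t the mechanics; it’s that the end user never sees any of this, which is exactly what makes it an appliance.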

While there is definitely a market for virtual software stacks, complete application and OS instances, presuming that you can move all hardware appliances to this model is missing the point.

Let’s not forget, SANs are often just another form of appliance, as are TOR/EOR network switches and things like the Cisco Nexus. Haff says that appliances have been around since the late 1990s; well, at least as far as I can recall, in the category of “big appliances” the IBM Parallel Query Server, which ran a customized mainframe DB2 workload and attached to an IBM S/390 Enterprise Server, was around in the early 1990s.

Before that, many devices were in fact sold as appliances; they were just not called that, but by today’s definition that’s exactly what they were. My all-time favorite was the IBM 3704, part of the IBM 3705 communications controller family. The 3704 was all about integrated function and a unique user experience, with, at the time (1976), an almost space-age touch panel user interface.

70% of something is better than..

70% of nothing at all. [With apologies to Double Exposure]

As I’ve said before, I’m an avid reader of Robin Bloor’s Have Mac Will Blog. I also follow him on Twitter, where he is @robinbloor. Sadly his blog doesn’t accept trackbacks, but I’ll leave a short comment so he gets to see this.

His latest blog entry, CA: Dancing with dinosaurs, comes across as a bit of a puff piece in support of Computer Associates.

On the CA involvement with mainframes, Bloor seems to have overlooked the fact that CA has John Swainson as CEO and Don Ferguson as Chief Architect. John was previously an IBM VP, Don an IBM Fellow, and both were variously in charge of significant IBM Software Group projects/products.

Personally, I’d like to see someone from IBM find/quote a source for that 70% data number. It’s been used for years and years with little or no foundation. Jim Porell quoted this number in some of his excellent and more recent System z strategy presentations; it dates from, I think, 1995.

Secondly, I’d guess it depends what you call business critical data these days. If Google collapsed or had their data centers in Silicon Valley interrupted, with the loss of Google Docs, YouTube, Google Search and Maps, and similarly Microsoft and/or Yahoo went offline… I’d suspect the whole notion that 70% of business critical data resides on mainframes would be laughable. Yes, a large percentage of purely text-based transactional data is on mainframes, and yes, the value of those transactions exceeds any other platform, but that is far from 70% of anything much these days. Increasingly, startups, SMEs and Web 2.0 businesses don’t use mainframes for even their text-based transactional data.

Finally, on the Bloor/CA assertion that installing mainframe software is arcane: that may be, but here I’m still in full agreement with the mainframe folks, especially if you are talking about real mainframe software as IBM would have it, installed by SMP/E. One of my few claims to fame was reverse engineering key parts of the IBM mainframe VM service process, nearly 20 years ago now. SMP/E was then, and still is, years ahead of anything in the Windows and UNIX space for pre-req, co-req and if-req processing, and for the ability to build and maintain multiple non-trivial systems from a single data store using binary-only program objects. CA are not the first to spot the need to provide an interface other than ISPF and JCL to build these job streams.
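For those who’ve never seen SMP/E, here is a toy Python sketch of the kind of pre-req/co-req checking it does when you APPLY service. The fix names and relationships are invented purely for illustration; real SMP/E does this, plus if-req processing and much more, against the actual system inventory.

```python
# A toy sketch of pre-req/co-req checking in the style of SMP/E APPLY.
# PTF numbers and relationships are invented for illustration.
applied = {"UQ11111", "UQ22222"}  # service already on the system

ptfs = {
    "UQ33333": {"prereq": {"UQ11111"},   # must already be applied
                "coreq":  {"UQ44444"}},  # must be applied together
    "UQ44444": {"prereq": set(), "coreq": {"UQ33333"}},
}

def can_apply(candidates: set) -> list:
    """Return a list of errors; an empty list means the set is applyable."""
    errors = []
    for ptf in candidates:
        info = ptfs[ptf]
        for p in info["prereq"]:
            if p not in applied:
                errors.append(f"{ptf}: missing pre-req {p}")
        for c in info["coreq"]:
            if c not in applied and c not in candidates:
                errors.append(f"{ptf}: co-req {c} not in apply set")
    return errors

print(can_apply({"UQ33333"}))             # fails: co-req UQ44444 absent
print(can_apply({"UQ33333", "UQ44444"}))  # succeeds: []
```

Trivial here, but doing it reliably for thousands of interdependent fixes, across multiple systems, from binary-only program objects, is what the Windows and UNIX worlds still struggle with.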

But really, continuing to label mainframes as dinosaurs is so 1990s; it’s like describing Lance Armstrong as a push-bike rider.

Simon Perry, Principal Associate Analyst – Sustainability, Quocirca, has written a similar piece with a little more detail entitled Mainframe management gets its swagger.

IBM Big Box quandary

In another follow-up from EMC World, the last session I went to was “EMC System z, z/OS, z/Linux and z/VM”. I thought it might be useful to hear what people were doing in the mainframe space, although it’s largely unrelated to my current job. It was almost 10 years to the day since I was at IBM writing the z/Linux strategy, hearing about early successes etc., and, strangely, current EMC CTO Jeff Nick and I were engaged in vigorous debate about implementation details of z/Linux the night before we went and told SAP about IBM’s plans.

The EMC World session demonstrated that, as much as things change, they stay the same. It also reminded me how borked the IT industry is, in that we mostly force customers to choose by pricing rather than function. 10-12 years ago z/Linux on the mainframe was all about giving customers new function, a new way to exploit the technology they’d already invested in. It was of course also there to further establish the mainframe’s role as a server consolidation platform through virtualization and high levels of utilization. (1)

What I heard were two conflicting and confusing stories, or at least they should be for IBM. The first was a customer who was moving all his Oracle workloads from a large IBM Power Systems server to z/Linux on the mainframe. Why? Because the licensing on the IBM Power server was too expensive. Using z/Linux and the Integrated Facility for Linux (IFL) allows organizations to do a cost avoidance exercise. Processor capacity on the IFL doesn’t count towards the total installed general processor capacity, and hence doesn’t bump up the overall software licensing costs for all the other users. It’s a complex discussion and that wasn’t the purpose of this post, so I’ll leave it at that.

This might be considered a win for IBM, but actually it was a loss. It’s also a loss for the customer. IBM lost because the processing was being moved from its growth platform, IBM Power Systems, to the legacy System z. It’s good for z since it consolidates its hold in that organization, or probably does. Once the customer has done the migration and conversion, it will be interesting to see how they feel the performance compares. IBM often refers to the IFL and its close relatives, the zIIP and zAAP, as specialty engines, giving the impression that they perform faster than the normal System z processors. That’s largely an urban myth, though, since these “specialty” engines really only deliver the same performance; they are just measured, monitored and priced differently.
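Since I mentioned the cost avoidance exercise above, here is a back-of-envelope sketch of the arithmetic. All the numbers are invented purely for illustration; real mainframe software pricing, with its MSU tiers and sub-capacity options, is far more involved.

```python
# A back-of-envelope sketch of IFL cost avoidance. Every number below
# is hypothetical, for illustration only.
installed_general_msu = 1_000  # capacity that software licenses are charged on
linux_workload_msu    = 200    # new Linux workload to be added
price_per_msu         = 100    # hypothetical $/MSU/month across licensed products

# Option 1: run Linux on general purpose processors -> raises the licensed base
cost_on_general = (installed_general_msu + linux_workload_msu) * price_per_msu

# Option 2: run Linux on IFLs -> IFL capacity doesn't count toward the licensed base
cost_on_ifl = installed_general_msu * price_per_msu

print(f"on general CPs: ${cost_on_general:,}/month")
print(f"on IFLs:        ${cost_on_ifl:,}/month")
print(f"avoided:        ${cost_on_general - cost_on_ifl:,}/month")
```

The engineering is identical either way; the entire incentive is in how the capacity is counted. Which is exactly my point about pricing driving customer behaviour rather than function.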

The customer lost because they’ve spent time and effort to move from one architecture to another, really only to avoid software and server pricing issues. While the System z folks will argue the benefits of their platform, and I’m not about to “dis” them, the IBM Power server can actually deliver a good enough implementation as to make the difference largely irrelevant.

The second conflict I heard about was from EMC themselves. The second main topic of the session was a discussion about moving some of the EMC Symmetrix products off the mainframe, as customers have reported that they are using too much mainframe capacity to run. The guys from EMC were thinking of moving the function of the products to commodity x86 processors and then linking those via high speed networking into the mainframe. This would move the function out of band and save mainframe processor cycles, which in turn would avoid an upgrade, which in turn would avoid bumping up the software costs for all users.

I was surprised how quickly I interjected and started talking about WLM/SRM enclaves and moving the EMC apps to run on z/Linux, etc. This surely makes much more sense.

I was left, though, with a definite impression that there are still hard times ahead for IBM in large non-x86 virtualized servers. Not that they are not great pieces of engineering; they are. But getting to grips with software pricing once and for all should really be their prime focus, not a secondary or tertiary one. We were working towards pay-per-use once before; time to revisit, methinks.

(1) Spot the irony of this statement given the preceding “Nano, Nano” post!

Back in the day – way back

I suggested to @adamclyde that we take a twitter conversation about the gray area between personal and corporate blogging offline, into email. In my response to him, like some “grumpy old man“, I started by recalling the good old days when my URLs were emea.ibm.com/(something), then ibm.com/s390/corner and later ibm.com/servers/corner.

Later I went looking and found some of my web pages from 2000 on the Internet Archive. I was even more delighted to find they had some of my old presentations. I didn’t check through all of them, but my V2 Corner is here. I’ve taken one of my better presentations from the Internet Archive and posted it on slideshare.

Enterprise Workstation Management - From Chaos to Order

The PDF version doesn’t have all the overlay colors right, and some of the embedded graphics are missing, but it’s still worth looking through for both content and style.


If Google can celebrate its 10th anniversary by bringing back its 2001 index, well, how about letting me get away with reposting a presentation from 1996 that originated in 1989! The presentation has its origins in 1989 as a Lotus Freelance presentation printed on real overheads via a plotter. It covers the management of workstations and PCs in corporate environments.

This version is dated from June 1996 and was recovered from the Internet Archive. I got the summary slide wrong, but not by much, as we move to what some are calling Cloud Clients.

Most Mainframe MIPS Installs are Linux

Over on the ibmeye blog, Greg makes this observation: “I found this surprising (if true): More than half the mainframe MIPS IBM sells are Linux” and “That seems to go against the thrust of IBM’s marketing push.”

I have no idea if the numbers quoted are accurate, but I don’t see the inconsistency.

We’ve been on an Intel and general server consolidation drive for 15 years now. Back in the mid-90s it was much harder; we were trying to convince organizations to move their Unix workloads to OS/390, aka MVS, aka z/OS, using Unix System Services, but it was a tough sell. Even before that, a few of us, primarily in Europe, were driving to get customers to consolidate under-utilized and unreliable file servers to MVS or VM, using either LANRES (for Novell Netware) or the LAN File Services for MS and OS/2 LAN Servers.

I think the current trend to migrate to Linux on the mainframe is entirely consistent with organizations’ efforts to make the most of the environmental benefits of a large centralized server, along with the ease and openness of Linux. IBM has a massive internal effort, moving something like 3,500 servers.

Can you provide examples of where you think it’s inconsistent Greg?

Federal Reserve and Mainframes

Over on the Mainframe Executive blog, there is an open letter to the US Federal Reserve Bank, questioning the Fed’s apparent desire to move or switch their systems away from mainframes to distributed systems. Well, you would expect no less from the Mainframe Executive blog. I have a different take on why the Fed should not only keep their mainframe, but why they might want to move more work to it.

I worked on many of the early mainframe Internet applications. I did the high level design and oversaw the implementation of an Internet banking solution that the bank, Sun Microsystems and Microsoft had all failed to get to scale. Our design went from 3k users to, I believe by the end of 2 years in production, close to 990k users, without an upgrade and without a system outage. It was built on two mainframe systems outside the firewall, running as a sysplex. I also did a design review for a bank that had lost close to $60k from four accounts, with the back end on the mainframe and the mid-tiers and Internet servers distributed.

The point of this post, though, isn’t to gloat about my success, to be a ‘mainframe bigot’, or even to say the Fed should use the mainframe. In the Mainframe Executive piece they raise the usual specter of security; yes, security is a big deal for banks, even more so for the Fed. So yes, make a big deal of it.

However, the single most important thing to understand about building trusted computing systems isn’t that you provide a 100% secure environment in which applications, aka business transactions, run. It is that you can show who did what, when, and how. Auditing is much more important than security. If you believe you have a 100% secure system and you lose some money but can’t audit it, what do you do, shrug your shoulders and say “oh well, never mind”?

Auditing isn’t about just seeing that you have procedures in place. It is the ability to pick apart a debit transaction that was executed at 4:05pm along with 30,000 others, and show how that transaction was invoked, where from, under what security context, what ID, the originating network address and more. That might require looking through the logs of 7-10 distributed systems.

If, like the bank I did the design review for, you can’t show the correlation of events leading up to the execution of the transaction, and you don’t know for certain where the user entered the network, what ID they used, and how that security context was passed from one system to another, then you don’t have security, no matter what they say.
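To illustrate what I mean by picking apart a transaction, here is a minimal Python sketch of cross-system event correlation. The log formats, field names and correlation id are invented for illustration; the point of the bank story above is precisely that real distributed deployments rarely have such a convenient correlation id, which is why the audit fails.

```python
# A minimal sketch: reassemble the path one transaction took across
# several systems' logs. All log entries and ids are invented.
from datetime import datetime

logs = [  # (system, timestamp, correlation id, event)
    ("web-proxy", "16:04:58", "TX1234", "request from 203.0.113.7"),
    ("app-tier",  "16:04:59", "TX1234", "auth as user MCATHCART"),
    ("mainframe", "16:05:00", "TX1234", "debit $15,000 acct 0017"),
    ("app-tier",  "16:05:01", "TX9999", "unrelated event"),
]

def trace(correlation_id: str) -> None:
    """Print the ordered chain of events for one transaction."""
    events = [e for e in logs if e[2] == correlation_id]
    events.sort(key=lambda e: datetime.strptime(e[1], "%H:%M:%S"))
    for system, ts, _, event in events:
        print(f"{ts} {system:<10} {event}")

trace("TX1234")
```

If you can’t produce a chain like that, end to end, for any transaction on demand, you don’t have an audit, and therefore you don’t have security.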

When you are looking after the nation’s money, and despite the obvious current financial position of the US, budgets notwithstanding, I’d say that was pretty important. What does the Fed say?

I say “Show me the audit, show me the audit, show me the audit…” (repeat ad infinitum)

IBM’s new Enterprise Data Center vision

IBM announced today our new Enterprise Data Center vision. There are lots of links from the new ibm.com/datacenter web page, which split out into the various constituencies: virtualization, energy efficiency, security, business resiliency and IT service delivery.

To net it out from my perspective, though, there is a lot of good technology behind this, and an interesting direction summarized nicely starting on page 10 of the POV paper linked from the new data center page, or here.

What it lays out are the three main stages of adoption for the new data center: simplified, shared and dynamic. The Clabby Analytics paper, also linked from the new data center page or here, puts the three stages in a more consumable, practical tabular format.

They are really not new; many of our customers will have discussed them with us many times before. In fact, it’s no coincidence that the new Enterprise Data Center vision was launched the same day as the new IBM z10 mainframe. We started discussing and talking about these when I worked for Enterprise Systems in 1999, and we formally laid the groundwork in the on demand strategy in 2003. In fact, I see the Clabby paper has used the on demand operating environment block architecture to illustrate the service patterns. Who’d have guessed.

Simplify: reduce costs for infrastructure, operations and management

Share: for rapid deployment of infrastructure, at any scale

Dynamic: respond to new business requests across the company and beyond

However, the new Enterprise Data Center isn’t based on a mainframe, z10 or otherwise. It’s about a style of computing: how to build, migrate and exploit a modern data center. Power Systems has some unique functions in both the Share and Dynamic stages, like partition mobility, with lots more to come.

For some further insight into the new data center vision, take a look at the presentation linked off my On a Clear day post from December.

Funeral for a friend

Longtime friend, former IBM VM and LAN Systems Director, and now fellow Austin resident Art Olbert pointed me to this video. It’s the University of Manitoba holding a funeral procession for their mainframe system after some 47 years of service. Nothing on their web site says what they’ve replaced it with; I’ve emailed them and asked. Their web site is currently running on Apache on Linux after migrating from Solaris some time in 2005. As always, Slashdot covers this, with comments that range from the helpful to the absolutely bizarre.

Art is familiar with this type of stunt; Art is lovingly remembered for blowing up an IBM mainframe at the announcement of the IBM LAN Server in the 1990s. Sorry Art, couldn’t avoid mentioning it 🙂 Ahh, the good old days.


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and member of the IBM Academy of Technology. I am a Fellow of the British Computer Society (bcs.org). I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
