Archive for the 'mainframe' Category

Back to the future

This week Dell announced three major acquisitions: Wyse, Clerity Solutions, and Make Technologies. These acquisitions, once complete, will offer an awesome combination to move apps and customers to the cloud.

  • Wyse provides application virtualization capability which, in essence, will allow PC-based applications to run in the cloud and be accessed via thin clients and, increasingly, mobile devices like tablets.
  • Clerity delivers application modernization and re-hosting solutions and services. Clerity’s capabilities will enable Dell Services to help customers reduce the cost of transitioning business-critical applications and data from legacy computing systems and onto more modern architectures, including the cloud.
  • Make Technologies brings application modernization software and services that reduce the cost, risk and time required to re-engineer applications, helping companies modernize their applications portfolios so they can reduce legacy infrastructure operating costs. These applications run most effectively on open, standardized platforms including the cloud.

A great set of solutions for organizations looking to really get their older apps into a modern execution and device environment. Exciting times for the Dell team supporting these customers.

This very much reminds me of 14-15 years ago and a whole slew of projects where we were trying to drive similar modernization into applications. IBM Network Station was about to be launched; we had a useful first release of the CICS Transaction Gateway, there was a great start at integrating Java with COBOL-based applications, and some fledgling work on extending the COBOL language to support object-oriented principles. My poster session at the IBM Academy of Technology was on legacy modernization. In those days it was obvious that customers needed tools to help them get from where they'd been to where they would be going.

Enough never really got there; the financial case often wasn't compelling enough. However, given the performance, scalability and reliability of today's x86/x64 systems, the demand for change has moved past compelling: it's essential.

VM Master Class

As is the way, the older you get the more entangled your life becomes. My ex-wife, Wendy Cathcart, née Foster, died of cancer recently. Such a waste; a fantastic, vibrant woman and a great mother to our children. After the funeral the kids were saying how they'd hardly got any video of her. I had on my shelf, unwatched for probably 10 years or more, a stack of VCR tapes. I'd meant to do something with them, but never got around to it.

I took the tapes to Expressions in Video here in Austin; they were ever so helpful and were able to go from UK PAL-format VCR tapes to DVD, to MPEG-4. Two of the tapes contained the summary videos from the 1992 and 1993 IBM VM Master Class conferences. And here's where the entanglement comes in. Wendy never got much involved in my work, though we went on many business trips together; one of the most memorable was driving from North London to Cannes in the South of France. I had a number of presentations to give, and the first one was after lunch on Monday, the first day. I went to do registration and other related stuff Monday morning. I came back to the room to get the car keys so I could collect my overhead transparencies and handout copies from the car. Unfortunately for me, Wendy had set off in the car with a number of the other wives to visit Nice, France, and my slides and handouts were in the trunk/boot. D'oh.

Unlike this week, when my Twitter stream has been tweet-bombed by #VMWorld, back in the 1980s there were almost no VM conferences. IBM had held a couple of internal conferences, and the SHARE user group in the USA had a very active virtual machine group, but there really wasn't anything in Europe except 1-day user group meetings. My UK VM User Group had been inspirational for me, and I wanted to give something back: to give other virtual machine systems programmers and administrators a chance to get together over an extended period, talk with each other, learn about the latest technologies and hear from some of the masters in the field.

And so it was that I worked through 1990 and 1991 with Paul Maceke to plan and deliver the first ever VM Master Class. We held it at an IBM education facility in La Hulpe, in a forest outside Brussels, Belgium. As I recall, we had people met at the airport and bused in on Sunday, and the conference ran through Friday lunchtime, when we bused them back to the airport. Everything was done on site: meals, classes and hotel rooms. Back in the 1970s and 1980s it was practically required for computer systems to be represented by something iconic; for VM it was the bear. You can read why, and almost everything else about the history of VM, on Melinda Varian's web page; heck, you can even get a Kindle-format version of the history.

So, when it came to the Master Class we needed a bear-related logo. That's where Wendy came in. She drew the "graduate bear", which Paul not only got included in the folders but also made into metal pins; what a star. Come the 1993 VM Master Class, Wendy did the artwork for the VM Bear and its Client/Server cousin sitting on top of the world, and as I remember, this time Paul actually got real soft-toy bears. Thanks for all the great memories, Wendy. The videos on YouTube also remind me of many great people from the community; how many can you name? Please feel free to add comments here to avoid the YouTube comment minefield.

I’ll start with Dick Newson and John Hartman; they couldn’t be two more different people, but both were totally innovative, great software developers and designers.

Hot News: Paint dries

I’m guessing I’m not so different from most people: the first time someone explains Groundhog Day, you laugh, but don’t believe what you are seeing. It’s kinda "nah, you’re kidding, right?" But some take it seriously.

The same goes for the pronouncements IBM makes regularly about server migrations to the Power Systems platforms and mainframes. You take a step back and say: seriously, you are kidding, you take this seriously?

And that was my reaction when I saw this week’s piece from Timothy Prickett Morgan at The Register, aka Vulture Central, under the tagline “IBM gloats over HP, Oracle takeouts”. Really, seriously, you are kidding, right? Prickett Morgan covers IBM’s most recent claims that it migrated “286 customers, 182 were using Oracle (formerly Sun Microsystems) servers and 95 were using machines from Hewlett-Packard” from Unix to IBM’s AIX.

What surprises me is not that IBM made the claims (hey, paint dries), but that Prickett Morgan felt it worth writing up in The Register, of all places, with its tagline “Biting the hand that feeds IT”. Really, seriously?

AIX and Power Systems are great; it’s just not newsworthy at these minuscule rates compared to the inexorable rise of the x86 architecture in both private and cloud data centers. It really won’t be long before IBM can no longer afford to design and manufacture those systems, and there’s the clue to the migrations.

You stuck your neck out and went with Sun, now Oracle, or HP Unix systems; it was a battle, but either you genuinely believed you were right, or you were just hoodwinked or cajoled into it for one reason or another. So, now they are both in terminal decline, what’s a data center manager to do? Yep, the easiest thing is to claim you were right about the platform, and that by doing so you were part of a movement that forced IBM to lower its prices, and that now the right thing to do is migrate to IBM as they have the best Unix solution. Phew, that’s alright, no one noticed, and everyone goes on collecting their paychecks.

Prickett Morgan ends by wondering “why Oracle, HP, and Fujitsu don’t hit back every time IBM opens its mouth with takeout figures of their own to show they are getting traction against Big Blue with their iron.” Because, frankly, no one cares except IBM. Everyone else is too busy building resilient, innovative, and cost-effective solutions based on x86 Linux, either in their own data centers or in the “cloud”.

Deviation: The new old

104 modules in a Doepfer A-100PMD12 double case sitting on top of the A-100PMB case

Deadmau5 analog modular setup

IBM 360/40 at Attwood Statistics

Anyone who knows me knows that I’ve retained a high level of interest in dance music. I guess it stems from growing up in and around London in the early 1970s and the emergence of funk, especially jazz funk, through some of the new music put together by people like Johnny Hammond (Los Conquistadors Chocolate) and Idris Muhammad (Could Heaven Ever Be Like This), which remain to this day two of my all-time favorite tracks, along with many from Quincy Jones.

Later, my interest was retained by the further exploitation of electronics as disco became the plat du jour, and although I, like most others, became disenchanted once it became metronomic and formulaic, I’m convinced that the style, type and beat of the music you like and listen to create pathways in your brain that activate feelings.

And so it is that, with time and energy on my hands over the past few years, I’ve re-engaged with dance music. Mostly because I like it; it activates those pathways in my mind that release feel-good endorphins, and I enjoy the freedom of the dance.

I’ve been to some great live performances, Tiesto and Gareth Emery especially, down in San Antonio and Houston, and anyone who thinks these guys are just DJs, playing other people’s music through a computer or off CDs, is missing the point.

However, one electronic music producer more than any other has really piqued my interest: Deadmau5, aka Joel Zimmerman from Toronto. I first saw Deadmau5 during South by Southwest (SXSW) in 2008, when Joel played at the now-defunct Sky Lounge on Congress Ave. The club was small enough that you could actually stand at the side of the stage and see what he was doing; it was a fascinating insight. [In this video on YouTube, one of many from that night, not only can you see Joel "producing" music, but if you stop the video on the right frame at 16 seconds, you can see me in the audience! Who knew...]

I saw him again in March 2009 at Bar Rio in Houston. This time I had a clear line of sight to what he was doing from the VIP balcony. It was fascinating; I actually saw and heard him make mistakes, not significant mistakes, but ones that proved he was actually making live music. [You can read my review from the time here, including links to YouTube videos.] It turns out that something he was using during that Houston concert was either a prototype or something similar to a monome.

Joel regularly posts and runs live video streams from his home studio, and recently posted this video of his latest analog modular system. It and some of the other videos are a great insight into how dance music producers work. Watching this this morning, I was struck by the similarities to the IBM 360/40 mainframe, which was the first computer I worked on. I can especially remember the first time an IBM hardware engineer, who might have been Paul Badger or Geoff Chapman, showed me how the system worked: how to put it into instruction step, how to display the value of registers, and so on. I felt the same way watching the Deadmau5 video: I’ve got to get me some playtime with one of these.

And yes, the guy in the picture above is me and the 360/40. It was taken in probably the spring of 1976, I’d guess, at Attwood Statistics in Berkhamsted, Herts., UK.

The power and capacity of the IBM 360/40 are easily exceeded by handheld devices such as the Dell Streak. Meanwhile, it’s clear that some music producers are headed in the opposite direction, moving from digital software to analog hardware. The new old.

70% of something is better than..

70% of nothing at all. [With apologies to Double Exposure]

As I’ve said before, I’m an avid reader of Robin Bloor’s Have Mac Will Blog blog. I also follow him on Twitter, where he is @robinbloor. Sadly his blog doesn’t accept trackbacks, but I’ll leave a short comment so he gets to see this.

His latest blog entry, CA: Dancing with dinosaurs, comes across as a bit of a puff piece in support of Computer Associates.

On the CA involvement with mainframes, Bloor seems to have overlooked the fact that CA has John Swainson as CEO and Don Ferguson as Chief Architect. John was previously an IBM VP, Don an IBM Fellow, and both were variously in charge of significant IBM Software Group projects and products.

Personally, I’d like to see someone from IBM find and quote a source for that 70% data number. It’s been used for years and years with little or no foundation. Jim Porell quoted this number in some of his excellent and more recent System z strategy presentations; it dates from, I think, 1995.

Secondly, I’d guess it depends what you count as business-critical data these days. If Google collapsed or had its data centers in Silicon Valley interrupted, with the loss of Google Docs, YouTube, Google search and Maps, and similarly Microsoft and/or Yahoo went offline, I’d suspect the whole notion that 70% of business-critical data resides on mainframes would be laughable. Yes, a large percentage of purely text-based transactional data is on mainframes, and yes, the value of those transactions exceeds any other platform, but that is far from 70% of anything much these days. Increasingly, startups, SMEs and Web 2.0 businesses don’t use mainframes even for their text-based transactional data.

Finally, on the Bloor/CA assertion that installing mainframe software is arcane: that may be, but here I’m still in full agreement with the mainframe folks, especially if you are talking about real mainframe software as IBM would have it, installed by SMP/E. One of my few claims to fame was reverse engineering key parts of the IBM mainframe VM service process, nearly 20 years ago now. SMP/E was then, and still is, years ahead of anything in the Windows and UNIX space for pre-req, co-req and if-req processing, and for the ability to build and maintain multiple non-trivial systems from a single data store using binary-only program objects. CA is not the first to spot the need to provide an interface other than ISPF and JCL to build these job streams.
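To give a feel for what that pre-req/co-req processing means in practice, here is a deliberately tiny sketch. The fix identifiers and data structures are made up for illustration and bear no resemblance to real SMP/E internals; the point is only the idea that a fix cannot be applied until its pre-reqs are installed, while co-reqs merely have to arrive in the same apply pass.

```python
# Illustrative sketch only: a toy model of the kind of pre-req/co-req
# dependency checking SMP/E performs when applying fixes (PTFs).
# Fix names and the catalog structure are hypothetical, not real SMP/E data.

def check_applicable(fix, installed, catalog):
    """Return a list of problems preventing `fix` from being applied."""
    problems = []
    meta = catalog[fix]
    # Pre-reqs must already be installed before this fix can go on.
    for pre in meta.get("prereqs", []):
        if pre not in installed:
            problems.append(f"{fix} requires pre-req {pre}")
    # Co-reqs only need to be applied in the same pass, so being
    # present in the catalog of fixes-to-apply is good enough.
    for co in meta.get("coreqs", []):
        if co not in installed and co not in catalog:
            problems.append(f"{fix} requires co-req {co}")
    return problems

catalog = {
    "UZ00001": {"prereqs": [], "coreqs": []},
    "UZ00002": {"prereqs": ["UZ00001"], "coreqs": ["UZ00003"]},
    "UZ00003": {"prereqs": [], "coreqs": []},
}

installed = {"UZ00001"}
print(check_applicable("UZ00002", installed, catalog))  # []  (all reqs satisfied)
```

Real SMP/E resolves whole chains of these relationships (plus if-reqs across products) in one pass, which is exactly what nothing in the Windows/UNIX world did at the time.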

But really, continuing to label mainframes as dinosaurs is so 1990s; it’s like describing Lance Armstrong as a push-bike rider.

Simon Perry, Principal Associate Analyst – Sustainability, Quocirca, has written a similar piece with a little more detail entitled Mainframe management gets its swagger.

IBM Big Box quandary

In another follow-up from EMC World, the last session I went to was “EMC System z, z/OS, z/Linux and z/VM”. I thought it might be useful to hear what people were doing in the mainframe space, although it’s largely unrelated to my current job. It was almost 10 years to the day since I was at IBM writing the z/Linux strategy, hearing about early successes, etc. And strangely, current EMC CTO Jeff Nick and I were engaged in vigorous debate about implementation details of z/Linux the night before we went and told SAP about IBM’s plans.

The EMC World session demonstrated that as much as things change, they stay the same. It also reminded me how borked the IT industry is: we mostly force customers to choose by pricing rather than function. 10-12 years ago, z/Linux on the mainframe was all about giving customers new function, a new way to exploit the technology they’d already invested in. It was of course also intended to further establish the mainframe’s role as a server consolidation platform through virtualization and high levels of utilization. (1)

What I heard were two conflicting and confusing stories, or at least they should be for IBM. The first was a customer who was moving all his Oracle workloads from a large IBM Power Systems server to z/Linux on the mainframe. Why? Because the licensing on the IBM Power server was too expensive. Using z/Linux and the Integrated Facility for Linux (IFL) allows organizations to do a cost-avoidance exercise. Processor capacity on the IFL doesn’t count towards the total installed general-processor capacity, and hence doesn’t bump up the overall software licensing costs for all the other users. It’s a complex discussion, and that wasn’t the purpose of this post, so I’ll leave it at that.
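The cost-avoidance mechanics are easy to sketch with a back-of-envelope calculation. All the numbers below are hypothetical, not IBM pricing; the point is simply that charges keyed to general-purpose capacity don’t move when the same capacity is added as IFL capacity instead.

```python
# Back-of-envelope illustration (all figures hypothetical, not IBM pricing).
# Much mainframe software is licensed on total installed general-purpose
# capacity (measured in units such as MSUs); IFL capacity is excluded from
# that total, so growing on IFLs avoids raising those charges.

def monthly_license_cost(general_msus, rate_per_msu):
    # Software charges scale with general-purpose capacity only.
    return general_msus * rate_per_msu

rate = 100  # hypothetical $ per MSU per month

base = monthly_license_cost(400, rate)               # today: 400 general MSUs
grow_general = monthly_license_cost(400 + 50, rate)  # add 50 MSUs of general capacity
grow_ifl = monthly_license_cost(400, rate)           # add the same 50 MSUs as IFLs

print(base, grow_general, grow_ifl)  # 40000 45000 40000
```

Same extra 50 MSUs of work either way; the IFL route just keeps the licensing base flat for everyone else on the box, which is exactly the “cost avoidance” the customer was after.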

This might be considered a win for IBM, but actually it was a loss. It’s also a loss for the customer. IBM lost because the processing was being moved from its growth platform, IBM Power Systems, to the legacy System z. It’s good for z since it consolidates its hold in that organization, or probably does. Once the customer has done the migration and conversion, it will be interesting to see how they feel the performance compares. IBM often refers to the IFL and its close relatives, the zIIP and zAAP, as specialty engines, giving the impression that they perform faster than the normal System z processors. It’s largely an urban myth, though, since these “specialty” engines really only deliver the same performance; they are just measured, monitored and priced differently.

The customer lost because they’ve spent time and effort to move from one architecture to another, really only to avoid software and server pricing issues. While the System z folks will argue the benefits of their platform, and I’m not about to “dis” them, the IBM Power server can actually deliver a good enough implementation to make the difference largely irrelevant.

The second conflict I heard about came from EMC themselves. The second main topic of the session was a discussion about moving some of the EMC Symmetrix products off the mainframe, as customers have reported that they use too much mainframe capacity to run. The guys from EMC were thinking of moving the function of the products to commodity x86 processors and then linking those via high-speed networking into the mainframe. This would move the function out of band and save mainframe processor cycles, which in turn would avoid an upgrade, which in turn would avoid bumping up the software costs for all users.

I was surprised how quickly I interjected and started talking about WLM SRM enclaves and moving the EMC apps to run on z/Linux, etc. This surely makes much more sense.

I was left, though, with a definite impression that there are still hard times ahead for IBM in large non-x86 virtualized servers. Not that they are not great pieces of engineering; they are. But getting to grips with software pricing once and for all should really be IBM’s prime focus, not a secondary or tertiary one. We were working towards pay-per-use once before; time to revisit, methinks.

(1) Spot the irony of this statement given the preceding “Nano, Nano” post!

Whither IBM, Sun and Sparc?

So the twitterati and blog space are alight with discussion that IBM is to buy Sun for $6.25 billion. The only way we’ll know if there is any truth to it is if it goes ahead; these rumors are never denied.

Everyone is of course focused on the big questions, which are mostly around hardware synergies (servers, chips, storage) and Java. Since I don’t work at IBM, I have no idea what’s going on or if there is any truth to this. But there are more interesting technical discussions to be had than those offered by people who merely think they have an informed opinion.

IBM bought Transitive in 2008; Transitive has some innovative emulation software called QuickTransit. It allows binaries created and compiled on one platform to be run on another hardware platform without change or recompilation. There were some deficiencies, and you can read more in my terse summary blog post from the time of the acquisition announcement. Prior to the acquisition, QuickTransit supported a number of platforms, including SPARC and PowerMac, and had been licensed by a number of companies, including IBM.
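The general technique behind this kind of product is dynamic binary translation: translate a block of guest instructions once, cache the translation, and run the cached host code thereafter. The toy below is purely illustrative, nothing like QuickTransit’s actual design; guest “instructions” are just tuples, and translated blocks are cached Python closures.

```python
# Toy sketch of dynamic binary translation (the general technique behind
# products like QuickTransit; in no way its actual implementation).
# Guest instructions are (op, arg) tuples; a straight-line block is
# translated once into a host callable and cached for reuse.

translation_cache = {}

def translate_block(pc, program):
    """Translate a run of guest instructions (up to 'halt') into a host callable."""
    ops = []
    while pc < len(program):
        op, arg = program[pc]
        ops.append((op, arg))
        pc += 1
        if op == "halt":
            break

    def run(state):
        # The "host code": executes the pre-decoded ops against guest state.
        for op, arg in ops:
            if op == "add":
                state["acc"] += arg
            elif op == "mul":
                state["acc"] *= arg
        return state

    return run

def execute(program):
    state = {"acc": 0}
    # Translate the block starting at pc=0 only on first use.
    block = translation_cache.setdefault(0, translate_block(0, program))
    return block(state)

print(execute([("add", 5), ("mul", 3), ("halt", None)]))  # {'acc': 15}
```

The cache is what makes the approach pay off: the expensive decode/translate work happens once per block, and hot loops then run at the speed of the translated host code, which is also why pushing pieces of this into firmware (as speculated below) is attractive.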

I assume IBM is in the midst of its classic “blue rinse” process, which explains the almost complete elimination of the Transitive web site (1), and it’s nothing more sinister than getting ready to re-launch under the IBM branding umbrella of PowerVM or some such.

Now, one could speculate that by acquiring Sun, IBM would achieve three things that would enhance its PowerVM strategy and build on the Transitive acquisition. First, it could reduce the platforms supported by QuickTransit and, over time, decline to renegotiate its licensing agreements with third parties. This would give IBM “leverage” in offering binary emulation for the architectures previously supported on, say, only the Power and mainframe processor ranges.

Also, by further enhancing QuickTransit and driving it into the IBM microcode/firmware layer, thus making it more reliable and providing higher performance by reducing duplicate instruction handling, IBM could effectively eliminate future SPARC-based hardware in favor of the UNIX-based Power hardware and PowerVM virtualization. This would also have the effect of taking this level of emulation mainstream and negating much of the transient (pun intended) nature typically associated with this sort of technology.

Finally, by acquiring Sun, IBM would eliminate any IP barriers that might arise from the nature of the implementation of the SPARC instruction set.

That’s not to say there are no problems to overcome. First, as it currently stands, the emulation tends to map calls from one OS into another rather than operating at a pure architecture level. Pushing some of the emulation down into the firmware/microcode layer wouldn’t help emulate a CALL SOLARIS API with X, Y, even if it would emulate the machine architecture instructions executed to do this. So, is IBM really committed to becoming a first-class Solaris provider? I don’t see any proof of this since the earlier announcement; Solaris on Power is pretty much non-existent. The alternative is that IBM uses Transitive technology to map these calls into AIX, which is much more likely.
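That OS-call mapping problem can be sketched in a few lines. Every call name below is invented for illustration; the point is that each guest OS call either has a direct host equivalent, must be emulated in software, or has no mapping at all, and it is this table-building work (Solaris-to-AIX, in the speculation above) that firmware-level instruction emulation can’t do for you.

```python
# Hypothetical sketch of the OS-call mapping layer described above:
# emulating guest machine instructions isn't enough; calls into the guest
# OS (e.g. Solaris) must be translated into equivalent host (e.g. AIX)
# calls. All call names here are made up for illustration.

solaris_to_aix = {
    "solaris_open": "aix_open",    # direct host equivalent exists
    "solaris_ioctl_dtrace": None,  # no host equivalent: emulate in software
}

def map_call(guest_call):
    """Resolve a guest OS call to a host call or a software-emulation path."""
    if guest_call not in solaris_to_aix:
        raise NotImplementedError(f"unmapped guest call: {guest_call}")
    host_call = solaris_to_aix[guest_call]
    return host_call if host_call is not None else "software_emulation"

print(map_call("solaris_open"))          # aix_open
print(map_call("solaris_ioctl_dtrace"))  # software_emulation
```

The unmapped-call case is where such layers break down in practice, which is why the emulation “tends to map calls from one OS into another” only for the subset of the API surface anyone has bothered to map.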

In economic downturns, big, cash-rich companies are kings. Looking back over the last 150 years, there are plenty of examples of the big buying competitors and emerging from the downturn even more powerful. Ultimately I believe that the proprietary chip business is dead; it’s just a question of how long it takes to die, and whether regulators feel that allowing mergers and acquisitions in this space is good or bad for the economy and the economic recovery.

So, there’s a thought. As I said, I don’t work at IBM.

(1) It is mildly amusing to see that one of the few pages left extols the virtues of the Transitive technology, by one Mendel Rosenblum, formerly Chief Scientist and co-founder of VMware.

About & Contact

I'm Mark Cathcart, Senior Distinguished Engineer in Dell's Software Group. I was formerly Director of Systems Engineering in the Enterprise Solutions Group at Dell, and an IBM Distinguished Engineer and member of the IBM Academy of Technology. I'm an information technology optimist.
