Archive for the 'mainframe' Category

I left IBM in 2008, last week I said goodbye

I decided to post this over on my main blog as it was more to do with the people and community than with IBM. It contains some great references and links to content.

IBM 3090 Training

Between 2001 and 2004, I had an office in the home of the mainframes, IBM Poughkeepsie, in Building 705. As a Brit it wasn't my natural home; and since I wasn't a developer or a designer, but a software architect focused on software and application architectures, it never quite felt like home.

IBM Library number ZZ25-6897.

One day, on my way to lunch at the in-house cafeteria, I walked by a room whose door was always closed. This time the door was open and there was a buzz of people coming from it. A sign outside said "Library closing, take anything you can use!"

I have some great books, a few of which I plan to scan, donating the output to either the Computer History Museum or the Internet Archive.

Among the more fun things I grabbed were a few IBM training laserdiscs. I had no idea what I'd do with them; I had never owned a laserdisc player. I just thought they'd look good sitting on my bookshelf, especially since they are the same physical size as vinyl albums.

Now, 16 years on, I've spent the last four years digitising my entire vinyl collection, in total some 2,700 albums. One of my main focus areas has been the music of jazz producer Creed Taylor. One of the side effects is that I've created a new website, ctproduced.com. In record-collecting circles I'm apparently a completionist; I try to buy everything.

And so it was I started acquiring laserdiscs by Creed Taylor. It took a while, and I’m still missing Blues At Bradleys by Charles Fambrough. While I’ve not got around to writing about them in any detail, you can find them at the bottom of the entry here.

What I had left were the IBM laserdiscs. On Monday I popped the first laserdisc in; it was for the IBM 3090 Processor Complex. It was a fascinating throwback for me. I'd worked with IBM Kingston on a number of firmware and software availability issues, both as a customer and later as an IBM Senior Software Engineer.

I hope you find the video fascinating. The IBM 3090 Processor was, to the best of my knowledge, the last of the real "mainframes". Sure, we still have IBM processor architecture machines that are compatible with the 3090 and earlier architectures. However, the new systems, more powerful and more efficient, are typically single-frame systems. A Parallel Sysplex can support multiple mainframes, but it doesn't require them. Enjoy!

The Zowe Open Source Project

This was announced today at SHARE St Louis. A great new effort and opportunity to integrate open source technologies and applications into the IBM z/OS operating system. Zowe, as the article says, is

a framework of software services that offers industry standard REST APIs, API catalog, extensible command line interface and web-based UI framework

They've also put together the zowe.org community for architects, developers and designers to share best practices. It's not clear what the legal relationship is between the Open Mainframe Project and Zowe, but Zowe is listed as a project, so that's great news in terms of strategy and direction. As of writing, the Open Mainframe Project's Zowe web page has the best detail on the project.
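For a sense of what "industry standard REST APIs" means in practice, here is a minimal Java sketch of calling a service through the Zowe API Mediation Layer gateway. The host, port and path below are placeholders rather than documented Zowe endpoints; substitute whatever route your installation's API catalog shows.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class ZoweRestSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder gateway address and service route; check your
        // site's Zowe API catalog for the real ones.
        String url = "https://gateway.example.com:7554/api/v1/jobs";

        // Basic auth for illustration only; production use would follow
        // whatever authentication the gateway is configured for.
        String auth = Base64.getEncoder()
                .encodeToString("userid:password".getBytes());

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode());
        System.out.println(response.body()); // JSON payload from the service
    }
}
```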

Zowe appears to be a collaboration between IBM and a number of companies, including Rocket Software. Rocket has a broad portfolio of software and systems that integrate with IBM Systems; they also have my friend, former colleague and sparring partner at IBM, Jim Porell, on staff.

Annual IBM Shareholder Meeting

Picture: (C) Nick Litten

Remembering the dawn of the open source movement

and this isn’t it.


Me re-booting an IBM System 360/40 in 1975

When I first started in IT, or data processing as it was called back then, in 1974, open source was the only thing. People were already depending on it, and defending their right to access source code.

I'm delighted with the number and breadth of formal organizations that have grown up around "open source". They are a great thing. Strength comes in numbers, as does recognition and bargaining power. Congratulations to the Open Source Initiative and everything they've achieved in their 20 years.

I understand the difference between closed source, (restrictive) licensed source code, free source, open source etc. The point here isn’t to argue one over the other, but to merely illustrate the lineage that has led to where we are today.

Perhaps one of the more significant steps in the modern open source movement was the creation in 2000 of the Open Source Development Labs (OSDL), which in 2007 merged with the Free Standards Group (FSG) to become the Linux Foundation. But of course source code didn't start there.

Some people feel that the source code fissure was opened when Linus Torvalds released his Linux operating system in 1991 as open source; while Linus and many others think the work by Richard Stallman on the GNU toolset and GNU License, started in 1983, was the first step. Stallman's determined advocacy for source code rights and source access certainly was a big contributor to where open source is today.

But it started way before Stallman. Open source can not only trace its roots to two of the industry's behemoths, IBM and AT&T, but the original advocacy came from them too. Back in the early 1960's, open source was the only thing. There wasn't a software industry per se until the US Government invoked its antitrust law against IBM and AT&T, eventually forcing them, among other things, to unbundle their software and make it separately available, along with many other related conditions.

’69 is the beginning, not the end

The U.S. vs. I.B.M. antitrust case started in 1969, with the trial commencing in 1975(1). The case was specifically about IBM blocking competitive hardware makers from getting access, and customers from being able to run competitive systems, primarily S/360 architecture, using IBM software.

In the years leading up to 1969, customers had become increasingly frustrated with, and angry at, IBM's policy of tying its software to its hardware. Since all the software at that time was available as source code, what that really meant was that a business HAD to have one IBM computer to get the source code; it could then purchase a plug-compatible manufacturer's (PCM) computer(2), compile the source code with the manufacturer's assembler and tools, and run the binaries on the PCM systems.

IBM made this increasingly harder as the PCM systems became more competitive. Often large, previously IBM-only users, who would have 2, 4, sometimes even 6 IBM S/360 systems costing tens of millions of dollars, would buy a single PCM computer. The IBM on-site systems engineers (SEs) could see the struggles of the customer, and along with the customers themselves, started to push back against the policy. The SE job was made harder the more their hands were tied, and the more restrictions were put on the source code.

To SHARE or not to?

For the customers in the US, one of their major user groups, SHARE, had vast experience in source code distribution; its user-created content and tools tapes were legend. What most never knew is that back in 1959, with General Motors, SHARE had its own IBM mainframe (709) operating system, the SHARE Operating System (SOS).

At that time there were formal support offerings of on-site SEs who would work on problems and defects in SOS. But by 1962, IBM had introduced its own 7090 operating system, which was incompatible with SOS, and at the same time IBM withdrew support by its SEs and Program Support Representatives (PSRs) for work on SOS.

1965 is, to the best of my knowledge, when the open source code movement, as we know it today, started

That, to my knowledge, is where the open source code movement, as we know it today, started. Stallman's experience with a printer driver mirrors exactly what had happened some 20 years before: the removal of source code, and the inability to build working modifications to support a business initiative, using hardware and software ostensibly already owned by the customer.

IBM made it increasingly harder to get the source code, until the antitrust case. By that time, many of IBM's customers had created, and depended on, small and large modifications to IBM source code.

Antitrust outcomes

Computerworld - IBM OCO

By the mid-70's, as one of the results of years of litigation and consent decrees in the United States, IBM had been required to unbundle its software and make it available separately. Initially it was chargeable to customers who wanted to run it on PCM, non-IBM systems, but over time, as new releases and new function appeared, even customers with IBM systems saw a charge appear, especially as Field Developed Programs moved to full Program Products and so on. In a bid to stop competing products and user group offerings being developed from their products, IBM products were increasingly supplied object-code-only (OCO). This became a formal policy in 1983.

I've kept the press cutting from ComputerWorld (March 1985) shown above since my days at Chemical Bank in New York. It pretty much sums up what was going on at the time: OCO, and users and user groups fighting back against IBM.

What this also did was give life to the formal software market; companies were now used to paying for their software, and we've never looked back. In the time since those days, software with source code available has continued to flourish. With each new twist and evolution of technology, open source thrives and finds its own place, sometimes in a dominant position, sometimes subservient, in the background.

The times in the late 1950’s and 60’s were the dawn of open source. If users, programmers, researchers and scientists had not fought for their rights then, it is hard to know where the software industry would be now.

Footnotes

(1) The 1969 antitrust case was eventually abandoned in 1982.

(2) The PCM industry had itself come about as a result of a 1956 antitrust case and the consent decree that followed.

APIs and Mainframes


I like to try to read as many American Banker tech articles as I can. Since I don't work anymore, I chose not to take out a subscription, so some I can read, while others are behind their subscription paywall.

This one caught my eye, as it's exactly what we did circa 1998/99 at National Westminster Bank (NatWest) in the UK. The project was part of the rollout of a browser-based Intranet banking application, as a proof of concept, to be followed by a full-blown Internet banking application. Previously both Microsoft and Sun had tackled the project and failed. Microsoft had scalability and reliability problems, and from memory, Sun just pushed too hard to move key components of the system to its servers, which in effect killed their attempt.

The key to any system design and architecture is being clear about what you are trying to achieve, and what the business needs to do. Yes, you need a forward looking API definition, one that can accept new business opportunities, and one that can grow with the business and the market. This is where old mainframe applications often failed.

Back in the 1960's, applications were written to meet specific and stringent tasks; performance was key. Subsecond response times were almost always the norm, as there would be hundreds or thousands of staff dependent on them for their jobs. The fact that many of those applications have survived to this day, most still on the same mainframe platform, is a tribute to their original design.

When looking at exploiting them from the web, if you let "imagineers" run away with what they "might" want, you'll fail. You have to start by exposing the transactions and database as a set of core services based on the first application that will use them. Define your API structure to allow for growth and further exploitation. That's what we successfully did for NatWest. The project rolled out on the internal IP network, and a year later, to the public via the Internet.

Of course we didn't just expose the existing transactions, and yes, firewall, dispatching and other "normal" services that are part of an Internet service were provided off platform. However, the core database and transaction monitor were behind a mainframe-based webserver, which was "logically" firewalled from the production systems via an MPI that defined the API and also routed requests.
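I no longer have the NatWest code, so the following is only a toy Java sketch of the routing idea described above: the API layer knows which operations exist, maps each to an internal transaction name the caller never sees, and rejects anything not explicitly exposed to the Internet-facing deployment. All operation and transaction names are invented for illustration.

```java
import java.util.Map;
import java.util.Set;

// Toy sketch of an API routing layer in front of a transaction monitor.
public class ApiRouter {
    // The published API: operation name -> internal transaction id.
    private static final Map<String, String> API = Map.of(
            "getBalance",       "TXB1",
            "listTransactions", "TXL1",
            "transferFunds",    "TXF1");

    // Operations the Internet-facing deployment is allowed to call.
    private static final Set<String> INTERNET_ALLOWED =
            Set.of("getBalance", "listTransactions");

    public String route(String operation, boolean fromInternet) {
        String txn = API.get(operation);
        if (txn == null) {
            throw new IllegalArgumentException("Unknown operation: " + operation);
        }
        if (fromInternet && !INTERNET_ALLOWED.contains(operation)) {
            throw new SecurityException("Not exposed externally: " + operation);
        }
        return txn; // hand the request off to the transaction monitor here
    }

    public static void main(String[] args) {
        ApiRouter router = new ApiRouter();
        System.out.println(router.route("getBalance", true)); // TXB1
        try {
            router.route("transferFunds", true);
        } catch (SecurityException e) {
            System.out.println("Blocked: " + e.getMessage());
        }
    }
}
```

The point of the indirection is the same as it was then: the public API can grow independently of the transaction names and systems behind it.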

So I read through the article to try to understand what Shamir Karkal, the source for Barbas article, felt the issue was. Starting at the section "Will the legacy systems issue affect the industry's ability to adopt an open API structure?", which began with a history lesson, I just didn't find it.

The article wanders between a discussion of the apparent lack of a "service bus" style implementation, and the ability of Amazon to sell AWS and rapidly change the API to meet the needs of its users.

The only real technology discussion in the article that I found that had any merit, was where they talked about screen scraping. I guess I can’t argue with that, but surely we must be beyond that now? Do banks really still have applications that are bound by their greenscreen/3270/UI? That seems so 1996.

A much more interesting report is this one on more general Open Bank APIs. Especially since it takes the UK as a model and reflects on how poor US Banking is by comparison. I’ll be posting a summary on my ongoing frustrations with the ACH over on my personal blog sometime in the next few days. The key technology point here is that there is no way to have a realtime bank API, open, mainframe or otherwise, if the ACH system won’t process it. That’s America’s real problem.

Local StorageTek Legacy

May 2016

When I first moved to Colorado, I was fascinated and amused that, sometimes twice per day on the school run, I'd pass Tape Dr and Disk (or was it Disc?) Dr. The roads led to nothing, an empty site full of scrub grass and weeds. I'd always assumed it was a failed tax-break development scheme. This seemed particularly likely as there is a large multi-property, multi-family housing development across the street.

I was surprised recently on a Wednesday morning ride when one of the guys I was riding with declared he used to work at StorageTek there. I was fascinated. I remembered IBM had a plant here that developed laser printers, but I knew that location was sold to Lexmark.

Rather than leading to an undeveloped location, the roads had at one time served a thriving site. Some poking around on the Denver Business Journal website revealed the story, and Google Maps had some pictures of the site in better days; from I36 you can even see some of the buildings. The picture below is a 2008 aerial picture of the site. Disk Dr is the road onto the site in the upper right, and Tape Dr on the lower right.

StorageTek

From the Denver Business Journal

Aside from questions about the future of the site, the only real question is when the site transferred between the cities of Louisville and Broomfield; see the pictures above.

Mainframe Assembler Language 2.0

Those that still follow my blog from my days working in the IBM mainframe arena might be interested in the following.

One of the stalwarts of software at IBM, and self-described grand poobah of High Level Assembler, John R. Ehrman, has a 1,300-page 2.0 version of his book "Assembler Language Programming for IBM System z™ Servers", and it's available in PDF form here. There is a wealth of other assembler resources that John has contributed here on ibm.com.

(My) Influential Women in Tech

Taking some time out of work in the technical, software, computer industry has been really helpful in giving my brain time to sift through the required, the necessary, the nice, and the pointless things that I've been involved in over 41 years in technology.

Given that today is International Women's Day 2016, numerous tweets have flown by celebrating women and, given the people I follow, many women in technology. I thought I'd take a minute to note some of the great women in tech I've had the opportunity to work with.

I was fortunate in that I spent much of my career at IBM. There is no doubt that IBM was a progressive employer on all fronts: women, minorities, the physically challenged; and that continues today with their unrelenting endorsement of the LGBT community. I never personally met or worked with current IBM CEO Ginni Rometty; she, like many that I did have the opportunity to work with, started out in Systems Engineering and moved into management. Those that I worked with included Barbara McDuffie, Leslie Wilkes, Linda Sanford and many others.

Among those in management at IBM who were most influential was Anona Amis at IBM UK. Anona was my manager in 1989-1990, at a time when I was frustrated and lacking direction after joining IBM two years earlier with high hopes of doing important things. Anona, in the period of a year, taught me both how to value my contributions and how to make more valuable contributions. She was one of what I grew to learn was the backbone of IBM: professional managers.

My four women of tech may, at some time or other, have been managers. That, though, wasn't why I was inspired by them.

Susan Malika: I met Sue initially through the CICS product group, when we were first looking at ways to interface a web server to the CICS Transaction Monitor. Sue and the team already had a prototype connector implemented as a CGI. Over the coming years, I was influenced by Sue in a number of fields, especially data interchange and her work on XML. Sue is still active in tech.

Peggy Zagelow: I'd always been pretty dismissive of databases, apart from a brief period with SQL/DS; I'd always managed fine without one. Early on in the days of evangelizing Java, I was routed to the IBM Santa Teresa lab, on an ad hoc query from Peggy about using Java as a procedures language for DB2. Her enthusiasm and dogma about the structured, relational database, as well as her ability to code eloquently in Assembler, was an inspiration. We later wrote a paper together, still available online [here]. Peggy is also still active in the tech sector at IBM.

Donna Dillenberger: Sometime in 1999, Donna and the then President of the IBM Academy of Technology, Ian Brackenbury, came to the IBM Bedfont office to discuss some ideas I had on making the Java Virtual Machine viable on large-scale mainframe servers. Donna translated a group of unconnected ideas and concepts I sketched out on a whiteboard into the "Scalable JVM". The evolution of the JVM was a key stepping stone in the IBM evolution of Java. I'm pleased to see Donna was appointed an IBM Fellow in 2015. The paper on the JVM is here.(1)

Gerry Hackett: Finally, but most importantly, Geraldine aka Gerry Hackett. Gerry and I met when she was a first-line development manager in the IBM Virtual Machine development laboratory in Endicott, New York, sometime around 1985. While Gerry would normally fall in the category of management, she is most steadfastly still an amazing technologist. Some years later I had the [dubious] pleasure of "flipping slides" for her as Gerry presented IBM strategy. Aside: "Today's generation will never understand the tension between a speaker and a slide turner." Today, Gerry is a Vice President at Dell. She recruited me to work at Dell in 2009, and under her leadership the firmware and embedded management team have made steady progress and implemented some great ideas. Gerry has been a longtime advocate for women in technology, a career mentor, and a fantastic role model.

Importantly, what all these women demonstrated, by the "bucketload", was quiet technological confidence: the ability to see, deliver and celebrate great ideas and great people. They were quiet, unlike their male peers; not in achievement, but in approach. This is why we need more women in technology, not because they are women, but because technical companies, and their products, will not be as good without them.

(1). Edited to link to correct Dillenberger et al paper.

Back to the future

This week Dell announced three major acquisitions: Wyse, Clerity Solutions, and Make Technologies. These acquisitions, once complete, will offer an awesome combination to move apps and customers to the cloud.

  • Wyse provides application virtualization capability which, in essence, will allow PC-based applications to run as terminals in the cloud, accessed via thin clients and, increasingly, mobile devices like tablets.
  • Clerity delivers application modernization and re-hosting solutions and services. Clerity’s capabilities will enable Dell Services to help customers reduce the cost of transitioning business-critical applications and data from legacy computing systems and onto more modern architectures, including the cloud.
  • Make Technologies brings application modernization software and services that reduce the cost, risk and time required to re-engineer applications, helping companies modernize their applications portfolios so they can reduce legacy infrastructure operating costs. These applications run most effectively on open, standardized platforms including the cloud.

A great set of solutions for organizations looking to really get their older apps into a modern execution and device environment. Exciting times for the Dell team supporting these customers.

This very much reminds me of 14-15 years ago and a whole slew of projects where we were trying to drive similar modernization into applications. IBM Network Station was about to be launched; we had a useful first release of the CICS Transaction Gateway, there was a great start at integrating Java with COBOL-based applications, and some fledgling work on extending the COBOL language to support object-oriented principles. My poster session at the IBM Academy of Technology was on legacy modernization. In those days it was obvious that customers needed tools to help them get from where they'd been to where they would be going.

Not enough ever really got there; the financial case often wasn't enough. However, given the performance, scalability and reliability of today's x86/x64 systems, and the lack of progress since, the demand for change has moved past compelling: it's essential.

VM Master Class

As is the way, the older you get, the more entangled your life becomes. My ex-wife, Wendy Cathcart, née Foster, died of cancer recently; such a waste, a fantastic, vibrant woman and great mother to our children. After the funeral the kids were saying how they'd hardly got any video of her. I had on my shelf, unwatched for probably 10 years or more, a stack of VCR tapes. I'd meant to do something with them, but never got around to it.

I took the tapes to Expressions in Video here in Austin; they were ever so helpful and were able to go from UK PAL-format VCR tapes to DVD, to MPEG-4. Two of the tapes contained the summary videos from the 1992 and 1993 IBM VM Master Class conferences. And here's where the entanglement comes in. Wendy never much got involved in my work, although we went on many business trips together; one of the most memorable was driving from North London to Cannes in the South of France. I had a number of presentations to give, and the first one was after lunch on Monday, the first day. I went to do registration and other related stuff Monday morning. I came back to the room to get the car keys and go and collect my overhead transparencies and handout copies from the car. Unfortunately for me, Wendy had set off in the car with a number of the other wives to go and visit Nice, France, and my slides and handouts were in the trunk/boot. D'oh.

Unlike this week, when my Twitter stream has been tweet-bombed by #VMworld, back in the 1980's there were almost no VM conferences. IBM had held a couple of internal conferences, and the SHARE user group in the USA had a very active virtual machine group, but there really wasn't anything in Europe except one-day user group meetings. My UK VM user group had been inspirational for me, and I wanted to give something back and give other virtual machine systems programmers and administrators a chance to get together over an extended period, talk with each other, learn about the latest technologies, and hear from some of the masters in the field.

And so it was that I worked through 1990 and 1991 with Paul Maceke to plan and deliver the first-ever VM Master Class. We held it at an IBM education facility, La Hulpe, which was in a forest outside Brussels, Belgium. As I recall, we had people met at the airport and bused in on Sunday, and the conference ran through Friday lunchtime, when we bused them back to the airport. Everything was done on site: meals, classes and hotel rooms. Back in the 1970's and 1980's it was required for computer systems to be represented by something iconic; for VM it was the bear. You can read why, and almost everything else about the history of VM, here on Melinda Varian's web page; heck, you can even get a Kindle-format version of the history.

So, when it came to the Master Class we needed a bear-related logo. That's where Wendy came in. She drew the "graduate bear", which Paul got not only included in the folders, but also made into metal pins; what a star. Come the 1993 VM Master Class, Wendy did the artwork for the VM Bear and its Client/Server Cousin sitting on top of the world, and as I remember, this time Paul actually got real soft-toy bears. Thanks for all the great memories, Wendy. The videos on YouTube also remind me of many great people from the community; who can you name? Please feel free to add them in comments here to avoid the YouTube comment minefield.

I'll start with Dick Newson and John Hartman; they couldn't be two more different people, both totally innovative, great software developers and designers.

Hot News: Paint dries

I'm guessing I'm not so different from most people: the first time someone explains Groundhog Day, you laugh, but don't believe what you are seeing. It's kinda "nah, you're kidding, right?!", but some take it seriously.

The same goes for the pronouncements that IBM makes regularly about server migrations to the Power Systems platforms and mainframes; you take a step back and say, seriously, you are kidding, you are taking this seriously?

And that was my reaction when I saw this week's piece from Timothy Prickett Morgan at The Register, aka Vulture Central, under the tagline "IBM gloats over HP, Oracle takeouts". Really, seriously, you are kidding, right? Prickett Morgan covers IBM's most recent claims that they migrated "286 customers, 182 were using Oracle (formerly Sun Microsystems) servers and 95 were using machines from Hewlett-Packard" Unix to IBM's AIX.

What surprises me is not that IBM made the claims (hey, paint dries), but that Prickett Morgan felt it worth writing up (The Register, tagline "Biting the hand that feeds IT"). Really, seriously?

AIX and Power Systems are great; it's just not newsworthy at those minuscule rates compared to the inexorable rise of the x86 architecture in both private and cloud data centers. It really won't be long before IBM can no longer afford to design and manufacture those systems. And there's the clue to the migrations.

You stick your neck out and go with Sun, now Oracle, or HP Unix systems; it's a battle, but you either genuinely believe you were right, or you were just hoodwinked or cajoled into doing it for one reason or another. So, now they are both in terminal decline, what's a data center manager to do? Yep, the easiest thing is to claim you were right with the platform, and that by doing so you were part of a movement that forced IBM to lower its prices, and now the right thing to do is migrate to IBM as they have the best Unix solution. Phew, that's alright, no one noticed, and everyone goes on collecting their paychecks.

Prickett Morgan ends by wondering “why Oracle, HP, and Fujitsu don’t hit back every time IBM opens its mouth with takeout figures of their own to show they are getting traction against Big Blue with their iron.” – because frankly, no one cares except IBM. Everyone else is too busy building resilient, innovative, and cost effective solutions based on x86 Linux, either in their own data center, or in the “cloud”.

Deviation: The new old

104 modules in a Doepfer A-100PMD12 double case sitting on top of the A-100PMB case

Deadmau5 Analog Modular setup

IBM 360/40 at Attwood Statistics

Anyone who knows me knows that I've retained a high level of interest in dance music. I guess it stems from growing up in and around London in the early 70's and the emergence of funk, and especially jazz funk, through some of the new music put together by people like Johnny Hammond (Los Conquistadors Chocolate) and Idris Muhammad (Could Heaven Ever Be Like This), which remain to this day two of my all-time favorite tracks, along with many from Quincy Jones.

Later, my interest was retained by the further exploitation of electronics as disco became the plat du jour, and although I, like most others, became disenchanted once it became metronomic and formulaic, I'm convinced that the style, type and beat of music you like and listen to create pathways in your brain that activate feelings.

And so it was that, with time and energy on my hands over the past few years, I've re-engaged with dance music. Mostly because I like it; it activates those pathways in my mind that release feel-good endorphins, and I enjoy the freedom of the dance.

I've been to some great live performances, Tiesto and Gareth Emery especially, down in San Antonio and Houston, and anyone who thinks these guys are just DJs, playing other people's music through a computer or off CDs, is just missing the point.

However, one electronic music producer more than any other has really piqued my interest: Deadmau5, aka Joel Zimmerman from Toronto. I first saw Deadmau5 during South by Southwest (SXSW) in 2008, when Joel played at the now defunct Sky Lounge on Congress Ave. The club was small enough that you could actually stand at the side of the stage and see what he was doing; it was a fascinating insight. [In this video on YouTube, one of many from that night, not only can you see Joel "producing" music, but if you stop the video on the right frame at 16 seconds, you can see me in the audience! Who knew…]

I saw him again in March 2009 at Bar Rio in Houston. This time I had a clear line of sight to what he was doing from the VIP balcony. It was fascinating; I actually saw and heard him make mistakes, not significant mistakes, but ones that proved he was actually making live music. [You can read my review from the time here, including links to YouTube videos.] It turns out that something he was using during that Houston concert was either a prototype of, or something similar to, a monome.

Joel regularly posts and runs live video streams from his home studio, and recently posted this video of his latest analog modular system. It and some of the other videos are a great insight into how dance music producers work. Watching this this morning, I was struck by the similarities to the IBM 360/40 mainframe, which was the first computer I worked on. I can especially remember the first time I was shown by an IBM hardware engineer, who might have been Paul Badger or Geoff Chapman, how the system worked: how to put it into instruction step, how to display the value of registers, and so on. I felt the same way watching the Deadmau5 video; I've got to get me some playtime with one of these.

And yes, the guy in the picture above is me and the 360/40. It was taken in probably the spring of 1976 I’d guess, at Attwood Statistics in Berkhampstead, Herts. UK.

The power and capacity of the IBM 360/40 are easily exceeded by handheld devices such as the Dell Streak. Meanwhile, it's clear that some music producers are headed in the opposite direction, moving from digital software to analog hardware. The new old.

70% of something is better than…

70% of nothing at all. [With apologies to Double Exposure]

As I've said before, I'm an avid reader of Robin Bloor's Have Mac Will Blog blog. I also follow him on Twitter, where he is @robinbloor. Sadly his blog doesn't accept trackbacks, but I'll leave a short comment so he gets to see this.

His latest blog entry, CA: Dancing with dinosaurs, comes across as a bit of a puff piece in support of Computer Associates.

On the CA involvement with mainframes, Bloor seems to have overlooked the fact that CA has John Swainson as CEO, and Don Ferguson as Chief Architect. John was previously an IBM VP, Don an IBM Fellow and both Don and John were variously in charge of significant IBM Software Group projects/products.

Personally, I'd like to see someone from IBM find or quote a source for that 70% data number. It's been used for years and years with little or no foundation. Jim Porell quoted this number in some of his excellent and more recent System z strategy presentations; it dates from, I think, 1995.

Secondly, I'd guess it depends what you call business-critical data these days. If Google collapsed or had their data centers in Silicon Valley interrupted, with the loss of Google Docs, YouTube, Google Search, Maps, and similarly Microsoft and/or Yahoo went offline… I'd suspect the whole notion that 70% of business-critical data resides on mainframes would be laughable. Yes, a large percentage of purely text-based transactional data is on mainframes, and yes, the value of those transactions exceeds any other platform, but that is far from 70% of anything much these days… Increasingly, startups, SMEs and Web 2.0 businesses don't use mainframes even for their text-based transactional data.

Finally, on the Bloor/CA assertion that installing mainframe software is arcane. That may be, but here I'm still in full agreement with the mainframe folks, especially if you are talking about real mainframe software as IBM would have it, installed by SMP/E. One of my few claims to fame was reverse engineering key parts of the IBM mainframe VM service process, nearly 20 years ago now. It was then, and SMP/E still is, years ahead of anything in the Windows and UNIX space for pre-req, co-req and if-req processing, and for the ability to build and maintain multiple non-trivial systems from a single data store using binary-only program objects. CA are not the first to spot the need to provide an interface other than ISPF and JCL to build these job streams.
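For readers who never touched SMP/E, here is a toy Java sketch of what pre-req and co-req checking means: a fix can only be applied when everything it pre-reqs is already on the system and everything it co-reqs is going on in the same run. This illustrates the concept only, not how SMP/E itself is implemented, and the fix identifiers are invented.

```java
import java.util.Set;

// Toy requisite check in the spirit of SMP/E APPLY processing.
public class RequisiteCheck {
    record Fix(String id, Set<String> preReqs, Set<String> coReqs) {}

    // A fix may be applied when its pre-reqs are already applied and its
    // co-reqs are either applied or selected in the same run.
    static boolean canApply(Fix fix, Set<String> applied, Set<String> thisRun) {
        boolean preOk = applied.containsAll(fix.preReqs());
        boolean coOk = fix.coReqs().stream()
                .allMatch(id -> applied.contains(id) || thisRun.contains(id));
        return preOk && coOk;
    }

    public static void main(String[] args) {
        Set<String> applied = Set.of("UK00001");            // already on the system
        Set<String> thisRun = Set.of("UK00002", "UK00003"); // selected this run

        Fix fix = new Fix("UK00003",
                Set.of("UK00001"),   // pre-req: must already be applied
                Set.of("UK00002"));  // co-req: must go on together

        System.out.println(canApply(fix, applied, thisRun)); // true
    }
}
```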

But really, continuing to label mainframes as dinosaurs is so 1990's; it's like describing Lance Armstrong as a push-bike rider.

Simon Perry, Principal Associate Analyst – Sustainability, Quocirca, has written a similar piece with a little more detail entitled Mainframe management gets its swagger.

IBM Big Box quandary

In another follow-up from EMC World, the last session I went to was "EMC System z, z/OS, z/Linux and z/VM". I thought it might be useful to hear what people were doing in the mainframe space, although it's largely unrelated to my current job. It was almost 10 years to the day since I was at IBM writing the z/Linux strategy, hearing about early successes etc., and strangely, current EMC CTO Jeff Nick and I were engaged in vigorous debate about implementation details of z/Linux the night before we went and told SAP about IBM's plans.

The EMC World session demonstrated that, as much as things change, they stay the same. It also reminded me how borked the IT industry is, in that we mostly force customers to choose by pricing rather than function. 10-12 years ago, z/Linux on the mainframe was all about giving customers new function, a new way to exploit the technology that they'd already invested in. It was of course also to further establish the mainframe's role as a server consolidation platform through virtualization and high levels of utilization.(1)

What I heard were two conflicting and confusing stories, or at least they should be for IBM. The first was a customer who was moving all his Oracle workloads from a large IBM Power Systems server to z/Linux on the mainframe. Why? Because the licensing on the IBM Power server was too expensive. Using z/Linux and the Integrated Facility for Linux (IFL) allows organizations to do a cost-avoidance exercise. Processor capacity on the IFL doesn't count towards the total installed general processor capacity, and hence doesn't bump up the overall software licensing costs for all the other users. It's a complex discussion and that wasn't the purpose of this post, so I'll leave it at that.

This might be considered a win for IBM, but actually it was a loss. It's also a loss for the customer. IBM lost because the processing was being moved from its growth platform, IBM Power Systems, to the legacy System z. It's good for z since it consolidates its hold in that organization, or probably does. Once the customer has done the migration and conversion, it will be interesting to see how they feel the performance compares. IBM often refers to the IFL and its close relatives, the zIIP and zAAP, as specialty engines, giving the impression that they perform faster than the normal System z processors. That's largely an urban myth though, since these "specialty" engines really only deliver the same performance; they are just measured, monitored and priced differently.

The customer lost because they've spent time and effort to move from one architecture to another, really only to avoid software and server pricing issues. While the System z folks will argue the benefits of their platform, and I'm not about to "dis" them, the IBM Power server can pretty much deliver a good enough implementation as to make the difference largely irrelevant.

The second conflict I heard about was from EMC themselves. The second main topic of the session was a discussion about moving some of the EMC Symmetrix products off the mainframe, as customers have reported that they use too much mainframe capacity to run. The guys from EMC were thinking of moving the function of the products to commodity x86 processors and then linking those via high-speed networking into the mainframe. This would move the function out of band and save mainframe processor cycles, which in turn would avoid an upgrade, which in turn would avoid bumping the software costs up for all users.

I was surprised how quickly I interjected and started talking about WLM SRM Enclaves and moving the EMC apps to run on z/Linux etc. This surely makes much more sense.

I was left, though, with a definite impression that there are still hard times ahead for IBM in large non-x86 virtualized servers. Not that they are not great pieces of engineering; they are. But getting to grips with software pricing once and for all should really be their prime focus, not a secondary or tertiary one. We were working towards pay-per-use once before; time to revisit, methinks.

(1) Spot the irony of this statement given the preceding "Nano, Nano" post!


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and member of the IBM Academy of Technology. I am a Fellow of the British Computer Society (bcs.org). I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
