Archive for the 'mainframe' Category

Back to the future

This week Dell announced three major acquisitions: Wyse, Clerity Solutions, and Make Technologies. These acquisitions, once complete, will offer an awesome combination to move apps and customers to the cloud.

  • Wyse provides application virtualization capability which, in essence, will allow PC-based applications to run in the cloud and be accessed via thin clients and, increasingly, mobile devices like tablets.
  • Clerity delivers application modernization and re-hosting solutions and services. Clerity’s capabilities will enable Dell Services to help customers reduce the cost of transitioning business-critical applications and data from legacy computing systems and onto more modern architectures, including the cloud.
  • Make Technologies brings application modernization software and services that reduce the cost, risk and time required to re-engineer applications, helping companies modernize their applications portfolios so they can reduce legacy infrastructure operating costs. These applications run most effectively on open, standardized platforms including the cloud.

A great set of solutions for organizations looking to get their older apps into a modern execution and device environment. Exciting times for the Dell team supporting these customers.

This very much reminds me of 14-15 years ago and a whole slew of projects where we were trying to drive similar modernization into applications. The IBM Network Station was about to be launched; we had a useful first release of the CICS Transaction Gateway; there was a great start at integrating Java with COBOL-based applications; and there was some fledgling work on extending the COBOL language to support object-oriented principles. My poster session at the IBM Academy of Technology was on legacy modernization. In those days it was obvious that customers needed tools to help them get from where they'd been to where they would be going.

Not enough ever really got there; the financial case often wasn't compelling enough. However, given the performance, scalability and reliability of today's x86/x64 systems, the case for change has moved past compelling: it's essential.

VM Master Class

As is the way, the older you get the more entangled your life becomes. My ex-wife, Wendy Cathcart, née Foster, died of cancer recently; such a waste of a fantastic, vibrant woman and great mother to our children. After the funeral the kids were saying how they'd hardly got any video of her. I had on my shelf, unwatched for probably 10 years or more, a stack of VCR tapes. I'd meant to do something with them, but never got around to it.

I took the tapes into Expressions in Video here in Austin; they were ever so helpful and were able to go from UK PAL-format VCR tapes to DVD, to MPEG-4. Two of the tapes contained the summary videos from the 1992 and 1993 IBM VM Master Class conferences. And here's where the entanglement comes in. Wendy never got much involved in my work, but we went on many business trips together; one of the most memorable was driving from North London to Cannes in the South of France. I had a number of presentations to give, and the first one was after lunch on Monday, the first day. I went to do registration and other related stuff Monday morning, then came back to the room to get the car keys so I could collect my overhead transparencies and handout copies from the car. Unfortunately for me, Wendy had set off in the car with a number of the other wives to visit Nice, France, and my slides and handouts were in the trunk/boot. D'oh.

Unlike this week, where my twitter stream has been tweet-bombed by #VMWorld, back in the 1980's there were almost no VM conferences. IBM had held a couple of internal conferences, and the SHARE user group in the USA had a very active virtual machine group, but there really wasn't anything in Europe except one-day user group meetings. My UK VM User Group had been inspirational for me, and I wanted to give something back and give other virtual machine systems programmers and administrators a chance to get together over an extended period, talk with each other, learn about the latest technologies and hear from some of the masters in the field.

And so it was that I worked through 1990 and 1991 with Paul Maceke to plan and deliver the first ever VM Master Class. We held it at an IBM education facility at La Hulpe, in a forest outside Brussels, Belgium. As I recall, we had people met at the airport and bused in on Sunday, and the conference ran through Friday lunchtime, when we bused them back to the airport. Everything was done on site: meals, classes and hotel rooms. Back in the 1970's and 1980's it was practically required for computer systems to be represented by something iconic; for VM it was the bear. You can read why, and almost everything else about the history of VM, on Melinda Varian's web page; heck, you can even get a Kindle-format version of the history.

So, when it came to the Master Class we needed a bear-related logo. That's where Wendy came in. She drew the "graduate bear", which Paul not only got included in the folders, but also made into metal pins; what a star. Come the 1993 VM Master Class, Wendy did the artwork for the VM Bear and its Client/Server cousin sitting on top of the world, and as I remember, this time Paul actually got real soft-toy bears made. Thanks for all the great memories, Wendy. The videos on YouTube also remind me of many great people from the community; how many can you name? Please feel free to add them in comments here to avoid the YouTube comment minefield.

I'll start with Dick Newson and John Hartman; they couldn't be two more different people, but both were totally innovative, great software developers and designers.

Hot News: Paint dries

I'm guessing I'm not so different from most people: the first time someone explains Groundhog Day, you laugh, but don't believe what you are seeing. It's kinda "n'ah, you're kidding, right!" but some take it seriously.

The same goes for the pronouncements IBM makes regularly about server migrations to the Power Systems platforms and mainframes: you take a step back and say, seriously, you are kidding? You are taking this seriously?

And that was my reaction when I saw this week's piece from Timothy Prickett Morgan at The Register, aka Vulture Central, under the tagline "IBM gloats over HP, Oracle takeouts" – really, seriously, you are kidding, right? Prickett Morgan covers IBM's most recent claims that it migrated "286 customers, 182 were using Oracle (formerly Sun Microsystems) servers and 95 were using machines from Hewlett-Packard" from Unix to IBM's AIX.

What surprises me is not that IBM made the claims – hey, paint dries – but that Prickett Morgan felt it worth writing up (The Register, tagline "Biting the hand that feeds IT"). Really, seriously?

AIX and Power Systems are great; it's just not newsworthy at those minuscule rates compared to the inexorable rise of the x86 architecture in both private and cloud data centers. It really won't be long before IBM can no longer afford to design and manufacture those systems. And there's the clue to the migrations.

You stick your neck out and go with Sun, now Oracle, or HP Unix systems; it's a battle, but you either genuinely believe you were right, or you were just hoodwinked or cajoled into doing it for one reason or another. So, now they are both in terminal decline, what's a data center manager to do? Yep, the easiest thing is to claim you were right with the platform, and that by doing so you were part of a movement that forced IBM to lower its prices, and now the right thing to do is migrate to IBM as they have the best Unix solution. Phew, that's alright, no one noticed, and everyone goes on collecting their paychecks.

Prickett Morgan ends by wondering "why Oracle, HP, and Fujitsu don't hit back every time IBM opens its mouth with takeout figures of their own to show they are getting traction against Big Blue with their iron" – because, frankly, no one cares except IBM. Everyone else is too busy building resilient, innovative, and cost-effective solutions based on x86 Linux, either in their own data center or in the "cloud".

Deviation: The new old

104 modules in a Doepfer A-100PMD12 double case sitting on top of the A-100PMB case

Deadmau5 analog modular setup

IBM 360/40 at Attwood Statistics

Anyone who knows me knows that I've retained a high level of interest in dance music. I guess it stems from growing up in and around London in the early 70's and the emergence of funk, and especially Jazz Funk, through some of the new music put together by people like Johnny Hammond ("Los Conquistadors Chocolate") and Idris Muhammed ("Could Heaven Ever Be Like This"), which remain to this day two of my all-time favorite tracks, along with many from Quincy Jones.

Later, my interest was retained by the further exploitation of electronics as disco became the plat du jour, and although I, like most others, became disenchanted once it became metronomic and formulaic, I'm convinced that the style, type and beat of the music you like and listen to create pathways in your brain to activate feelings.

And so it was that, with time and energy on my hands over the past few years, I've re-engaged with dance music. Mostly because I like it; it activates those pathways in my mind that release feel-good endorphins, and I enjoy the freedom of the dance.

I've been to some great live performances, Tiesto and Gareth Emery especially, down in San Antonio and Houston, and anyone who thinks these guys are just DJs playing other people's music through a computer or off CDs is just missing the point.

However, one electronic music producer more than any other has really piqued my interest: Deadmau5, aka Joel Zimmerman from Toronto. I first saw Deadmau5 during South by Southwest (SXSW) in 2008, when Joel played at the now defunct Sky Lounge on Congress Ave. The club was small enough that you could actually stand at the side of the stage and see what he was doing; it was a fascinating insight. [In this video on YouTube, one of many from that night, not only can you see Joel "producing" music, but if you stop the video on the right frame at 16 seconds, you can see me in the audience! Who knew...]

I saw him again in March 2009 at Bar Rio in Houston. This time I had a clear line of sight to what he was doing from the VIP balcony. It was fascinating; I actually saw and heard him make mistakes, not significant mistakes, but ones that proved he was actually making live music. [You can read my review from the time here, including links to YouTube videos.] It turns out that something he was using during that Houston concert was either a prototype of, or something similar to, a monome.

Joel regularly posts and runs live video streams from his home studio, and recently posted this video of his latest analog modular system. It and some of the other videos are a great insight into how dance music producers work. Watching it this morning, I was struck by the similarities to the IBM 360/40 mainframe, the first computer I worked on. I can still remember the first time an IBM hardware engineer, who might have been Paul Badger or Geoff Chapman, showed me how the system worked: how to put it into instruction step, how to display the value of registers and so on. I felt the same way watching the Deadmau5 video; I've got to get me some playtime with one of these.

And yes, the guy in the picture above is me and the 360/40. It was taken, I'd guess, in probably the spring of 1976, at Attwood Statistics in Berkhampstead, Herts, UK.

The power and capacity of the IBM 360/40 are easily exceeded by handheld devices such as the Dell Streak. Meanwhile, it's clear that some music producers are headed in the opposite direction, moving from digital software to analog hardware. The new old.

70% of something is better than..

70% of nothing at all. [With apologies to Double Exposure]

As I've said before, I'm an avid reader of Robin Bloor's Have Mac Will Blog blog. I also follow him on twitter, where he is @robinbloor. Sadly his blog doesn't accept trackbacks, but I'll leave a short comment so he gets to see this.

His latest blog entry, CA: Dancing with dinosaurs, comes across as a bit of a puff piece in support of Computer Associates.

On the CA involvement with mainframes, Bloor seems to have overlooked the fact that CA has John Swainson as CEO and Don Ferguson as Chief Architect. John was previously an IBM VP, Don an IBM Fellow, and both were variously in charge of significant IBM Software Group projects/products.

Personally, I'd like to see someone from IBM find/quote a source for that 70% data number. It's been used for years and years with little or no foundation. Jim Porell quoted this number in some of his excellent and more recent System z strategy presentations; it dates from, I think, 1995.

Secondly, I'd guess it depends what you call business-critical data these days. If Google collapsed, or had their data centers in Silicon Valley interrupted, with the loss of Google Docs, YouTube, Google search and Maps, and similarly Microsoft and/or Yahoo went offline… I suspect the whole notion that 70% of business-critical data resides on mainframes would be laughable. Yes, a large percentage of purely text-based transactional data is on mainframes, and yes, the value of those transactions exceeds any other platform, but that is far from 70% of anything much these days. Increasingly, startups, SMEs and Web 2.0 businesses don't use mainframes even for their text-based transactional data.

Finally, on the Bloor/CA assertion that installing mainframe software is arcane. That may be, but here I'm still in full agreement with the mainframe folks, especially if you are talking about real mainframe software as IBM would have it, installed by SMP/E. One of my few claims to fame was reverse engineering key parts of the IBM mainframe VM service process, nearly 20 years ago now. SMP/E was then, and still is, years ahead of anything in the Windows and UNIX space for pre-req, co-req and if-req processing, and for the ability to build and maintain multiple non-trivial systems from a single data store using binary-only program objects. CA are not the first to spot the need to provide an interface other than ISPF and JCL to build these job streams.
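
For anyone who has never lived with SMP/E, here's a minimal sketch of what pre-req, co-req and if-req processing means in practice. It's an illustrative model in Python, not how SMP/E is implemented, and the PTF names and requisite lists are invented:

```python
# Illustrative model of SMP/E requisite checking -- not the real implementation.
# PTF names and requisite lists below are invented for the example.

# Each PTF declares: PRE (must already be applied), CO (must be applied in the
# same set), and IF (required only when a named function is also installed).
PTFS = {
    "UX00001": {"pre": [], "co": [], "if": {}},
    "UX00003": {"pre": ["UX00001"], "co": ["UX00004"], "if": {"JVM": ["UX00005"]}},
    "UX00004": {"pre": [], "co": ["UX00003"], "if": {}},
    "UX00005": {"pre": [], "co": [], "if": {}},
}

def can_apply(candidates, applied, installed_functions):
    """Check whether the candidate PTF set can be APPLYed to a target zone."""
    target = applied | candidates  # zone state if the APPLY were to succeed
    reasons = []
    for ptf in candidates:
        req = PTFS[ptf]
        reasons += [f"{ptf}: missing pre-req {p}"
                    for p in req["pre"] if p not in target]
        reasons += [f"{ptf}: co-req {c} must be in the same APPLY"
                    for c in req["co"] if c not in candidates]
        for func, needs in req["if"].items():
            if func in installed_functions:
                reasons += [f"{ptf}: if-req {n} needed because {func} is installed"
                            for n in needs if n not in target]
    return (not reasons, reasons)

ok, why = can_apply({"UX00003"}, {"UX00001"}, {"JVM"})
print(ok)   # False: co-req UX00004 and if-req UX00005 are both missing
print(why)
```

The real thing does this across thousands of elements and multiple target zones from that single consolidated data store, which is exactly why nothing in the Windows or UNIX space of the day came close.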

But really, continuing to label mainframes as dinosaurs is so 1990's; it's like describing Lance Armstrong as a push-bike rider.

Simon Perry, Principal Associate Analyst – Sustainability at Quocirca, has written a similar piece with a little more detail, entitled Mainframe management gets its swagger.

IBM Big Box quandary

In another follow-up from EMC World, the last session I went to was "EMC System z, z/OS, z/Linux and z/VM". I thought it might be useful to hear what people were doing in the mainframe space, although it's largely unrelated to my current job. It was almost 10 years to the day since I was at IBM writing the z/Linux strategy, hearing about early successes etc., and, strangely, current EMC CTO Jeff Nick and I were engaged in vigorous debate about implementation details of z/Linux the night before we went and told SAP about IBM's plans.

The EMC World session demonstrated that, as much as things change, they stay the same. It also reminded me how borked the IT industry is, in that we mostly force customers to choose by pricing rather than function. 10-12 years ago z/Linux on the mainframe was all about giving customers new function, a new way to exploit the technology they'd already invested in. It was of course also to further establish the mainframe's role as a server consolidation platform through virtualization and high levels of utilization.(1)

What I heard were two conflicting and confusing stories, or at least they should be for IBM. The first was a customer who was moving all his Oracle workloads from a large IBM Power Systems server to z/Linux on the mainframe. Why? Because the licensing on the IBM Power server was too expensive. Using z/Linux and the Integrated Facility for Linux (IFL) allows organizations to do a cost-avoidance exercise. Processor capacity on the IFL doesn't count towards the total installed general-processor capacity, and hence doesn't bump up the overall software licensing costs for all the other users. It's a complex discussion and that wasn't the purpose of this post, so I'll leave it at that.
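
Well, almost. Here's a rough, hypothetical sketch of the cost-avoidance math as I understand it; every number below is invented, and only the shape of the calculation matters:

```python
# Hypothetical illustration of IFL cost avoidance -- every number is invented.
# Much mainframe software is priced on total general-purpose (CP) capacity;
# capacity on an IFL doesn't count towards that total.
PRICE_PER_MSU = 1_000      # invented monthly software charge per MSU

cp_msu = 800               # existing general-purpose capacity, licensed
new_linux_msu = 200        # the incoming Oracle/Linux workload

# Grow the workload on general-purpose CPs: all capacity is chargeable.
cost_on_cps = (cp_msu + new_linux_msu) * PRICE_PER_MSU

# Put the same workload on IFLs: the general-purpose count is unchanged.
cost_on_ifls = cp_msu * PRICE_PER_MSU

print(cost_on_cps - cost_on_ifls)  # 200000 per month avoided (invented units)
```

Because the IFL capacity never enters the general-purpose count, the new workload grows without dragging up the licensing bill for everything else on the box.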

This might be considered a win for IBM, but actually it was a loss. It's also a loss for the customer. IBM lost because the processing was being moved from its growth platform, IBM Power Systems, to the legacy System z. It's good for z since it consolidates its hold in that organization, or probably does. Once the customer has done the migration and conversion, it will be interesting to see how they feel the performance compares. IBM often refers to the IFL and its close relatives, the zIIP and zAAP, as specialty engines, giving the impression that they perform faster than the normal System z processors. That's largely an urban myth, though; these "specialty" engines really only deliver the same performance, they are just measured, monitored and priced differently.

The customer lost because they've spent time and effort to move from one architecture to another, really only to avoid software and server pricing issues. While the System z folks will argue the benefits of their platform, and I'm not about to "dis" them, the IBM Power server can actually deliver a good enough implementation as to make the difference largely irrelevant.

The second conflict I heard about was from EMC themselves. The second main topic of the session was a discussion about moving some of the EMC Symmetrix products off the mainframe, as customers have reported that they use too much mainframe capacity to run. The guys from EMC were thinking of moving the function of the products to commodity x86 processors and then linking those via high-speed networking into the mainframe. This would move the function out of band and save mainframe processor cycles, which in turn would avoid an upgrade, which in turn would avoid bumping up the software costs for all users.

I was surprised by how quickly I interjected and started talking about WLM SRM enclaves and moving the EMC apps to run on z/Linux, etc. This surely makes much more sense.

I was left, though, with a definite impression that there are still hard times ahead for IBM in large non-x86 virtualized servers. Not that they are not great pieces of engineering; they are. But getting to grips with software pricing once and for all should really be their prime focus, not a secondary or tertiary one. We were working towards pay-per-use once before; time to revisit, methinks.

(1) Spot the irony of this statement given the preceding "Nano, Nano" post!

Whither IBM, Sun and Sparc?

So the twitterati and blog space are alight with discussion that IBM is to buy Sun for $6.25 billion. The only way we'll know if there is any truth to it is if it goes ahead; these rumors are never denied.

Everyone is of course focussed on the big questions, which are mostly around hardware synergies (servers, chips, storage) and Java. Since I don't work at IBM I have no idea what's going on or if there is any truth to this. But there are some more interesting technical discussions to be had than those offered by people who generally think they have an informed opinion.

IBM bought Transitive in 2008; Transitive has some innovative emulation software called QuickTransit. It allows binaries created and compiled on one platform to be run on another hardware platform without change or recompilation. There were some deficiencies, and you can read more in my terse summary blog post from the time of the acquisition announcement. Prior to the acquisition, QuickTransit supported a number of platforms including SPARC and PowerMac, and had been licensed by a number of companies, including IBM.

I assume IBM is in the midst of their classic "blue rinse" process, which explains the almost complete elimination of the Transitive web site(1), and it's nothing more sinister than them getting ready to re-launch under the IBM branding umbrella of PowerVM or some such.

Now, one could speculate that by acquiring Sun, IBM would achieve three things that would enhance their PowerVM strategy and build on their Transitive acquisition. First, they could reduce the platforms supported by QuickTransit and, over time, not renegotiate their licensing agreements with 3rd parties. This would give IBM "leverage" in offering binary emulation for the architectures previously supported on, say, only the Power and mainframe processor ranges.

Also, by further enhancing QuickTransit and driving it into the IBM microcode/firmware layer, thus making it more reliable and providing higher performance by reducing duplicate instruction handling, they could effectively eliminate future SPARC-based hardware, replacing it with UNIX-based Power hardware and PowerVM virtualization. This would also have the effect of taking this level of emulation mainstream, negating much of the transient (pun intended) nature typically associated with this sort of technology.

Finally, by acquiring Sun, IBM would eliminate any IP barriers that might arise from the nature of the implementation of the SPARC instruction set.

That's not to say that there are no problems to overcome. First, as it currently stands the emulation tends to map calls from one OS into another, rather than operating at a pure architecture level. Pushing some of the emulation down into the firmware/microcode layer wouldn't help emulate CALL SOLARIS API with X, Y, even if it would emulate the machine architecture instructions executed to do this. So, is IBM really committed to becoming a first-class Solaris provider? I don't see any proof of this since the earlier announcement; Solaris on Power is pretty non-existent. The alternative is that IBM uses Transitive technology to map these calls into AIX, which is much more likely.
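
To illustrate the distinction between instruction emulation and OS-call mapping, here's a toy sketch; the table and call names below are invented and bear no relation to QuickTransit's internals:

```python
# Toy sketch of OS-call mapping versus pure instruction emulation -- the
# mapping table below is invented and is not how QuickTransit actually works.

# Even if instruction translation moves into firmware, a guest binary's OS
# calls still need a destination. A translator keeps a source-to-host table.
SOLARIS_TO_AIX = {
    "open": "open",     # many calls map one-to-one...
    "mmap": "mmap",
    "door_call": None,  # ...but Solaris doors have no direct AIX equivalent
}

def translate_call(name, *args):
    """Map a guest (Solaris-style) call to its host (AIX-style) equivalent."""
    host = SOLARIS_TO_AIX.get(name)
    if host is None:
        # The hard cases: emulate in a support library, or fail the guest.
        raise NotImplementedError(f"no AIX mapping for Solaris call '{name}'")
    return host, args

print(translate_call("open", "/etc/hosts"))  # ('open', ('/etc/hosts',))
```

Instruction emulation is the mechanical part; it's the calls with no host equivalent, like the doors entry in this made-up table, that decide whether you can map into AIX or have to provide a real Solaris environment.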

In economic downturns, big, cash-rich companies are kings. Looking back over the last 150 years there are plenty of examples of the big buying competitors and emerging from the downturn even more powerful. Ultimately I believe that the proprietary chip business is dead; it's just a question of how long it takes to die, and whether regulators feel that allowing mergers and acquisitions in this space is good or bad for the economy and the economic recovery.

So, there’s a thought. As I said, I don’t work at IBM.

(1) It is mildly amusing to see that one of the few pages left extols the virtues of the Transitive technology, by one Mendel Rosenblum, formerly Chief Scientist and co-founder of VMware.

What’s up with industry standard servers? – The IBM View

I finally had time to read through the IBM 4Q '08 results yesterday evening. It is good to see that Power Systems saw revenue growth for the 10th straight quarter, and that virtualization and high utilization rates are driving sales of both mainframe and Power servers.

I was somewhat surprised, though, to see the significant decline (32%) in x86 server sales, System x in IBM nomenclature, put down to the strong demand for "virtualizing and consolidating workloads into more efficient platforms such as POWER and mainframe".

I certainly didn't see any significant spike in interest in Lx86 in the latter part of my time with IBM and, as far as I know, IBM still doesn't have many reference customers for it, despite a lot of good technical work going into it. The focus from sales just wasn't there. So that means customers were porting, rewriting or buying new applications, not something that would usually show up in quarterly sales swings, more as long-term trends.

Seems to me the more likely reason behind IBM's decline in x86 was simply as Bob Moffat [IBM Senior Vice President and Group Executive, Systems & Technology Group] put it in his December '08 interview with CRN's ChannelWeb, when referring to claims by HP's Mark Hurd: "The stuff that Mr. Hurd said was going away kicked his ass: Z Series [mainframe hardware] outgrew anything that he sells. [IBM] Power [servers] outgrew anything that he sells. So he didn't gain share despite the fact that we screwed up execution in [x86 Intel-based server] X Series."

Moffat is quoted as saying IBM screwed up x86 execution multiple times, so one assumes at least Moffat thinks it's true. And yes, as I said on twitter, yesterday was a brutal day in the tech industry; with the Intel and Microsoft layoffs, the poor AMD results, the IBM sales screw-up and Sun starting previously announced layoffs, as the IBM results say, industry standard hardware is susceptible to the economic downturn. I'd disagree, though, with the IBM results statement that industry standard hardware is "clearly more susceptible".

My thoughts and best wishes go out to all those who found out yesterday that their jobs were riffed, surplused or rebalanced; many of those, including 10 people I know personally, did not work in the x86 or, as IBM would have it, "industry standard" hardware business.

Virtualization, The Recession and The Mainframe

Robin Bloor has posted an interesting entry on his "Have mac will blog" blog on the above subject. He got a few small things wrong; well, mostly he got all the facts wrong, but right from a populist historical-rewrite perspective. Of course I posted a comment, but as always made a few typos that I now cannot correct, so here is the corrected version. (Feel free to delete the original comments, Robin… or just make fun of me for the mistakes, but know I was typing outdoors at the South Austin Trailer Park and Eatery, with open-toe sandals on, and it's cold tonight in Austin. Geeky, I know!)

What do they say about a person who is always looking back to their successes? Well, in my case, it's only because I can't post on my future successes; they are considered too confidential for me to even leave slides with customers when I visit… 

VM revisited, enjoy:

 

Mark Cathcart (Should have) said,
on October 23rd, 2008 at 8:16 pm

Actually Robin, while it's true that the S/360 operating systems were written in Assembler, as was much of the 370 operating systems, PL/S was already in use for some of the large and complex components.

It is also widely known that virtualization, as you know it on the mainframe today, was first introduced on the S/360 Model 67. This was a "bastard child" of the S/360 processors that had virtual memory extensions. At that point, the precursor to VM/370 used on the S/360-67 was CP-67.

I think you'll also find that IBM never demonstrated 40,000 Linux virtual machines on a single VM system; it was David Boyes of Sine Nomine, who also recently ported OpenSolaris to VM.

Also, there's no such thing as pSeries Unix in the marketing nomenclature any more; it's now Power Systems, whose virtualization supports AIX aka IBM "Unix", System i or IBM i to use the modern vernacular, and Linux on Power.

Wikipedia is a pretty decent source for information on mainframe virtualization, right up until VM/XA, where there are some things that need correcting; I just have not had the time yet.

Oh yeah, by the way. While 2TB of memory on a mainframe gives pretty impressive virtualization capabilities, my favorite anecdote, and it's true because I did it, was back in 1983, at Chemical Bank in New York. We virtualized a complete, production, high-availability, online credit card authorization system by adding just 4MB of memory, boosting the total system memory to a whopping 12MB! Try running any Intel hypervisor or operating system in just 12MB of memory these days; a great example of how efficient mainframe virtualization is!

 

Back in the day – way back

I suggested to @adamclyde that we take a twitter conversation about the gray area between personal and corporate blogging offline, into email. In my response to him, like some "grumpy old man", I started by recalling the good old days when my URLs were emea.ibm.com/(something), then ibm.com/s390/corner and later ibm.com/servers/corner.

Later I went looking and found some of my webpages from 2000 on the Internet Archive. I was even more delighted to find they had some of my old presentations. I didn't check through all of them, but my V2 Corner is here. I've taken one of my better presentations from the Internet Archive and posted it on SlideShare.

Enterprise Workstation Management - From Chaos to Order

The PDF version doesn’t have all the overlay colors right, and some of the embedded graphics are missing, but it’s still worth looking through for both content and style.

 

If Google can celebrate its 10th anniversary by reposting its 2001 index, well, how about letting me get away with reposting a presentation from 1996 that originated in 1989! The presentation has its origins in 1989 as a Lotus Freelance presentation printed on real overheads via a plotter. It covers the management of workstations and PCs in corporate environments.

This version is dated June 1996 and was recovered from the Internet Archive. Some of the colored overlays are the wrong colors and some of the graphics are missing. I still think it's worth taking a look through for both style and content. I got the summary slide wrong, but not by much, as we move to what some are calling Cloud Clients.

Most Mainframe MIPS Installs are Linux

Over on the ibmeye blog, Greg makes this observation: "I found this surprising (if true): More than half the mainframe MIPS IBM sells are Linux" and "That seems to go against the trust of IBM's marketing push."

I have no idea if the numbers quoted are accurate, but I don’t see the inconsistency.

We've been on an Intel and general server consolidation drive for 15 years now. Back in the mid-90's it was much harder; we were trying to convince organizations to move their Unix workloads to OS/390, aka MVS, aka z/OS, using UNIX System Services, but it was a tough sell. Even before that, a few of us, primarily in Europe, were driving to get customers to consolidate under-utilized and unreliable file servers onto MVS or VM, using either LANRES (for Novell NetWare) or the LAN File Services for MS and OS/2 LAN Servers.

I think the current trend to migrate to Linux on the mainframe is entirely consistent with organizations' efforts to make the most of the environmental benefits of a large centralized server, along with the ease and openness of Linux. IBM has a massive internal effort, moving something like 3,500 servers.

Can you provide examples of where you think it’s inconsistent Greg?

Federal Reserve and Mainframes

Over on the Mainframe Executive blog, there is an open letter to the US Federal Reserve Bank questioning the Fed's apparent desire to move or switch their systems away from mainframes to distributed systems. Well, you would expect no less from the Mainframe Executive blog. I have a different take on why the Fed should not only keep their mainframe, but why they might want to move more work to it.

I worked on many of the early mainframe Internet applications. I did the high-level design and oversaw the implementation of an Internet banking solution that the bank, Sun Microsystems and Microsoft had all failed to get to scale. Our design went from 3k users to, I believe, close to 990k users at the end of two years in production, without an upgrade and without a system outage. It was built off two mainframe systems outside the firewall, running as a Sysplex. I also did a design review for a bank that had lost close to $60k from four accounts; the back end was on the mainframe, the mid-tiers and Internet servers distributed.

The point of this post, though, isn't to gloat about my success, to be a 'mainframe bigot', or even to say the Fed should use the mainframe. In the Mainframe Executive post they raise the usual specter of security; yes, security is a big deal for banks, even more so for the Fed. So yes, make a big deal of it.

However, the single most important thing to understand about building trusted computing systems isn't that you provide a 100% secure environment in which applications, aka business transactions, run. It is that you can show who did what, when, and how. Auditing is much more important than security. If you believe you have a 100% secure system and you lose some money but can't audit it, what do you do, shrug your shoulders and say "oh well, never mind"?

Auditing isn't just about seeing that you have procedures in place. It is the ability to pick apart a debit transaction that was executed at 4:05pm along with 30,000 others, and show how that transaction was invoked, where from, under what security context, with what ID, from which originating network address, and more. That might require looking through the logs of 7-10 distributed systems.

If, like the bank I did the design review for, you can't show the correlation of events leading up to the execution of the transaction, and you don't know for certain where the user entered the network, what ID they used, and how that security context was passed from one system to another, then you don't have security, no matter what they say.
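
To make that concrete, here's a hedged sketch of what such a correlation looks like when it works; the log formats, field names and values are all invented:

```python
# Illustrative sketch of cross-system audit correlation -- the log formats,
# field names and values are all invented; real systems are rarely this tidy.
web_log = [
    {"corr_id": "TX4711", "src_ip": "10.1.2.3", "user": "jsmith", "time": "16:04:58"},
]
midtier_log = [
    {"corr_id": "TX4711", "auth_context": "jsmith->APPSRV01", "time": "16:04:59"},
]
mainframe_log = [
    {"corr_id": "TX4711", "tran": "DEBIT", "amount": -50000, "time": "16:05:00"},
]

def trace(corr_id, *logs):
    """Reassemble one transaction's path from every system's log."""
    return [entry for log in logs for entry in log if entry["corr_id"] == corr_id]

# The debit executed at 16:05:00 among thousands of others: who, where, how.
for event in trace("TX4711", web_log, midtier_log, mainframe_log):
    print(event)
```

If any system in the chain fails to carry the correlation ID or the security context forward, the join above is impossible, and that's precisely the gap that design review exposed.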

When you are looking after the nation's money, and despite the obvious current financial position of the US, budgets notwithstanding, I'd say that was pretty important. What does the Fed say?

I say “Show me the audit, show me the audit, show me the audit…” (repeat ad infinitum)

Time for dinner – The IBM Hydro-cluster

I got an email pointing out that I omitted a link to the YouTube video of the IBM hydro-cluster. So, here it is.

Towards the end of the video, Jeff Gluck says "hot water can be moved off site", "to heat your home or cook a family dinner". In the famed Larry and Sergey "do no evil" context, I guess this is goodness. While I appreciate that there is a very serious side to the "greening" of the datacenter, I couldn't help but laugh.

Back in the 1970's, on one of the first large-scale computer servers, aka mainframes, I worked on, we used to store takeaways inside the server for 4-5 hours to keep them warm on evening and night shifts. The really scary thing: back in those days microwaves didn't exist!

The IBM 370/145 was a T-shaped server, lying on its back; the whole back of the T was largely empty, ready in case you wanted to upgrade to a 370/148 or 155 (I think). So it became commonplace to store stuff in there that you wanted to keep warm and dry. Ideal for takeaways and girlie magazines (so I'm told!).

IBM’s new Enterprise Data Center vision

IBM announced today our new Enterprise Data Center vision. There are lots of links from the new ibm.com/datacenter web page, which split out into their various constituencies: Virtualization, Energy Efficiency, Security, Business Resiliency and IT Service Delivery.

To net it out from my perspective, though, there is a lot of good technology behind this, and an interesting direction summarized nicely starting on page 10 of the POV paper linked from the new data center page, or here.

What it lays out are the three main stages of adoption for the new data center: simplified, shared and dynamic. The Clabby Analytics paper, also linked from the new data center page or here, puts the three stages in a more consumable, practical tabular format.

They are really not new; many of our customers will have discussed them with us many times before. In fact, it's no coincidence that the new Enterprise Data Center vision was launched the same day as the new IBM z10 mainframe. We started discussing and talking about these when I worked for Enterprise Systems in 1999, and we formally laid the groundwork in the on demand strategy in 2003. In fact, I see the Clabby paper has used the on demand operating environment block architecture to illustrate the service patterns. Who'd have guessed.

Simplify: reduce costs for infrastructure, operations and management

Share: for rapid deployment of infrastructure, at any scale

Dynamic: respond to new business requests across the company and beyond

However, the new Enterprise Data Center isn't based on a mainframe, z10 or otherwise. It's about a style of computing: how to build, migrate and exploit a modern data center. Power Systems has some unique functions in both the Share and Dynamic stages, like partition mobility, with lots more to come.

For some further insight into the new data center vision, take a look at the presentation linked off my On a Clear day post from December.

Funeral for a friend

Long-time friend, former IBM VM and LAN Systems Director, and now fellow Austin resident Art Olbert pointed me to this video. It's the University of Manitoba holding a funeral procession for their mainframe system after some 47 years of service. Nothing on their web site says what they've replaced it with; I've emailed them and asked. Their web site is currently running on Apache on Linux, after migrating from Solaris some time in 2005. As always, Slashdot covers this, with comments that range from the helpful to the absolutely bizarre.

Art is familiar with this type of stunt; he is lovingly remembered for blowing up an IBM mainframe at the announcement of the IBM LAN Server in the 1990's. Sorry Art, couldn't avoid mentioning it :-) – Ahh, the good old days.


About & Contact

I'm Mark Cathcart, Senior Distinguished Engineer in Dell's Software Group. I was formerly Director of Systems Engineering in the Enterprise Solutions Group at Dell, and an IBM Distinguished Engineer and member of the IBM Academy of Technology. I'm an information technology optimist.
