Archive for the 'esb' Category

APIs and Mainframes


I try to read as many American Banker tech articles as I can. Since I no longer work, I chose not to take out a subscription, so some I can read, while others are behind the paywall.

This one caught my eye, as it’s exactly what we did circa 1998/99 at National Westminster Bank (NatWest) in the UK. The project was part of the rollout of a browser-based intranet banking application, as a proof of concept, to be followed by a full-blown Internet banking application. Previously, both Microsoft and Sun had tackled the project and failed. Microsoft had scalability and reliability problems, and, from memory, Sun just pushed too hard to move key components of the system to its servers, which in effect killed their attempt.

The key to any system design and architecture is being clear about what you are trying to achieve and what the business needs to do. Yes, you need a forward-looking API definition, one that can accommodate new business opportunities and grow with the business and the market. This is where old mainframe applications often failed.

Back in the 1960s, applications were written to meet specific and stringent tasks; performance was key. Subsecond response times were almost always the norm, as there would be hundreds or thousands of staff dependent on them for their jobs. The fact that many of those applications have survived to this day, most still on the same mainframe platform, is a tribute to their original design.

When looking at exploiting them from the web, if you let “imagineers” run away with what they “might” want, you’ll fail. You have to start with exposing the transaction and database as a set of core services based on the first application that will use them. Define your API structure to allow for growth and further exploitation. That’s what we successfully did for NatWest. The project rolled out on the internal IP network, and a year later, to the public via the Internet.

Of course we didn’t just expose the existing transactions, and yes, firewall, dispatching and other “normal” services expected of an Internet offering were provided off platform. However, the core database and transaction monitor were behind a mainframe-based webserver, which was “logically” firewalled from the production systems via an MPI that defined the API and also routed requests.
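That routing layer is easier to sketch than describe. Here’s a minimal illustration in modern Java, hypothetical throughout (we certainly didn’t write it in Java in 1998, and the operation and transaction names are invented); the point is that the web tier can only invoke what the API defines, and everything else is rejected before it gets anywhere near production:

    import java.util.Map;

    // Hypothetical sketch of a "logical firewall" in front of mainframe
    // transactions: the API is a closed, explicit list, and each operation
    // is routed to a named backend transaction.
    public class ApiRouter {

        // The API definition: the only operations the web tier may call.
        private static final Map<String, String> OPERATION_TO_TRANSACTION = Map.of(
                "getBalance",     "TXNBAL1",
                "listStatements", "TXNSTM1",
                "transferFunds",  "TXNTRF1");

        public String route(String operation, String payload) {
            String txn = OPERATION_TO_TRANSACTION.get(operation);
            if (txn == null) {
                // Anything outside the defined API never reaches production.
                throw new IllegalArgumentException("unknown operation: " + operation);
            }
            return invokeTransaction(txn, payload);
        }

        // Placeholder for the call into the transaction monitor.
        private String invokeTransaction(String txn, String payload) {
            return "OK";
        }
    }

Growth then means adding entries, or versioned operations, to that table, rather than exposing the backend wholesale.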

So I read through the article to try to understand what Shamir Karkal, the source for Barba’s article, felt the issue was. Starting at the section “Will the legacy systems issue affect the industry’s ability to adopt an open API structure?”, which began with a history lesson, I just didn’t find it.

The article wanders between a discussion of the apparent lack of a “service bus” style implementation and the ability of Amazon to sell AWS and rapidly change the API to meet the needs of its users.

The only real technology discussion in the article that had any merit was where they talked about screen scraping. I guess I can’t argue with that, but surely we must be beyond that now? Do banks really still have applications that are bound by their green-screen/3270 UI? That seems so 1996.

A much more interesting report is this one on more general Open Bank APIs, especially since it takes the UK as a model and reflects on how poor US banking is by comparison. I’ll be posting a summary of my ongoing frustrations with the ACH over on my personal blog sometime in the next few days. The key technology point here is that there is no way to have a real-time bank API, open, mainframe or otherwise, if the ACH system won’t process it. That’s America’s real problem.

Got ServiceMix?

If you’ve been keeping an eye on the news and job position listings at Dell, you’ll have seen a number of positions open up over the last three months for Java and Service Bus developers, not to mention our completed acquisition of Scalent. We are busy working on the first release of the Dell “soup to nuts” virtualization management, orchestration and deployment software, one of the core technologies of which is Apache ServiceMix.

One of the open positions we’ve got is for a Senior Software Engineer with solid ServiceMix skills from a programming perspective. This job listing is for the position; the job description and skills will be updated over the next few days, but if you’d like to join the team architecting, designing and programming Dell’s first real software product, one aimed at making the virtual data center easy to use, as well as open, capable and affordable to run, go ahead and apply now.

If you make it through the HR process, I’ll see you at the interview…

Is SOA dead?

There has been a lot of fuss since the start of the new year around the theme “SOA is dead”. Much of this has been attributed to Anne Thomas Manes’ blog entry on the Burton Group’s blog, here.

Infoworld’s Paul Krill jumped on the bandwagon with an SOA obituary, quoting Anne’s work and saying “SOA is dead but services will live on”. A quick-fire response came on a number of fronts, like this one from Duane Nickull at Adobe, and then this from James Governor at Redmonk, where he charismatically claims “everything is dead”.

First up: I’ve seen this happen many times in my career, and James touches on a few of the key ones, since we were there together. Or rather, I took advantage of his newness and thirst for knowledge as a junior reporter to explain to him how mainframes worked, and what the software could be made to do. I knew from ten years before I met James that evangelists, and those with an agenda, would often claim something was “dead”. It came from the early 1980s mainframe “wars”; yes, before there was a PC, we were having our own internal battles: this was dead, that was dead, and so on.

What I learned from that experience is that technical people form crowds. Just like at the public hangings of the middle ages, they are all too quick to stand around and shout “hang him”. These days it’s a bit more complex: first off there’s Slashdot, and then we have the modern equivalent of speakers’ corner, aka blogs, where those who shout loudest and most frequently often get heard most. However, what most people want is not a one-sided rant, but to understand the issues. Claiming anything is dead often gives the claimer licence not to understand the thing that is supposedly “dead”, but simply to give reasons why that must be so and move on to advising what you should do instead. It was a similar debate last year that motivated me to document my “evangelism” years on the about page of my blog.

The first time I heard “SOA is dead” wasn’t Anne’s blog. It wasn’t even, as John Willis, aka botchagalupe on Twitter, claims in his cloud drop #38, him and Michael Cote of Redmonk last year. No sir, it was back in June 2007, when theregister.co.uk reprinted a piece by Clive Longbottom, Head of Research at Quocirca, under the headline SOA – Dead or Alive?

Clive got closest to the real reasons why SOA came about, in my opinion, and thus why SOA will prevail, despite rumours of its demise. It is not just about services, from my perspective; it is about truly transactional services, which are often part of a workflow process.

Not that I’m about to claim that IBM invented SOA, or that my role in either the IBM SWG SOA initiative or the IBM STG services initiative was anything other than that of a team player rather than a lead. However, I did spend much of 2003/4 working across both divisions, trying to explain the differences and similarities between the two, and why one needed the other, or at least a relationship with it. And then IBM marketed the heck out of SOA.

One of the things we wanted to do was to unite the different server types around a common messaging and event architecture. There was almost no requirement for this to be synchronous, and a lot of reasons for it to be services based. Many of us had just come from the evolution of object technology inside IBM and from working on making Java efficient within our servers. Thus, a services-based approach seemed, for many reasons, the best one.

However, when you looked at the types of messages and events that would be sent between systems, many of them were crucial to the effective and efficient running of the infrastructure; they had, in effect, transactional characteristics. That is, a given event-a could initiate actions a, then b, then c and finally d. While action-d could be started before action-c, it couldn’t be started until action-b was completed, and this was dependent on action-a. Importantly, none of these actions should be performed more than once for each instance of an event.

Think failure of a database or transactional server: create a new virtual server, boot the OS, start the application/database server, roll back incomplete transactions, take over the network, and so on.
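To make those ordering and exactly-once constraints concrete, here’s a minimal sketch, assuming a hypothetical EventWorkflow class and the invented action names a through d; it illustrates the constraints, not how we actually built it:

    import java.util.List;
    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // One instance of this class tracks one event instance. Actions may
    // only start once their dependencies have completed, and each action
    // runs at most once per event instance.
    public class EventWorkflow {

        // b depends on a; c and d both depend on b. So d may start before
        // c, but neither can start until b has completed.
        private static final Map<String, List<String>> DEPENDS_ON = Map.of(
                "a", List.of(),
                "b", List.of("a"),
                "c", List.of("b"),
                "d", List.of("b"));

        private final Set<String> started = ConcurrentHashMap.newKeySet();
        private final Set<String> completed = ConcurrentHashMap.newKeySet();

        public void run(String action, Runnable work) {
            List<String> deps = DEPENDS_ON.getOrDefault(action, List.of());
            if (!completed.containsAll(deps)) {
                throw new IllegalStateException(action + " is blocked on " + deps);
            }
            // add() returns false if this action was already attempted for
            // this event instance, giving the at-most-once guarantee.
            if (!started.add(action)) {
                return;
            }
            work.run();
            completed.add(action);
        }
    }

A redelivered or retried event instance then becomes a no-op, rather than firing a second recovery action.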

Around the same time, inside IBM, Beth Hutchison and others at IBM Hursley, along with smart people like Steve Graham, now at EMC, and Mandy Chessell, also of IBM Hursley, were trying to solve similar transactional-type problems over HTTP and using web services.

While the Server group folks headed down the Grid, Grid Services and ultimately Web Services Resource Framework path, inside IBM we came to the same conclusion: incompatible messages, incompatible systems, different architectures, legacy systems and the like all need to interoperate, and for that you need a framework and a set of guidelines. Build this out from the infrastructure layer to the application level; add in customer applications and that framework; then scale it in any meaningful way, to more than a few programmers working concurrently on the same code or the same set of services, and what you needed was a services oriented architecture.

Now, I completely get the REST style of implementation and programming. There is no doubt that it could take over the world; from the perspective of those frantically building web mashups and cloud designs, it already has. But in none of the “SOA is dead” articles has anyone effectively discussed synchronous transactions; in fact, apart from Clive Longbottom’s piece, no real discussion was given to workflow, let alone the atomic transaction.

I’m not in denial here about what Amazon and Google are doing. Sure, both do transactions, and both were built from the ground up around a services-based architecture. Many of those who argue that “SOA is dead” are often those who want to move on to the emperor’s new clothes. However, as fast as applications are being moved to the cloud, many businesses are nowhere in sight of moving to or exploiting it. To help them get there, they’ll need to know how to do it, and for that they’ll need a roadmap, a framework and a set of guidelines, and if it includes their legacy applications and systems, a path for those too. For that, they’ll likely need more than a strategy; they’ll need a services “oriented” architecture.

So, I guess we’ve arrived at the end, the same conclusion that many others have come to. But for me it is always about context.

I have to run now, literally. My weekly long run is Sunday afternoon and my running buddy @mstoonces will show up any minute. Also, given I’m starting my new job, I’m not sure how much time I’ll have to respond to your comments, but I welcome the discussion!

Power Systems and SOA Synergy

One of the things I pushed for when I first joined Power Systems (then System p) was for the IBM Redbooks to focus more on software stacks, and to relate how Power Systems hardware can be exploited to deliver a more extensive, easier to use and more efficient hardware stack than many scale-out solutions.

Scott Vetter, ITSO Austin project lead, whom I first worked with back in probably 1992 in Poughkeepsie, and the Austin-based ITSO team, including Monte Poppe from our System Test team, who has recently been focusing on SAP configurations, have just published a new IBM Redbook.

The Redbook, Power Systems and SOA Synergy, SG24-7607, is available free for download from the redbooks abstract page here.

The book was written by systems people and will be useful to systems people. It contains a useful summary and overview of SOA applications, ESBs, WebSphere and the like, as well as some examples of how and what you can use Power Systems for, including things like WPARs in AIX.

The “L” Word

There’s an excellent analysis by Frank Dzubeck over on Network World today about the new Enterprise Data Center and that hoary old chestnut, latency. I don’t know who briefed Frank; it wasn’t me, and it wasn’t Jeff (we talked this afternoon and I asked). Since the article also covered the z10 announcement, I have a good idea though 😉

Frank covers ensembles, data center utilization and some of the new data center fabric issues extremely well. He also makes the point, which I’d like folks to be clear about, that this isn’t the resurgence of the mainframe, or everything back to a central server.

We’ve grown used to indefinite waits, or unbelievably fast response times, from certain popular websites, but the emerging problem is latency in the data center: how to deliver service levels and response times in an increasingly rich and complex systems environment. It’s one thing to build a data center or server subsystem focussed around a single business model, something like Amazon’s EC2 or S3, or Google’s search and query engines; it’s another to take a vast array of different vendors’ IT equipment, bought at different times for different business applications and services, and integrate it all together and orchestrate it as business services. While MapReduce may or may not be as good as, or better than, a database, not everything is going to be run in this fashion.

Fibre Channel over Ethernet is going to happen, and 10Gb Ethernet opens up some real options in terms of both integrating systems and distributing services. It will be almost as fast to connect to another server as it is to talk between cores and processors within the same server. This disclosure from IBM Research today shows the way to the next generation of interconnected infrastructure: at 300 Gbit/second, the bus goes optical, making the integration of rich data systems (video, VoIP, total encryption of data, key-based secure infrastructure services) with more traditional transactional systems a real possibility.

The opportunity isn’t to take the same old stuff and distribute it because the fabric is faster; it’s about better integrating systems and exploiting new ways of doing things: introducing a common event infrastructure, being more intelligent about WAN and application routing, having a publish/subscribe/consume model for the infrastructure, and genuinely opening it up and simplifying it.

Of course, there are lots of blanks to be filled in, but the new Enterprise Data Center is taking shape.

IBM’s new Enterprise Data Center vision

IBM announced today our new Enterprise Data Center vision. There are lots of links from the new ibm.com/datacenter web page, which split out into the various constituencies: Virtualization, Energy Efficiency, Security, Business Resiliency and IT Service Delivery.

To net it out from my perspective, though: there is a lot of good technology behind this, and an interesting direction, summarized nicely starting on page 10 of the POV paper linked from the new data center page, or here.

What it lays out are the three main stages of adoption for the new data center: simplified, shared and dynamic. The Clabby Analytics paper, also linked from the new data center page or here, puts the three stages in a more consumable, practical tabular format.

They are really not new; many of our customers will have discussed them with us many times before. In fact, it’s no coincidence that the new Enterprise Data Center vision was launched the same day as the new IBM z10 mainframe. We started discussing these when I worked for Enterprise Systems in 1999, and we formally laid the groundwork in the on demand strategy in 2003. In fact, I see the Clabby paper has used the on demand operating environment block architecture to illustrate the service patterns. Who’d have guessed.

Simplify: reduce costs for infrastructure, operations and management

Share: for rapid deployment of infrastructure, at any scale

Dynamic: respond to new business requests across the company and beyond

However, the new Enterprise Data Center isn’t based on a mainframe, z10 or otherwise. It’s about a style of computing: how to build, migrate and exploit a modern data center. Power Systems has some unique functions in both the Share and Dynamic stages, like partition mobility, with lots more to come.

For some further insight into the new data center vision, take a look at the presentation linked off my On a Clear day post from December.

On a clear day, can you see a cloud?

It’s not very often these days that I get to escape my bunker in IBM Austin. On December 6th I was asked to speak at the NCOIC Conference and work group in St. Petersburg, Florida.

The invitation came in a roundabout way, via Massimo Re Ferre from IBM Italy and Bob Marcus at SRI. The agenda and speakers looked interesting, so I decided to take the opportunity to go and run some of the current thinking by an influential audience. Speaking right before me was Roger Smith, CTO of the US Army PEO STRI division, who gave a fascinating talk on warfare simulation and training.

I decided to talk about the evolution of Grid, On Demand, SOA and the Blue Cloud implementation of a Service Oriented Infrastructure. We had a useful discussion on what could be done now; the net answer was pretty much all of it. You can’t buy it as a product or solution, but you can build it from either IBM or open standards/source parts now.

What’s made the difference is the ability to build around a common, composite infrastructure for management. Previously we’d tried to build and deliver this everywhere; now it’s much more focussed, a platform-by-platform implementation. Get it right in one place, then move it to another.

I’ve posted the slides on slideshare.net here and I’ve also put the PDF on wordpress for download, here.

Zelenka on Open Source ESBs

RedNun’s report on Open Source ESBs is published, and it has some useful updates and additional information over her earlier web piece, which elicited my “I’m an enterprise architect” response.

In her intro, Anne says “Lightweight open source enterprise service bus (ESB) implementations offer a low cost, scalable, and practical approach to enterprise application integration.” I’m so with that, but as always with a spin on it: an ESB would make the perfect vehicle (bus, geddit?) for system infrastructure integration.

Still much to do on standards, implementations, hardware runtimes and so on, but I still firmly believe this is the direction we should be going to implement genuinely interoperable hardware-based components: common message formats, industry schema, common messaging protocols, and one or more buses to intermediate between the components and manage pub/sub, along the lines of the toy sketch below. Dynamic, autonomic, vendor-neutral hardware; we’ll get there.
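As a toy illustration of that, in Java (every name here is invented, and a real ESB such as ServiceMix adds protocol bindings, normalization and reliability on top): a common message envelope carries a topic and a schema tag, and the bus, not the components, decides who consumes what.

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.function.Consumer;

    // Toy bus intermediating between hardware/management components:
    // a common envelope (topic plus schema-tagged body) and pub/sub routing.
    public class InfrastructureBus {

        public record Message(String topic, String schema, Map<String, String> body) {}

        private final Map<String, List<Consumer<Message>>> subscribers = new ConcurrentHashMap<>();

        public void subscribe(String topic, Consumer<Message> handler) {
            subscribers.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
        }

        public void publish(Message m) {
            // Components never address each other directly; the bus routes.
            subscribers.getOrDefault(m.topic(), List.of()).forEach(h -> h.accept(m));
        }
    }

A power supply controller could publish to a topic like hw.psu.failure using an agreed industry schema, and any vendor’s management service subscribed to that topic consumes it without knowing who built the hardware.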


My 2003-2004 book on “Virtualization and the on demand Business” spells this out a little more in Chapter 3…


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and a member of the IBM Academy of Technology. I am a Fellow of the British Computer Society (bcs.org). I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.

