Archive for the 'SOA' Category

App Internet and the FT

Walled Gardens

Former colleague Simon Phipps reports on the FT (Financial Times) move to escape the app trap that I discussed in my earlier App Internet post. Simon usefully covers a number of points I didn’t get to, so it’s worth reading the full article here on ComputerWorld (UK).

This is a great example. Judging by their web page feature list they’ve clearly done a great job, but since I don’t have an iPhone or iPad, I couldn’t try it out.

Simon makes an interesting point: the FT is incurring some risk by not being “in” the app store, and therefore doesn’t get included in searches by users looking for solutions. This is another reason why app stores are just another variation on walled gardens. Jeff Atwood has a good summary of the arguments on why walled gardens are a bad thing here. In his 2007 blog, Jeff says “we already have the world’s best public social networking tool right in front of us: it’s called the internet”, and goes on to talk about publicly accessible web services, in that instance, rather than app stores.

One of the things that never really came to pass with SOA was the idea of public directories. App stores, and their private catalogs, are directories, but they have a high price of entry, as Simon points out. What we need now, to encourage the move away from app stores, is an HTML5 app store directory. It really is little more than an online shopping catalog for bookmarks, with all the features and functions of a walled garden app store catalog except the code itself. In place of the download link would be a launch, go, or run now button or link.

We’d only need a few simple, authorized REST-based services to create, update, and delete catalog entries, not another all-encompassing UDDI effort, although it could learn from and perhaps adapt something like the UDDI Green Pages. This is way out of my space; does anyone know if there are efforts in this area? @cote? @Monkchips? @webmink?
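To show just how small that service surface could be, here is a minimal sketch of the create, update, and delete services, assuming a hypothetical directory; the routes and fields are illustrative, not taken from any real effort:

```python
# A minimal sketch of the catalog services described above, for a
# hypothetical HTML5 app directory. Routes and fields are illustrative.
# Requires Flask (pip install flask).
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

catalog = {}   # in-memory store; a real directory would persist entries
next_id = 1

@app.route("/entries", methods=["POST"])
def create_entry():
    """Create an entry: metadata plus a launch URL, no code to host."""
    global next_id
    entry = request.get_json(force=True)
    for field in ("name", "description", "launch_url"):
        if field not in entry:
            abort(400, description=f"missing field: {field}")
    entry_id, next_id = next_id, next_id + 1
    catalog[entry_id] = entry
    return jsonify({"id": entry_id, **entry}), 201

@app.route("/entries/<int:entry_id>", methods=["PUT"])
def update_entry(entry_id):
    """Replace an existing entry's metadata."""
    if entry_id not in catalog:
        abort(404)
    catalog[entry_id] = request.get_json(force=True)
    return jsonify({"id": entry_id, **catalog[entry_id]})

@app.route("/entries/<int:entry_id>", methods=["DELETE"])
def delete_entry(entry_id):
    """Remove an entry from the directory."""
    if catalog.pop(entry_id, None) is None:
        abort(404)
    return "", 204

if __name__ == "__main__":
    app.run()
```

An entry is just metadata plus a launch URL; the “store” never hosts the code, which is exactly the point.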

Senior Architect – Enterprise Systems Management and more

With things really rolling here at Dell on the software front, we are still in the process of hiring, and are looking for some key people to fit into, or lead, teams working on current and future software projects. Currently these are based with our team here in Round Rock, TX. However, I’d also like to hear from you if you’d be interested in joining our Dell west coast software labs in Sunnyvale and Palo Alto.

Here are a few of the current vacancies:

Senior SOA Architect – Enterprise Systems Management
Performance Engineer – SOA Infrastructure Management
Senior Java Developer – Systems Management
Senior Software Engineer – Systems Management-10069ZNS

Depending on how you count, there are over 100 of us now working on the VIS and AIM products, with a whole lot more to come in 2011. Come join me, help make a fundamental change at Dell, and be in on the beginning of something big!

Dell’s Virtual Integrated System

Open, Capable, Affordable - Dell VIS

Travel is always interesting; you learn so many new things. And so it was today. We arrived in Bangalore yesterday to bring two of the sprint teams in our “Maverick” design and development effort up to speed.

In an overview of the “product” and its packaging, we briefly discussed naming. I was under the impression that we’d not started publicly discussing Dell’s Virtual Integrated System (VIS), but I was wrong, as one of the team pointed out.

It turns out a Dell.com web site already has overview descriptions of three of the core VIS offerings: VIS Integration Suite, VIS Delivery Center, and VIS Deploy Infrastructure. You can read the descriptions here.

Essentially, Maverick is a services oriented infrastructure (SOI), built from modular services, pluggable components, transports, and protocols that will allow us to build various product implementations and solutions from a common management architecture. It’s an exciting departure from traditional monolithic systems management products, and from the typically un-integrated products which use different consoles and different terms for the same things, and which, to get the best out of them, require complex and often long services projects, or require you to change your business to match their management.

More jobs news

We are making great progress on filling out the teams. My 2nd pilot technology program started with a bang last week, building an embedded processor stack based on ServiceMix, and my 3rd pilot, to test some key technologies like AMQP and Cassandra, is taking shape. However, we need to backfill some of the work I’ve been doing, and the work of the consultants we’ve had on staff.

Amongst the vacancies we have open are “Senior SOA Architect – Enterprise Systems Management-1003LEFS” and “Senior Software Engineer, Systems Management-10069ZNS”.

A good place to get a list of Dell jobs in Round Rock is here on the cnn.com website. If you are interested in working with some of our recent acquisitions out on the west coast, including Scalent (Dell AIM) or KACE, check out this link.

Cote on Consumer to Enterprise

REST Interface slide from Cote presentation

Over on his People Over Process blog, Redmonk analyst Michael Cote has a great idea: a rehearsal of an upcoming presentation, including slides and audio.

The presentation covers what technology is making the jump from the consumer side of applications and IT into the enterprise. I’m delighted to report Cote has used a quote from me on REST.

For clarification, the work we are doing isn’t directly related to our PowerEdge C servers, or our cloud services. For that, Dell customer Rackspace Cloud has some good REST APIs and is well ahead of us; in fact, I read a lot of their documentation while working on our stuff.

On the other hand, I’m adamant that the work we are doing adding a REST-like set of interfaces to our embedded systems management is not adding REST APIs. And while I did contribute requirements and participate in discussions around WS-* back when I was at IBM, I’d say we were trying to solve an entirely different set of problems then; now REST is the right answer for externalizing the data needed for a web-based UI.

At the same time, we will also continue to offer a complete implementation of WS-Management (WSMAN). WSMAN is a valuable tool for externalizing the complexity of a server so that it can be managed by an external console or control point. Dell provides the Dell Management Console (DMC), which consumes WSMAN and provides one-to-many server management.

The point of the REST interfaces is to provide a simple way to get the data needed for display in a Web UI; we don’t need to expose all the same data, and can use a much more lightweight infrastructure to process it. At the same time, it’s the objective of this project to keep the UI simple for one-to-one management. Customers who want a more complex management platform will be able to use DMC, or exploit the WSMAN availability.
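To make the distinction concrete, here is a minimal sketch of the style of interface I mean, using only Python’s standard library; the resource path and data are purely illustrative, not Dell’s actual embedded interfaces:

```python
# A minimal sketch of a REST-like, read-mostly interface: a lightweight
# endpoint exposing just the data a one-to-one web UI needs. The path and
# fields are illustrative, not an actual embedded management interface.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative snapshot of the embedded management data a UI might render.
SYSTEM_STATUS = {
    "hostname": "server-01",
    "power_state": "on",
    "ambient_temp_c": 24.5,
    "fans": [{"id": "fan0", "rpm": 4200}, {"id": "fan1", "rpm": 4150}],
}

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            body = json.dumps(SYSTEM_STATUS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), StatusHandler).serve_forever()
```

Contrast that with a full WSMAN stack: no SOAP envelopes, no WSDL, just the handful of values the UI actually displays.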

EMC World – standards?

Tucci and Maritz at EMC World 2009

I’ve been attending the annual EMC World conference in Orlando this week. A few early comments: there has been a massive 64,000ft shift to cloud computing in the messaging, but less so at ground level. There have been one or two technical sessions, but none on how to implement a cloud, how to put data in a cloud, or how to manage data in a cloud. Maybe next year?

Yesterday in the keynote, Paul Maritz, President and CEO of VMware, said that VMware is no longer in the business of individual hypervisors but of stitching together an entire infrastructure, a single sentence laying out clearly where they are headed, if it wasn’t clear before. In his keynote this morning, Mark Lewis, President, Content Management and Archiving Division, was equally clear about the future of information virtualization, talking very specifically about federation and distributed data, with policy management. He compared that to a consolidated, centralized vision which, he clearly said, hadn’t worked. I liked Lewis’s vision for EMC Documentum xCelerated Composition Platform (xCP) as a next generation information platform.

However, so far this week there has been little real mention of standards or openness, especially in this afternoon’s “Managing the Virtualized Data Center” BOF, where I asked the first and last questions on standards and neither got a decent discussion.

Generally, while vendors like to claim standards compliance and involvement, they don’t like standards: historically, they tend to slow down implementation. This hasn’t been the case with some of the newer technologies, but at least some level of openness is vital to allow fair competition, and competition almost always drives down end user costs.

Standards are of course not required if you can depend on a single vendor to implement everything you need, as you need it. However, as we’ve seen time and time again, that just doesn’t work: something gets left out, doesn’t get done, or gets a low priority from the implementing vendor while being a high priority for you – stalemate.

I’ll give you an example. You are getting recoverable errors on a disk drive. Maybe it’s directly attached, maybe it’s part of a SAN or NAS. If you run multiple vendors’ servers and/or storage and virtualization, who is going to standardize the error reporting, logging, alerting, etc.?

The vendors will give you one of a few canned answers:

1. It’s the hardware vendor’s job (i.e. they pass the buck).
2. They’ll build agents that can monitor this for the most popular storage systems (i.e. you are dependent on them, and they’ll do it for their own storage/disks first).
3. They’ll build a common interface through which they can consume the events (i.e. you are dependent on the virtualization vendor AND the hardware vendor to cooperate).
4. They are about managing across the infrastructure for servers, storage, and network (i.e. they are dodging the question).
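To see what’s at stake, here is a minimal sketch of what a vendor-neutral error event could look like; every field name is illustrative, and real standards work (DMTF’s CIM indications, for example) defines far richer models:

```python
# A minimal sketch of a vendor-neutral disk-error event. All fields are
# illustrative; the point is that each vendor currently reports this same
# information in its own format, console, and vocabulary.
from dataclasses import asdict, dataclass
import json
import time

@dataclass
class DiskErrorEvent:
    source_vendor: str   # who reported it: server, SAN, or NAS vendor
    device_id: str       # stable identifier for the affected drive
    severity: str        # "recoverable" | "degraded" | "failed"
    error_code: str      # vendor code mapped to a common taxonomy
    timestamp: float     # seconds since epoch, UTC
    recoverable: bool

event = DiskErrorEvent(
    source_vendor="ExampleSAN",
    device_id="enclosure3/slot12",
    severity="recoverable",
    error_code="MEDIA_RETRY",
    timestamp=time.time(),
    recoverable=True,
)

# Any console or control point could consume this without per-vendor agents.
print(json.dumps(asdict(event), indent=2))
```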

There are literally hundreds of examples like this if you need anything except a dedicated, single-vendor stack of hardware plus virtualization. That seems to be where Cisco and Oracle are lining up. I don’t think this is a fruitful direction and can’t really see it as advantageous to customers or vendors. That leaves aside cloud (Google, Amazon, et al.), where you don’t deal with hardware at all but have a whole separate set of issues, and where standards and openness are equally important.

In an early morning session today, Tom Maguire, Senior Director of Technology Architecture, Office of the CTO, spoke on EMC’s Service-Oriented Infrastructure Strategy: Providing Services, Policies, and Architecture Models. Tom talked about loose coupling, and about defining stateful and REST interfaces that would allow EMC to build products that “snap” together and don’t require a services engagement to integrate them. He also talked about moving away from “everyone discovering what they need” to a common, federated fabric.

This is almost as powerful a message as that of Lewis or Maritz, but it will get little or no coverage. If EMC can deliver and execute on this, and do it in a de jure or de facto published standard way, it will indeed give them a powerful platform that companies like Dell can partner on, bringing innovation and competitive advantage to our customers.

Oh, now it’s legacy IT that’s dead. Huh?

I got a pingback from Dana Gardner’s ZDNet blog for my “Is SOA dead?” post. Dana, rather than addressing the issue I raised yesterday, just moved the goalposts, claiming “Legacy IT is dead“.

I agree with many of his comments, which follow on from my post “Life is visceral“, whose point Dana so ably goes on to prove. I liked some of the fine flowing language, some of it almost poetic, especially this: “We need to stop thinking of IT as an attached appendage of each and every specific and isolated enterprise. Yep, 2000 fully operational and massive appendages for the Global 2000. All costly, hugely redundant, unique largely only in how complex and costly they are all on their own.” – whatever that means.

However, consider a reasonable challenge for anyone contemplating the jump to a complete services or cloud/services model: not migrating, not having a roadmap or architecture to get there, but, as Dana suggests, grasping the nettle and just doing it.

One of the simplest and easiest examples I’ve given before for why, as Dana would have it, “legacy systems” exist, is that there are some problems that just can NOT be split apart a thousand times, whose data can NOT be replicated into a million pieces.

Let’s agree: Google handles millions of queries per second, as do eBay and Amazon, well, probably. However, when the odd Google query comes back with nothing, as opposed to returning no results, no one really cares or understands; they just click the browser refresh button and wait. It’s pretty much the same for Amazon: the product is there, you click buy, and if every now and again there was only one item left at an Amazon storefront and someone else bought it between the time you looked for it and the time you decided to buy, you just suffer through the email telling you the item will be back in stock in 5 days. After all, it would take longer than that to track down someone to discuss it with.

If you ran your banking or credit card systems this way, no one would much care when it came to queries. Sure: your partner is out shopping, and you are home working on your investments. Your partner goes to get some cash, checks the balance, and the money is there. You want to transfer a large amount into a money market account; you check, and the amount is just there. You’ll transfer some more into the account overnight from your savings and loan, and you know your partner only ever uses credit, right? You both proceed. A real transactional system lets one of you proceed and fails the other, even if there is only a second, and possibly less, between your transactions coming in.

In the Google model this doesn’t matter; it’s all only queries. If your partner does a balance check a second or more after you’ve done a transfer and sees the wrong balance, it will only matter when they are embarrassed 20 seconds later, trying to use a balance that isn’t there anymore.
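For those who like it spelled out, here is a minimal sketch of the transactional behaviour, using SQLite and made-up amounts; the guarded, atomic update is what lets exactly one of two competing withdrawals succeed:

```python
# A minimal sketch of serialized, transactional debits using SQLite.
# Account names and amounts are made up; the point is that a guarded,
# atomic update lets exactly one of two competing withdrawals succeed.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
db.execute("INSERT INTO accounts VALUES ('joint', 1000)")
db.commit()

def withdraw(conn, account, amount):
    """Debit the account only if funds remain; return True on success."""
    cur = conn.execute(
        "UPDATE accounts SET balance = balance - ? "
        "WHERE id = ? AND balance >= ?",
        (amount, account, amount),
    )
    conn.commit()
    return cur.rowcount == 1  # zero rows updated means the guard failed

# You and your partner both go for the same money at almost the same
# instant; the database serializes the updates, so one proceeds, one fails.
print(withdraw(db, "joint", 800))  # True  - the first transaction wins
print(withdraw(db, "joint", 800))  # False - the balance guard now fails
```

In the eventually consistent, query-only model there is no such guard; both requests would appear to succeed, and the discrepancy would only surface later.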

Of course, you can argue banks don’t work like that; they reconcile balances at the end of the day. You’ll care, though, when that exception balance charge kicks in because both transactions went through. Most banks’ systems are legacy systems from a different perspective, and should be dead. We, as customers, have been pushing for straight-through processing for years; why should I wait 3 days for a check to clear?

So you can’t have it both ways. Out of genuine professional understanding and interest, I’d like to see any genuine transaction-based systems that are largely or wholly services based, or that run in the cloud.

To do what Dana advocates and move off ALL legacy systems, those transaction systems need to cope with 1,000, and up to 2,000, transactions per second. Oh yeah, it’s not just banks that use “legacy IT”; there are airlines, travel companies, anywhere there is a finite product and an almost infinite number of customers.

Remember, Amazon, eBay, and PayPal don’t do their own credit card processing, as far as I’m aware; they are just merchants who pass the transaction on to a, err, legacy system.

Some background reading should include a paper I used early in my career. Around the time I was advocating moving Chemical Bank NY’s larger transaction systems to virtual machines, which we did, I was attending VM Internals education at Amdahl in Columbia, MD, and one of the instructors thought I might find the paper useful.

It was written in 1984 by a team at Tandem Computers and Amdahl, including the late, great Jim Gray. Early on in the paper they describe environments that supported 800 transactions per second. Yes, in 1984. These days, even in the current economic environment, 1,000tps is common, and 2,000tps is table stakes.

Their paper is preserved and online here on microsoft.com.

And finally, since I’m all about context: I’m an employee of Dell; I started work there today. What is written here is my opinion, based on 34 years of IT experience, much of it gained at the sharp end, designing an I/O subsystem to support a large NY bank’s transactional inter-bank transfer system and being responsible for the world’s first virtualized credit card authorization system, etc. But I didn’t work for Dell, or for that matter IBM, then.

Speakers’ corner, anyone?

Life is visceral

After I posted my “Is SOA Dead?” entry, Joel Zimmerman, aka Deadmau5, reminded me of this speech by VP Spiro Agnew, given just down the road in Houston, Texas, on 22 May 1970, where Agnew said in response to the Vietnam War riots: “Subtlety is lost, and fine distinctions based on acute reasoning are carelessly ignored in a headlong jump to a predetermined conclusion.”

“Life is visceral rather than intellectual. And the most visceral practitioners of life are those who characterize themselves as intellectuals. Truth is to them revealed rather than logically proved. And the principal infatuations of today revolve around the social sciences, those subjects which can accommodate any opinion, and about which the most reckless conjecture cannot be discredited. Education is being redefined at the demand of the uneducated to suit the ideas of the uneducated.”

You can read the full text here or you can listen to it here. Why don’t people have diction like that anymore? Why have I taken to spelling everything the American way? And why are there no riots anymore over outrageous government actions?

Is SOA dead?

There has been a lot of fuss since the start of the new year around the theme “SOA is dead”. Much of this has been attributed to Anne Thomas Manes’ blog entry on the Burton Group’s blog, here.

InfoWorld’s Paul Krill jumped on the bandwagon with an SOA obituary, quoting Anne’s work and saying “SOA is dead but services will live on”. A quick-fire response came on a number of fronts, like this one from Duane Nickull at Adobe, and then this from James Governor at Redmonk, where he charismatically claims “everything is dead”.

First up: I’ve been through this many times in my career, and James touches on a few of the key ones, since we were there together – or rather, I took advantage of his newness and thirst for knowledge as a junior reporter to explain to him how mainframes worked, and what the software could be made to do. I knew from 10 years before I met James that evangelists, and those with an agenda, would often claim something was “dead”. It came from the early 1980s mainframe “wars” – yes, before there was a PC, we were having our own internal battles: this was dead, that was dead, etc.

What I learned from that experience is that technical people form crowds. Just like at the public hangings of the middle ages, they are all too quick to stand around and shout “hang him”. These days it’s a bit more complex: first off there’s Slashdot, then we have the modern equivalent of speakers’ corner, aka blogs, where those who shout loudest and most frequently often get heard most. However, what most people want is not a one-sided rant, but to understand the issues. Claiming anything is dead often gives the claimer the right not to understand the thing that is supposedly “dead”, but to just give reasons why that must be so and move on to advice on what you should do instead. It was a similar debate last year that motivated me to document my “evangelism” years on the about page of my blog.

The first time I heard “SOA is dead” wasn’t Anne’s blog; it wasn’t even John Willis, aka botchagalupe on Twitter, who claims in his Cloud Drop #38 that it was him and Michael Cote of Redmonk last year. No sir, it was back in June 2007, when theregister.co.uk reprinted a piece by Clive Longbottom, Head of Research at Quocirca, under the headline “SOA – Dead or Alive?”

Clive got closest to the real reasons why SOA came about, in my opinion, and thus why SOA will prevail despite rumours of its demise. From my perspective it is not just about services; it is about truly transactional services, which are often part of a workflow process.

Not that I’m about to claim that IBM invented SOA, or that my role in either the IBM SWG SOA initiative or the IBM STG services initiative was anything other than as a team player rather than a lead. However, I did spend much of 2003/4 working across both divisions, trying to explain the differences and similarities between the two, and why one needed the other, or at least a relationship with it. And then IBM marketed the heck out of SOA.

One of the things we wanted to do was to unite the different server types around a common messaging and event architecture. There was almost no requirement for this to be synchronous, and a lot of reasons for it to be services based. Many of us had just come from the evolution of object technology inside IBM and from working on making Java efficient within our servers. Thus a services-based approach seemed, for many reasons, the best one.

However, when you looked at the types of messages and events that would be sent between systems, many of them could be crucial to the effective and efficient running of the infrastructure; they had, in effect, transactional characteristics. That is, a given event-a could initiate actions a, then b, then c, and finally d. While action-d could be started before action-c, it couldn’t be started until action-b was completed, and that in turn depended on action-a. Importantly, none of these actions should be performed more than once for each instance of an event.

Think of the failure of a database or transactional server: create a new virtual server, boot the OS, start the application/database server, roll back incomplete transactions, take over the network, etc. Or similar.
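Here is a minimal sketch of those semantics, with the recovery steps as the actions; everything is illustrative, and a real implementation would persist the processed set so that recovery itself survives a failure:

```python
# A minimal sketch of the event semantics described above: actions with
# dependencies (b needs a; c and d both need b) that must each run exactly
# once per event instance. The recovery steps are illustrative only.
processed = set()  # (event_id, action) pairs already performed

# action -> actions that must complete first
DEPENDS_ON = {"a": [], "b": ["a"], "c": ["b"], "d": ["b"]}

ACTIONS = {
    "a": lambda: print("create new virtual server"),
    "b": lambda: print("boot OS, start database server"),
    "c": lambda: print("roll back incomplete transactions"),
    "d": lambda: print("take over network identity"),
}

def run_action(event_id, name, done):
    """Run an action at most once per event, only after its dependencies."""
    key = (event_id, name)
    if key in processed:   # exactly-once: duplicate deliveries are no-ops
        return
    if not all(dep in done for dep in DEPENDS_ON[name]):
        raise RuntimeError(f"{name} started before its dependencies finished")
    ACTIONS[name]()
    processed.add(key)
    done.add(name)

done = set()
for step in ("a", "b", "d", "c"):   # d may legitimately start before c
    run_action("event-1", step, done)
run_action("event-1", "c", done)    # a redelivery is silently ignored
```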

Around the same time inside IBM, Beth Hutchison and others at IBM Hursley, along with smart people like Steve Graham, now at EMC, and Mandy Chessell, also of IBM Hursley, were trying to solve similar transactional-type problems over HTTP and using web services.

While the server group folks headed down the path of Grid, Grid Services, and ultimately the Web Services Resource Framework, inside IBM we came to the same conclusion: incompatible messages, incompatible systems, different architectures, legacy systems, etc. need to interoperate, and for that you need a framework and a set of guidelines. Build this out from the infrastructure layer to the application level; add in customer applications and that framework; then scale it in any meaningful way, one that needs more than a few programmers working concurrently on the same code or the same set of services, and what you needed was a services oriented architecture.

Now, I completely get the REST style of implementation and programming. There is no doubt that it could take over the world; from the perspective of those frantically building web mashups and cloud designs, it already has. But in none of the “SOA is dead” articles has anyone effectively discussed synchronous transactions; in fact, apart from Clive Longbottom’s piece, no real discussion was given to workflow, let alone the atomic transaction.

I’m not in denial here about what Amazon and Google are doing. Sure, both do transactions, and both were built from the ground up around a services-based architecture. But many of those who argue that “SOA is dead” are often those who want to move on to the emperor’s new clothes. However fast applications are being moved to the cloud, many businesses are nowhere in sight of moving to or exploiting it. To help them get there, they’ll need to know how to do it, and for that they’ll need a roadmap, a framework, and a set of guidelines; and if that includes their legacy applications and systems, and how those get there, they’ll likely need more than a strategy: they’ll need a services “oriented” architecture.

So, I guess we’ve arrived at the end, the same conclusion that many others have come to. But for me it is always about context.

I have to run now, literally. My weekly long run is Sunday afternoon and my running buddy @mstoonces will show up any minute. Also, given I’m starting my new job, I’m not sure how much time I’ll have to respond to your comments, but I welcome the discussion!

Power Systems and SOA Synergy

One of the things I pushed for when I first joined Power Systems (then System p) was for the IBM Redbooks to focus more on software stacks, and to relate how the Power Systems hardware can be exploited to deliver a more extensive, easier to use, and more efficient stack than many scale-out solutions.

Scott Vetter, ITSO Austin project lead, whom I first worked with back in probably 1992 in Poughkeepsie, and the Austin-based ITSO team, including Monte Poppe from our System Test team, who has recently been focusing on SAP configurations, have just published a new IBM Redbook.

The Redbook, Power Systems and SOA Synergy, SG24-7607, is available free for download from the redbooks abstract page here.

The book was written by systems people, and will be useful to systems people. It contains a useful summary and overview of SOA applications, ESBs, WebSphere, etc., as well as some examples of how and what you can use Power Systems for, including things like WPARs in AIX.

IBM Software and Power Systems Roadshow

In September and October 2007, the IBM Software Group Competitive Project Office put on a short series of roadshows in North America and India to show some of the best aspects of IBM Middleware running on Power Systems. It’s not an out-and-out marketing event, but one designed and presented by some solid technical folks.

They’ve announced the first set of dates for 2008, and the events start next week. Strangely, the workshop is listed on the Software/Linux web page, but it definitely covers AIX and Linux implementations. Here are the dates and locations; I hope some of you new to Power, or interested in IBM Middleware exploitation on Power, can make it along.

Tampa, FL February 21, 2008
Charlotte, NC February 26, 2008
Philadelphia, PA February 28, 2008
Mohegan Sun, CT March 6, 2008
Hazelwood, MO March 11, 2008
Minneapolis, MN March 13, 2008

On a clear day, can you see a cloud?

It’s not very often these days that I get to escape my bunker at IBM Austin. On December 6th I was asked to speak at the NCOIC Conference and working group in St Petersburg, Florida.

The invitation came in a roundabout way, via Massimo Re Ferre from IBM Italy and Bob Marcus at SRI. The agenda and speakers looked interesting, and so I decided to take the opportunity to go and run some of the current thinking by an influential audience. Speaking right before me was Roger Smith, CTO of the US Army PEO STRI division, who gave a fascinating talk on warfare simulation and training.

I decided to talk about the evolution of Grid, On Demand, SOA, and the Blue Cloud implementation of a Service Oriented Infrastructure. We had a useful discussion on what could be done now; the net answer: pretty much all of it. You can’t buy it as a product or solution, but you can build it from either IBM or open standards/source parts now.

What’s made the difference is the ability to build around a common, composite infrastructure for management. Previously we’d tried to build and deliver this everywhere; now it’s much more focused on platform-by-platform implementation. Get it right in one place, then move it to another.

I’ve posted the slides on slideshare.net here and I’ve also put the PDF on wordpress for download, here.

SOA Entry – point by point

Colin Renouf, from Lloyds TSB bank in London and one of the more active and vocal AIX Technical Collaboration Center members, just wrote me an email with a proposal for a joint work effort on patterns for SOA. It’s a great idea.

While we are fleshing that out, I thought I’d highlight the fact that Steve and Tommy, with John’s project management, have been solidly delivering on the System p configurations for SOA Entry Points.

There are currently five papers and an overview in the series. You can find the launch page here. The papers are:

Process:

IBM System p Planning & Configuration Guide for SOA Entry Point — Process
IBM System p Reference Architecture for SOA Entry Point — Process

People:

IBM System p Planning and Configuration Guide for SOA Entry Point — People
IBM System p Reference Architecture for SOA Entry Point — People

Reuse:

IBM System p Reference Architecture for SOA Entry Point — Reuse

More on complexity, configurability

One of my first posts on this blog was on the subject of complexity. James Governor of Redmonk weighed in today on complexity with a trackback post called “What SOA needs to learn from Ruby On Rails“.

I noted that while our software, and often our systems, are complex, that is because our customers are, not because we design them to be complex. Our customers run a vast array of machines, in widely different environments, supporting a broad range of applications. Of course, this is chicken and egg, and a difficult tightrope for established solutions to walk. We could just remove most of the configuration options, and in a generation or two the complexity would be gone, but what about the customers?

Forced into a straitjacket of “our way or the highway”, would you take the latter?

It’s easy for the new kid, in this case Ruby on Rails, to come out and offer little or no configuration options, side files, etc. It doesn’t have to; it has never had to make a significant change in what it does or how it does it. The same isn’t true for the old-timers. Comparing SOA to Ruby is like comparing a transport system to a footpath.

It is a subject important to me, though. At the moment I’m carefully trying to marshal the merger of the function in the System p Hardware Management Console with that of IBM Systems Director and the Director console. My desire is to make one simple management platform that acts as the local platform director, managing configuration, hardware, service management, etc., while at the same time providing a set of programmable, services-based interfaces for both remote access and remote management.

So, I’m all for simplicity, but it has to be thought through. We are doing this with the System p Configurations for SOA Entry Points. The original SOA Entry Points were pure software plays divided into five categories: People, Process, Information, Connectivity, and Reuse. We are taking the entry points one step further, mapping the software onto System p and removing another layer of complexity by showing how they work, showing how you can configure them, and testing them as a total solution.

You can read the System p Configurations for SOA Entry Points overview here, via FTP.

John Lennon once sang “It’s been too long since we took the time, No-one’s to blame, I know time flies so quickly” … “It’ll be just like starting over, starting over”.

System p Entry Points for SOA

Well, the wagon has wheels: one of the first visible results of the work I’ve been involved in on System p was announced last week via press release.

The “System p Configurations for SOA Entry Points” are a collection of reference architectures; installation, system setup, and configuration guides; certification of the software stack on System p; common integration patterns; best practices for problem prevention; role-specific documentation; answers to common operational questions; and appropriate customer-use cases. [BonusPak anyone?]

For me, the benefit of a virtualised infrastructure to SOA and web services always seemed obvious, and not just by virtualising at the middleware layer.


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and member of the IBM Academy of Technology. I am a Fellow of the British Computer Society (bcs.org). I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
