Archive for the 'web services' Category

Serverless computing

I’ve been watching and reading about developments around serverless computing. I’ve never used it myself, so I have only a limited understanding. However, given my extensive knowledge of servers, firmware, operating systems, middleware and business applications, I’ve had a bunch of questions.


Many of my questions are echoed in this excellent write-up by Jeremy Daly on the recent Serverless NYC event.

For traditional enterprise-type customers, it’s well worth reviewing the notes on the issues highlighted by Jason Katzer, Director of Software Engineering at Capital One. Some attendees talk about “upwards of a BILLION transactions per month” using serverless. That’s impressive, but it’s still short of many enterprise requirements: it translates to roughly 34.5 million transactions per day, an average of only about 400 transactions per second.

Katzer notes that there are always bottlenecks and often services that don’t scale the same way that your serverless apps do. Worth a read, thanks for posting Jeremy.

APIs and Mainframes


I like to try to read as many American Banker tech articles as I can. Since I no longer work, I chose not to take out a subscription, so some I can read, while others are behind their paywall.

This one caught my eye, as it’s exactly what we did circa 1998/99 at National Westminster Bank (NatWest) in the UK. The project was part of the rollout of a browser-based Intranet banking application, as a proof of concept, to be followed by a full-blown Internet banking application. Previously, both Microsoft and Sun had tackled the project and failed. Microsoft had scalability and reliability problems, and from memory, Sun just pushed too hard to move key components of the system to its servers, which in effect killed their attempt.

The key to any system design and architecture is being clear about what you are trying to achieve, and what the business needs to do. Yes, you need a forward-looking API definition, one that can accept new business opportunities, and one that can grow with the business and the market. This is where old mainframe applications often failed.

Back in the 1960s, applications were written to meet specific and stringent tasks; performance was key. Subsecond response times were almost always the norm, as there would be hundreds or thousands of staff dependent on them for their jobs. The fact that many of those applications have survived to this day, most still on the same mainframe platform, is a tribute to their original design.

When looking at exploiting them from the web, if you let “imagineers” run away with what they “might” want, you’ll fail. You have to start by exposing the transactions and database as a set of core services based on the first application that will use them. Define your API structure to allow for growth and further exploitation. That’s what we successfully did for NatWest. The project rolled out on the internal IP network and, a year later, to the public via the Internet.

Of course we didn’t just expose the existing transactions, and yes, firewall, dispatching and other “normal” services that are part of an Internet service were provided off platform. However, the core database and transaction monitor were behind a mainframe-based web server, which was “logically” firewalled from the production systems via an MPI that defined the API and also routed requests.

So I read through the article to try to understand what it was that Shamir Karkal, the source for Barba’s article, felt was the issue. Starting at the section “Will the legacy systems issue affect the industry’s ability to adopt an open API structure?”, which began with a history lesson, I just didn’t find it.

The article wanders between a discussion of the apparent lack of a “service bus” style implementation, and the ability of Amazon to sell AWS and rapidly change the API to meet the needs of its users.

The only real technology discussion in the article that I found had any merit was where they talked about screen scraping. I guess I can’t argue with that, but surely we must be beyond that now? Do banks really still have applications that are bound to their green-screen/3270 UI? That seems so 1996.

A much more interesting report is this one on more general open bank APIs, especially since it takes the UK as a model and reflects on how poor US banking is by comparison. I’ll be posting a summary of my ongoing frustrations with the ACH over on my personal blog sometime in the next few days. The key technology point here is that there is no way to have a realtime bank API, open, mainframe or otherwise, if the ACH system won’t process it. That’s America’s real problem.

Woe are apps

As a follow-on to my recent app post, a couple of interesting updates. First up, marketplace.org ran an interesting piece on apps on June 9th. Sabri Ben-Achour covered the Apple iTunes announcement by saying:

  • It’s hard for app developers to get noticed (that’s a “no shit, Sherlock” moment)
  • It’s hard to make money (that’s NSS #2)
  • There are 1.6 million apps on the Apple store, the search function isn’t that great
  • There have been 75 billion app downloads, but the average user downloads zero apps per month.

Apple’s answer? Paid promotion within the iTunes store. Of course, if apps didn’t exist and companies and developers were using the power of mobile through the web, CSS, etc., their sites would be found in the context of content and SEO. They could focus their efforts in a single way to promote their content and the web UI to access it.

Also new, to me: I went to use Skype to contact one of my kids in Europe the other day and was surprised, and more than a little disappointed, to find the Skype app was no longer working and no longer available. It’s not clear if this was a business decision or a technology one. The app was the only one I ever used on the Samsung SmartTV that used the camera. Yeah, I know, I should have taped over the camera.

That’s the problem with apps: you wait for ages for a platform that makes sense, and then two or more come along at the same time. You’d better hope you pick the right one. There are some 137 pages on a single thread on the Skype Community forums debating whether Skype or Samsung was the wrong platform.

Apps

App Internet and the FT

[Image: various walled gardens, captioned “Walled Gardens”]

Former colleague Simon Phipps reports on the FT (Financial Times) move to escape the app trap that I discussed in my earlier App Internet post. Simon usefully covers a number of points I didn’t get to, so it’s worth reading the full article here on ComputerWorld (UK).

This is a great example; they’ve clearly done a great job based on their web page feature list, but since I don’t have an iPhone or iPad, I couldn’t try it out.

Simon makes an interesting point: the FT is incurring some risk in that it is not “in” the app store, and therefore doesn’t get included in searches by users looking for solutions. This is another reason why app stores are just another variation of walled gardens. Jeff Atwood has a good summary of the arguments on why walled gardens are a bad thing here. In Jeff’s 2007 post, he says “we already have the world’s best public social networking tool right in front of us: it’s called the internet”, and goes on to talk about publicly accessible web services, in that instance, rather than app stores.

One of the things that never really came to pass with SOA was the idea of public directories. App stores, and their private catalogs, are directories, but they have a high price of entry, as Simon points out. What we need now to encourage the move away from app stores is an HTML5 app store directory. It really is little more than an online shopping catalog for bookmarks, but it includes all the features and functions of walled-garden app store catalogs, the only exception being the code itself. In place of the download link would be a launch, go, or run-now button or link.

We’d only need a few simple, authorized REST-based services to create, update, and delete catalog entries, not another all-encompassing UDDI effort, although it could learn from and perhaps adapt something like the UDDI Green Pages. This is way out of my space; anyone know if there are efforts in this area? @cote? @Monkchips? @webmink?
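To make that concrete, here’s a minimal sketch of such a catalog service, written in Python with Flask purely for illustration; the resource name, fields, and in-memory store are my assumptions, not any proposed standard, and a real directory would add the authorization mentioned above.

```python
# A hypothetical HTML5 "app store directory": nothing but CRUD over
# bookmark-style catalog entries. The launch_url replaces the download
# link; the code itself stays on the web. Field names are invented.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
catalog = {}   # entry_id -> entry; a real service would use a database
next_id = 1

@app.route("/entries", methods=["POST"])
def create_entry():
    global next_id
    entry = request.get_json()
    for field in ("name", "description", "launch_url"):
        if field not in entry:
            abort(400, description=f"missing field: {field}")
    entry_id, next_id = next_id, next_id + 1
    catalog[entry_id] = entry
    return jsonify(id=entry_id), 201

@app.route("/entries/<int:entry_id>", methods=["GET"])
def read_entry(entry_id):
    if entry_id not in catalog:
        abort(404)
    return jsonify(catalog[entry_id])

@app.route("/entries/<int:entry_id>", methods=["PUT"])
def update_entry(entry_id):
    if entry_id not in catalog:
        abort(404)
    catalog[entry_id].update(request.get_json())
    return jsonify(catalog[entry_id])

@app.route("/entries/<int:entry_id>", methods=["DELETE"])
def delete_entry(entry_id):
    catalog.pop(entry_id, None)
    return "", 204

if __name__ == "__main__":
    app.run()
```

The point is how little there is to it: the “store” never hosts code, it just catalogs launch links, which is why a handful of REST verbs cover the whole walled-garden feature set.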

How do you find features available in WSMAN?

Chris Poblete has published the second in his series on how to use WSMAN with our PowerEdge 11G servers; it can be found on Dell TechCenter, here.

In the second post, Chris shows how to use the openwsman CLI tool on Linux to enumerate the profile registration classes (CIM_RegisteredProfile) and find out what features are available. His first post, an introduction, can be found here.
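If you prefer to see what’s happening on the wire, the same enumeration is just a WS-Enumeration SOAP request POSTed to the management endpoint. Here’s a rough Python sketch; the host, credentials, and endpoint path are placeholder assumptions, and the envelope is the standard WS-Management/WS-Enumeration form rather than anything Dell-specific, so treat it as illustrative, not tested.

```python
# Rough sketch: enumerate CIM_RegisteredProfile over WS-Management by
# hand-building the WS-Enumeration SOAP envelope. In practice the
# openwsman CLI (or winrm) builds all of this for you.
import uuid
import requests

ENDPOINT = "https://192.168.0.120/wsman"   # placeholder management endpoint
RESOURCE = ("http://schemas.dmtf.org/wbem/wscim/1/"
            "cim-schema/2/CIM_RegisteredProfile")

envelope = f"""<s:Envelope
    xmlns:s="http://www.w3.org/2003/05/soap-envelope"
    xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing"
    xmlns:w="http://schemas.dmtf.org/wbem/wsman/1/wsman.xsd"
    xmlns:n="http://schemas.xmlsoap.org/ws/2004/09/enumeration">
  <s:Header>
    <a:To>{ENDPOINT}</a:To>
    <w:ResourceURI s:mustUnderstand="true">{RESOURCE}</w:ResourceURI>
    <a:Action s:mustUnderstand="true">http://schemas.xmlsoap.org/ws/2004/09/enumeration/Enumerate</a:Action>
    <a:MessageID>uuid:{uuid.uuid4()}</a:MessageID>
    <a:ReplyTo>
      <a:Address>http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</a:Address>
    </a:ReplyTo>
  </s:Header>
  <s:Body><n:Enumerate/></s:Body>
</s:Envelope>"""

resp = requests.post(
    ENDPOINT, data=envelope, auth=("root", "calvin"),  # placeholder creds
    headers={"Content-Type": "application/soap+xml;charset=UTF-8"},
    verify=False)  # lab use only; management cards often ship self-signed certs
resp.raise_for_status()
# The response carries an EnumerationContext; follow up with Pull
# requests to page through the registered profiles.
print(resp.text)
```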

Oh, now it’s legacy IT that’s dead. Huh?

I got a pingback from Dana Gardner’s ZDNet blog for my “Is SOA dead?” post. Dana, rather than addressing the issue I raised yesterday, just moved the goalposts, claiming “Legacy IT is dead“.

I agree with many of his comments, which come after my post “Life is visceral“, a point Dana so ably goes on to prove with his own. I liked some of the fine, flowing language, some of it almost poetic, especially this: “We need to stop thinking of IT as an attached appendage of each and every specific and isolated enterprise. Yep, 2000 fully operational and massive appendages for the Global 2000. All costly, hugely redundant, unique largely only in how complex and costly they are all on their own.” – whatever that means?

However, here’s a reasonable challenge for anyone considering jumping to a complete services or cloud/services model: not migrating, not having a roadmap or architecture to get there, but, as Dana suggests, grasping the nettle and just doing it.

One of the simplest and easiest examples I’ve given before for why I suspect, as Dana would have it, “legacy systems” exist, is because there are some problems that just can NOT be split apart a thousand times, whose data can NOT be replicated into a million pieces.

Let’s agree: Google handles millions of queries per second, as do eBay and Amazon, well, probably. However, in the case of the odd Google query not returning anything, as opposed to returning no results, no one really cares or understands; they just click the browser refresh button and wait. It’s pretty much the same for Amazon: the product is there, you click buy, and if every now and again there was only one item of a product left at an Amazon storefront and someone else bought it between the time you looked for it and decided to buy, you just suffer through the email that the item will be back in stock in five days; after all, it would take longer than that to track down someone to discuss it with.

If you ran your banking or credit card systems this way, no one would much care when it came to queries. Sure, your partner is out shopping while you are home working on your investments. Your partner goes to get some cash, checks the balance, and the money is there. You want to transfer a large amount of money into a money market account; you check, and the amount is just there. You’ll transfer some more into the account overnight from your savings and loan, and you know your partner only ever uses credit, right? You both proceed. A real transactional system lets one of you proceed and the other fails, even if there is only a second, possibly less, between your transactions coming in.

In the Google model this doesn’t matter; it’s all only queries. If your partner does a balance check a second or more after you’ve done a transfer and sees the wrong balance, it will only matter when they are embarrassed 20 seconds later trying to use that balance, which isn’t there anymore.
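For illustration only, here is a minimal Python sketch of that difference, with SQLite standing in for the bank’s database; the schema and figures are invented. A conditional, atomic debit lets one withdrawal succeed and declines the other, where a read-then-write “query model” would happily let both through.

```python
# Minimal sketch: why a transactional system lets one of two racing
# withdrawals proceed and fails the other. SQLite stands in for the
# bank's database; the schema and figures are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
db.execute("INSERT INTO accounts VALUES ('joint', 1000)")
db.commit()

def withdraw(amount):
    """Atomically debit the account only if the funds are really there."""
    cur = db.execute(
        "UPDATE accounts SET balance = balance - ? "
        "WHERE id = 'joint' AND balance >= ?", (amount, amount))
    db.commit()
    return cur.rowcount == 1   # 1 row updated = success, 0 = declined

print(withdraw(800))   # True  -- your transfer goes through
print(withdraw(800))   # False -- your partner's withdrawal is declined
# A query-only model would have shown both of you the same 1000 balance
# and let both requests through, leaving the account at -600.
```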

Of course, you can argue banks don’t work like that; they reconcile balances at the end of the day. You’ll care, though, when that exception balance charge kicks in because both transactions went through. Most bank systems are legacy systems from a different perspective, and should be dead. We, as customers, have been pushing for straight-through processing for years; why should I wait three days for a check to clear?

So you can’t have it both ways. Out of genuine professional understanding and interest, I’d like to see any genuine transaction-based systems that are largely or wholly services based, or that run in the cloud.

In order to do what Dana advocates and move off ALL legacy systems, those transaction systems need to cope with 1,000, and up to 2,000, transactions per second. Oh yeah, it’s not just banks that use “legacy IT”; there are airlines, travel companies, anywhere there is a finite product and an almost infinite number of customers.

Remember, Amazon, eBay and PayPal don’t do their own credit card processing as far as I’m aware; they are just merchants who pass the transaction on to a, err, legacy system.

Some background reading should include a paper I used early in my career, around the time I was advocating moving Chemical Bank NY’s larger transaction systems to virtual machines, which we did. I was attending VM internals education at Amdahl in Columbia, MD, and one of the instructors thought I might find the paper useful.

It was written in 1984 by a team at Tandem Computers and Amdahl, including the late, great Jim Gray. Early on in the paper they describe environments that supported 800 transactions per second, in 1984. Yes, 1984. These days, even in the current economic environment, 1,000 tps is common, and 2,000 tps is table stakes.

Their paper is preserved and online here on microsoft.com.

And finally, since I’m all about context: I’m an employee of Dell; I started work there today. What is written here is my opinion, based on 34 years of IT experience, much of it garnered at the sharp end, designing an I/O subsystem to support a large NY bank’s transactional, inter-bank transfer system, as well as being responsible for the world’s first virtualized credit card authorization system, etc. But I didn’t work for Dell, or for that matter IBM, back then.

Speakers’ Corner, anyone?

Is SOA dead?

There has been a lot of fuss since the start of the new year around the theme “SOA is dead”. Much of this has been attributed to Anne Thomas Manes’ blog entry on the Burton Group’s blog, here.

InfoWorld’s Paul Krill jumped on the bandwagon with an SOA obituary, quoting Anne’s work and saying “SOA is dead but services will live on”. A quick-fire response came on a number of fronts, like this one from Duane Nickull at Adobe, and then this from James Governor at RedMonk, where he charismatically claims “everything is dead”.

First up: this has happened many times in my career, and James touches on a few of the key ones, since we were there together. Or rather, I took advantage of his newness and thirst for knowledge as a junior reporter to explain to him how mainframes worked, and what the software could be made to do. I knew from ten years before I met James that evangelists, and those with an agenda, would often claim something was “dead”. It came from the early 1980s mainframe “wars”. Yes, before there was a PC, we were having our own internal battles: this was dead, that was dead, etc.

What I learned from that experience is that technical people form crowds. Just like at the public hangings of the Middle Ages, they are all too quick to stand around and shout “hang him”. These days it’s a bit more complex: first off there’s Slashdot, then we have the modern equivalent of Speakers’ Corner, aka blogs, where often those who shout loudest and most frequently get heard most often. However, what most people want is not a one-sided rant, but to understand the issues. Claiming anything is dead often gives the claimer the right not to understand the thing that is supposedly “dead”, but to just give reasons why that must be so and move on to giving advice on what you should do instead. It was a similar debate last year that motivated me to document my “evangelism” years on the about page of my blog.

The first time I heard “SOA is dead” wasn’t Anne’s blog post; it wasn’t even, as John Willis (aka botchagalupe on Twitter) claims in his Cloud Drop #38, him and Michael Coté of RedMonk last year. No sir, it was back in June 2007, when theregister.co.uk reprinted a piece by Clive Longbottom, Head of Research at Quocirca, under the headline SOA – Dead or Alive?

Clive got closest to the real reasons why SOA came about, in my opinion, and thus why SOA will prevail, despite rumours of its demise. It is not just about services, from my perspective; it is about truly transactional services, which are often part of a workflow process.

Not that I’m about to claim that IBM invented SOA, or that my role in either the IBM SWG SOA initiative or the IBM STG services initiative was anything other than that of a team player rather than a lead. However, I did spend much of 2003/4 working across both divisions, trying to explain the differences and similarities between the two, and why each needed the other, or at least its relationships. And then IBM marketed the heck out of SOA.

One of the things we wanted to do was to unite the different server types around a common messaging and event architecture. There was almost no requirement for this to be synchronous, and a lot of reasons for it to be services based. Many of us had just come from the evolution of object technology inside IBM and from working on making Java efficient within our servers. Thus a services-based approach seemed, for many reasons, the best one.

However, when you looked at the types of messages and events that would be sent between systems, many of them could be crucial to the effective and efficient running of the infrastructure; they had, in effect, transactional characteristics. That is, a given event could initiate action-a, then b, then c, and finally d. While action-d could be started before action-c, it couldn’t be started until action-b was completed, and that was dependent on action-a. Importantly, none of these actions should be performed more than once for each instance of an event.

Think failure of a database or transactional server: create a new virtual server, boot the OS, start the application/database server, roll back incomplete transactions, take over the network, etc. Or similar.
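As a toy illustration of those two properties, dependency ordering plus at-most-once execution per event instance, here’s a short Python sketch. The action names and dependencies are invented, and a real system would persist the completion record rather than hold it in memory.

```python
# Toy sketch of the two properties described above: actions run in
# dependency order, and each runs at most once per event instance.
# Action names and dependencies are invented for illustration.

deps = {            # action -> actions that must complete first
    "a": [],
    "b": ["a"],
    "c": ["b"],
    "d": ["b"],     # d needs b (and hence a), but not c
}

def run_event(event_id, done=None):
    """Execute all actions for one event instance, at most once each."""
    done = done if done is not None else set()  # a real system persists this

    def run(action):
        if (event_id, action) in done:          # idempotence guard
            return
        for prereq in deps[action]:
            run(prereq)                         # honor ordering constraints
        print(f"event {event_id}: performing action-{action}")
        done.add((event_id, action))

    for action in deps:
        run(action)
    return done

state = run_event("db-failover-001")
run_event("db-failover-001", state)   # replay: nothing is performed twice
```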

Around the same time inside IBM, Beth Hutchison and others at IBM Hursley, along with smart people like Steve Graham, now at EMC, and Mandy Chessell, also of IBM Hursley, were trying to solve similar transactional-type problems over HTTP using web services.

While the Server Group folks headed down the Grid, Grid Services, and ultimately Web Services Resource Framework path, inside IBM we came to the same conclusion: incompatible messages, incompatible systems, different architectures, legacy systems, etc. need to interoperate, and for that you need a framework and a set of guidelines. Build this out from the infrastructure layer to the application level; add in customer applications on that framework; then scale it in any meaningful way, one that needs more than a few programmers working concurrently on the same code or the same set of services, and what you needed was a services oriented architecture.

Now, I completely get the REST style of implementation and programming. There is no doubt that it could take over the world; from the perspective of those frantically building web mashups and cloud designs, it already has. Yet in none of the “SOA is dead” articles has anyone effectively discussed synchronous transactions; in fact, apart from Clive Longbottom’s piece, no real discussion was given to workflow, let alone the atomic transaction.

I’m not in denial here about what Amazon and Google are doing. Sure, both do transactions, and both were built from the ground up around a services-based architecture. Many of those who argue that “SOA is dead” are often those who want to move on to the emperor’s new clothes. However, as fast as applications are being moved to the cloud, many businesses are nowhere in sight of moving to or exploiting the cloud. To help them get there, they’ll need to know how to do it, and for that they’ll need a roadmap, a framework, and a set of guidelines, and if it includes their legacy applications and systems, how they get there. For that, they’ll likely need more than a strategy; they’ll need a services “oriented” architecture.

So, I guess we’ve arrived at the end, the same conclusion that many others have come to. But for me it is always about context.

I have to run now, literally. My weekly long run is Sunday afternoon and my running buddy @mstoonces will show up any minute. Also, given I’m starting my new job, I’m not sure how much time I’ll have to respond to your comments, but I welcome the discussion!

And so on Amazon and clouds

Here is the post I mentioned in yesterday’s “Clouds and the governor” post. I’ve deleted some duplicate comments but wanted to publish some of the things left over.

It was an unexpected pleasure to catch up with RedMonk maestro and declarative liver(?) James Governor over Christmas, while back in the UK. It wasn’t a tale of Christmas past, but it was certainly good to see him at Dopplr mansions in East London. Sorry to Matt and the Dopplr guys for busting in on them in my xmas hat and not introducing myself.

James and I didn’t have much time together. I’d just got through handing in my IBM UK badge, returning all three of their laptops, and bidding farewell to Larry, Colin and Paul, and wanted to head off to see my parents. We squeezed in a quick coffee and a chat; James was keen to discuss his theory on Linux distributions, and I didn’t have any reason to pitch for or against it, so I just told him what I knew. We didn’t have time for much else. We did briefly discuss Erlang, both as a language and for its exploitation of multi-core, multi-threaded chips, and I’ll come back to that one day. What we didn’t get to discuss was Amazon, cloud computing, and James’ on/off theory about IBM and Amazon.

There is no doubt in my mind that on demand computing, cloud, ensembles, call-it-what-you-will computing is happening and will continue apace. I’ve been convinced since circa ’98, and spent six weeks one summer in 1999 with Nigel Dessau, now a StorageTek/Sun VP, then an IBM System z marketing guy, getting me in to see IBM execs to discuss the role of utility computing. After that I did a stint in the early Grid days, and then on demand architecture and design.

So, what’s this with Amazon? Yes, their EC2 and S3 offerings are pretty neat; yes, Google is doing some fascinating things building its own datacenters and machines, and so are Microsoft and plenty of others. One day, is it likely that most computing will come over the wire, over the air, from the utility? Yes.

That’s not just a client statement (there is plenty of proof that is happening already) but a server and applications statement. Amazon’s APIs are really useful. I wish we had some application interfaces and systems that worked the same way, or perhaps, as James might have it, that we had Amazon’s web services, perhaps without the business that goes behind them. Are we interested in Amazon? I don’t know; I’m in neither corporate nor IBM Software Group business development.

It comes back to actionable items: buying, partnering with, or otherwise adopting Amazon’s web services really wouldn’t move the ball forward for the bulk of our customers.

Sure, it would open up a whole new field of customers who are looking for innovative ways to get computing at lower cost, but so are our existing customers, and it would be of little use short term as there are few tools built around it. I work at a company that helps customers. There are some things we are doing that are very interesting for the future, but what is more interesting is bridging from the current world and the challenges of doing that. Like every new technology, cloud computing will have to be eased into. We can’t suddenly expect customers to drop what they have and get up into the clouds, and so that means integration.

Clouds and the governor

I’ve been meaning to respond to Monkchips’ speculation over IBM and Amazon from last year and his follow-up on why Amazon doesn’t need IBM. James and I met up briefly before Christmas, the day I resigned from IBM UK, but we ran out of time to discuss it. I wrote and posted a draft and never got around to finishing it; I was missing context. Then yesterday James published a blog entry entitled “15 Ways to Tell Its Not Cloud Computing”.

The straw that broke the camel’s back came today, on chinposin Friday, when James was clearly hustling for a bite and tweeted “amazed i didn’t get more play for cloud computing blog”.

Well, here you go, James. Your simple list of 15 reasons why it is not a cloud is entertaining, but it’s not analysis, it’s cheerleading.

I’m not going to trawl through the list and dissect it one by one; I’ll just go with the first entry and then revert to discussing the bigger issue. James says, “If you peel back the label and it says ‘Grid’ or ‘OGSA’ underneath… its not a cloud.” Why is that, James? How do you advocate organizations build clouds?

IBM’s new Enterprise Data Center vision

IBM announced today our new Enterprise Data Center vision. There are lots of links from the new ibm.com/datacenter web page, which split out into the various constituencies: Virtualization, Energy Efficiency, Security, Business Resiliency, and IT Service Delivery.

To net it out from my perspective, though: there is a lot of good technology behind this, and an interesting direction, summarized nicely starting on page 10 of the POV paper linked from the new data center page, or here.

What it lays out are the three main stages of adoption for the new data center: simplified, shared, and dynamic. The Clabby Analytics paper, also linked from the new data center page or here, puts the three stages in a more consumable, practical tabular format.

They are really not new; many of our customers will have discussed them with us many times before. In fact, it’s no coincidence that the new Enterprise Data Center vision was launched the same day as the new IBM z10 mainframe. We started discussing and talking about these things when I worked for Enterprise Systems in 1999, and we formally laid the groundwork in the on demand strategy in 2003. In fact, I see the Clabby paper has used the on demand operating environment block architecture to illustrate the service patterns. Who’d have guessed?

Simplify: reduce costs for infrastructure, operations and management

Share: for rapid deployment of infrastructure, at any scale

Dynamic: respond to new business requests across the company and beyond

However, the new Enterprise Data Center isn’t based on a mainframe, z10 or otherwise. It’s about a style of computing: how to build, migrate, and exploit a modern data center. Power Systems has some unique functions in both the Share and Dynamic stages, like partition mobility, with lots more to come.

For some further insight into the new data center vision, take a look at the presentation linked off my On a Clear day post from December.

WSDM Collectors and Tivoli Agents

One of the emerging and in-use technologies for both systems and platform management is Web Services Distributed Management (WSDM). In this article on IBM developerWorks, Kyle Croutwater gives a good intro and example of how to use the IBM Tivoli Monitoring (ITM) Universal Agent to consume and monitor a WSDM-compliant interface for a manageable resource using the WSDM Generic Collector Engine (WGCE).

Kyle’s article can be found on developerWorks here.

Get online with WSDM

I like scheduling calls for my morning drive to the office. It is a good time for me: I’m alert, I can be focused on two things at once, and despite the complaints about the traffic in and around Austin, I can make it from my downtown SoCo house to the IBM office on Burnet Road in north Austin comfortably in 30 minutes, usually 20.

This morning’s call was with Trevor, to go over a number of virtualization-related topics, including things like partition migration, hosting partitions (or, as I prefer to call them, service partitions), virtualization management, blade virtualization, and more. I pointed him to the virtualization white paper, and before I knew it I was sitting in the Building 045 parking lot.

What we didn’t get to discuss was WSDM and its use to manage and monitor virtualized environments, including partitions, virtual machines, and service partitions. Mike Baskey, another Distinguished Engineer, and I used to work together on the on demand team; Mike has now moved over to SWG and is leading the Infrastructure Solutions, Networking and Management Standards effort. He is also the current chair of the Distributed Management Task Force (DMTF).

Mike and I had a long conversation about WSDM last night: recent developments, plans, etc. It reminded me of a number of customer-related projects and how WSDM is a great solution for exposing all sorts of system information in a vendor-independent way. WSDM can be implemented easily; it can expose native resources, or it can be used with a WS-CIM bridge, which allows existing CIM-instrumented resources to be exposed and managed through WSDM.
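As a toy illustration of that bridge idea, here’s a heavily simplified Python sketch: existing CIM-style instrumentation is adapted behind a generic “resource property” lookup, which is roughly how WSDM’s resource-property model presents things. The class and property names are invented; a real bridge maps namespaces, schemas, and faults, not just dictionary keys.

```python
# Toy sketch of a WS-CIM bridge: an existing CIM-style provider is
# exposed through a generic, vendor-independent resource-property
# interface. All names here are invented for illustration.

class CimInstrumentedServer:
    """Stands in for an existing CIM provider on a managed system."""
    def get_instance(self):
        return {"Name": "server-01", "EnabledState": 2, "NumberOfCPUs": 4}

class WsdmResourceFacade:
    """Presents any backend as WSDM-style resource properties."""
    def __init__(self, backend, property_map):
        self.backend = backend
        self.property_map = property_map   # WSDM property -> CIM property

    def get_resource_property(self, qname):
        instance = self.backend.get_instance()
        return instance[self.property_map[qname]]

facade = WsdmResourceFacade(
    CimInstrumentedServer(),
    {"muws:Name": "Name", "muws:OperationalStatus": "EnabledState"})

# A management app asks for properties by name, never caring whether
# the data came from CIM, a native resource, or something else.
print(facade.get_resource_property("muws:Name"))                # server-01
print(facade.get_resource_property("muws:OperationalStatus"))   # 2
```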

If the industry gets behind WSDM it would be great: no more proprietary interfaces to hardware and software for management, and the ability to manage and monitor devices, servers, storage, and virtualized resources irrespective of platform or vendor. Management apps can be built in a modular fashion by linking services; in fact, this is a key way that operating systems and systems management products can be built as independent services, much like a service oriented architecture for infrastructure.

When I had a few minutes this evening, I went away to find a good source of education, and some samples and examples of WSDM that I can use as a refresher over the weekend. I found this excellent page on IBM developerWorks.

Written by Dan Jemiolo, an advisory software engineer at IBM, it looks like a great place to start. If you are involved with more than one systems platform, or have systems from multiple vendors, including IBM and Cisco, it might well be worth taking a look, installing the samples, and creating a WSDM server interface for an HTTP server with Apache Muse.

Let me know how you get on, or if you have any comments.

ps. Tomorrow’s drive-and-talk is with an account team, on how their customer can exploit the IBM Dynamic Infrastructure for mySAP on their System p servers.

Complexity or completeness?

I was asked again this morning about complexity, in relation to my view on both hardware and software. It would all be so simple if we were a start-up: provided we gave you the “power to leave”, you could have it our way, or no way.

When I got back to my desk I went looking for a blog comment I wrote on complexity. For completeness and because it came up this morning, here is my response.

“The real challenge that IBM faces, though, is not the complexity of our products, but the complexity of our customers.

If we were a small software company, period, or an organisation that could do just a single product, we could say “there, that’s SOA/ESB”, it’s great, like it or lump it.

However, that wouldn’t be much use for the millions of customers, across dozens of OSs and four hardware platforms, built up over 30 years, who want to embrace SOA. Sure, many of them can and will do it without our help. Heck, some of them even do it without our products ;-( but generally, while we often have intimate knowledge and understanding of their systems, they still want a shopping list of options rather than to just do what we say.

So that leads not to complexity, but rather to completeness: many products with interfaces to, and programmability for, services-based applications and infrastructure.

As always, people would like a single message, a single voice, but mostly customers don’t want a single product unless it’s the one they’re currently heavily exploiting. Even then, they want something else to integrate to it, with it, or from it.

This is why open is key. Embracing web services, getting involved, and implementing WSRF, WSDM et al. will pay off in the mid-term for both customers and IBM. The ability to implement applications around a services base, with a strong mediation engine that participates in and can support a robust set of open industry standards, is key.”


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and a member of the IBM Academy of Technology. I am a Fellow of the British Computer Society (bcs.org). I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
