And so on Amazon and clouds

Here is the post I mentioned in yesterday's Clouds and the governor post. I've deleted some duplicate comments but wanted to publish some of the things left over.

It was an unexpected pleasure to catch up with Redmonk maestro and declarative liver(?) James Governor over Christmas, while back in the UK. It wasn't a tale of Christmas past, but it was certainly good to see him at Dopplr mansions in East London. Sorry to Matt and the Dopplr guys for busting in on them in my Xmas hat and not introducing myself.

James and I didn't have much time together. I'd just got through handing in my IBM UK badge, returning all three of their laptops, and bidding farewell to Larry, Colin and Paul, and I wanted to head off to see my parents. We squeezed in a quick coffee and a chat; James was keen to discuss his theory on Linux distributions, and since I didn't have any reason to pitch for or against it, I just told him what I knew. We didn't have time for much else, though we did briefly discuss Erlang, both as a language and for its exploitation of multi-core, multi-threaded chips, and I'll come back to that one day. What we didn't get to discuss was Amazon, cloud computing, and James's on/off theory about IBM and Amazon.

There is no doubt in my mind that on demand computing, cloud, ensembles, call it what you will, is happening and will continue apace. I've been convinced since circa '98, and spent six weeks one summer in 1999 with Nigel Dessau, now a StorageTek/Sun VP, then an IBM System z marketing guy, who got me in to see IBM execs to discuss the role of utility computing. After that I did a stint in the early Grid days, and then in on demand architecture and design.

So, what's this with Amazon? Yes, their EC2 and S3 offerings are pretty neat; yes, Google is doing some fascinating things building its own datacenters and machines, as are Microsoft and plenty of others. One day, is it likely that most computing will come over the wire, over the air, from the utility? Yes.

That's not just a client statement (there is plenty of proof that is happening already) but a server and applications statement. Amazon's APIs are really useful. I wish we had some application interfaces and systems that worked the same way, or perhaps, as James might have it, that we had Amazon's web services, perhaps without the business that goes behind it. Are we interested in Amazon? I don't know; I'm in neither corporate nor IBM Software Group business development.
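
As a concrete, purely illustrative example of why those APIs are useful, here is a minimal sketch of storage over the wire against S3, using the open-source boto library for Python. The bucket and object names are made up, and this shows the shape of the interface rather than anything IBM ships.

    # Minimal sketch: S3 as storage over the wire. Assumes AWS credentials
    # are set in the environment (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY).
    import boto

    conn = boto.connect_s3()                                # open an API connection
    bucket = conn.create_bucket('example-utility-bucket')   # illustrative name
    key = bucket.new_key('hello.txt')
    key.set_contents_from_string('Hello from the utility')  # write a byte stream
    print(key.get_contents_as_string())                     # read it back

The point is less the particular library than the shape of the interface: a credit card and a dozen lines of code stand in for a storage procurement cycle.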

It comes back to actionable items: buying, partnering with, or otherwise adopting Amazon's web services really wouldn't move the ball forward for the bulk of our customers.

Sure, it would open up a whole new field of customers looking for innovative ways to get computing at lower cost; then again, so are our existing customers, and it would be of little use to them short term as there are few tools built around it. I work at a company that helps customers. There are some things we are doing that are very interesting for the future, but what is more interesting is bridging from the current world, and the challenges of doing that. Like every new technology, cloud computing will have to be eased into. We can't suddenly expect customers to drop what they have and get up into the clouds, and so that means integration.

4 Responses to “And so on Amazon and clouds”


  1. Ewan March 17, 2008 at 5:09 am

    Hi Mark,

    I agree with a lot of your thinking here, and I do think EC2, S3 and the rest aren’t particularly relevant yet for existing companies who need integration services with legacy systems, probably have a lot of historical data sat on SAN disk arrays, and who want to move data back and forth between the old and the new systems, possibly for years after.

    But I do think IBM, Sun and HP are missing a trick by not yet providing publicly available equivalents to Amazon's web services (I know there are internal equivalents at IBM, and I'm sure Sun and HP have similar systems in place).

    My reasoning is that IBM, Sun and HP have all essentially missed out on Google and Facebook as significant customers, when traditionally speaking a company wanting to purchase 1000+ servers would have gone to the 3 of them and asked for a bid. Instead Google build their own systems, and I believe Rackable supply Facebook (along with supplying Amazon).

    By providing these easily available systems (got a credit card? here's a server), Amazon are building the platform to have the next Facebook as one of their own customers, at least up until the point where they need their own data centers. Even then, if the deal offered by Amazon to stick with them was good enough, would The Next Facebook™ switch over?

    IBM make some fantastic systems. I'm a big fan of the System p hardware after working with it for the last 7 or 8 years, and the BladeCenters are fantastic for companies like mine, where we need to supply "data center in a box" systems for customers with CRM implementations. But if a client came to me today and said "We need 10 servers tomorrow, our Facebook app just went crazy", the short-term answer right now is Amazon EC2, which could well turn into a long-term answer if the price is right, and that's not good for IBM.
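
    Ewan's "got a credit card? here's a server" scenario is easy to picture in code. A rough sketch of that ten-servers-by-tomorrow request against the EC2 API, again using the boto library for Python; the machine image id is a placeholder, not a real AMI:

        # Sketch of "we need 10 servers tomorrow" via EC2. Credentials come
        # from the environment; the image id below is a placeholder.
        import boto

        conn = boto.connect_ec2()
        reservation = conn.run_instances(
            'ami-12345678',            # placeholder AMI id
            min_count=10, max_count=10,
            instance_type='m1.small',  # the small, cheap instance size
        )
        for instance in reservation.instances:
            print(instance.id, instance.state)  # typically 'pending' at first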

  2. Paul Wallis March 17, 2008 at 5:44 pm

    Mark,

    Your posts on “The Cloud” make interesting reading.

    During 2003, the late Jim Gray published an analysis of Distributed Computing Economics:

    “’On Demand’ computing is only economical for very cpu-intensive (100,000 instructions per byte or a cpu-day-per gigabyte of network traffic) applications. Pre-provisioned computing is likely to be more economical for most applications – especially data-intensive ones.”

    And

    “If telecom prices drop faster than Moore’s law, the analysis fails. If telecom prices drop slower than Moore’s law, the analysis becomes stronger.”

    Since then, telecom prices have fallen and bandwidth has increased, but more slowly than processing power, leaving the economics worse than in 2003.

    By 2012, the proposed Blue Gene/Q will operate at about 10,000 TFLOPS, outstripping Moore's law by a factor of about 10.

    I’ve tried to put The Cloud in historical context and discussed some of its forerunners here. My take is that:

    “I’m sure that advances will appear over the coming years to bring us closer, but at the moment there are too many issues and costs with network traffic and data movements to allow it to happen for all but select processor intensive applications, such as image rendering and finite modelling.”

    I don't know when enterprises will switch to "The Cloud", but given current technological circumstances, and recent events like the Gulf cables being cut and Amazon S3 failing, today the business is being asked to take a leap of faith to put mission-critical applications in The Cloud.
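
    Gray's two numbers, 100,000 instructions per byte and one CPU-day per gigabyte, are the same break-even point stated two ways. A back-of-envelope check in Python makes that explicit; the CPU speed below is an assumed circa-2003 figure, not one of Gray's:

        # Back-of-envelope form of Gray's break-even rule: shipping a byte
        # out for remote processing only pays if you will spend roughly
        # 100,000 CPU instructions on it.
        CPU_IPS = 1.2e9          # instructions/second, assumed 2003-era CPU
        SECONDS_PER_DAY = 86400
        BYTES_PER_GB = 1e9

        # one CPU-day of work spread over one gigabyte of traffic
        break_even = CPU_IPS * SECONDS_PER_DAY / BYTES_PER_GB
        print("break-even: ~%d instructions per byte" % break_even)  # ~103680

        def worth_shipping(instructions_per_byte):
            """True if a workload is CPU-intensive enough to ship out."""
            return instructions_per_byte >= break_even

        print(worth_shipping(1e6))  # render-farm style workload: True
        print(worth_shipping(100))  # data-intensive scan: False

    At that assumed speed, one CPU-day per gigabyte works out to about 104,000 instructions per byte, within a few percent of Gray's 100,000 figure, which is why data-intensive applications stay pre-provisioned in his analysis.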

  3. cathcam March 25, 2008 at 3:25 pm

    Paul, this is clearly one of the areas where there is much confusion. You are right in much of what you say and quote from Jim Gray. The question, though, is: where is the cloud?

    Given the emergence of 10Gb Ethernet, Fibre Channel over Ethernet (FCoE) and similar technologies, the application and data issues of running a "cloud" inside your organization become much less of a problem. I chose my Hewitt Assoc. example carefully, to illustrate just that.

    IBM is, as always, in a position to provide clouds via multiple channels. First, we have a couple of groups providing "cloud" services. Then we have an initiative, through one of those groups and with Google, to build and provide clouds; there are others. These clouds, though, are more like the electricity generator model, delivering green-field "wattage" to people who need juice.

    I'm much more focused at the moment on helping customers build their own generators. It's in this sphere that I think clouds, if you will, are more usable and a more interesting business proposition for now. The first thing we need to do in this space is provide tools that let customers build, manage and deploy utility services based around a single architecture, possibly delivering a common service such as database clouds, web servers, email servers and so on.

    Next up would be delivering general services out of a common architecture or collection of machines, systems, storage and network. After that we can see if we can fit these collections or ensembles into a model where they can be cross-architecture and offer general services.

    We have some technology already in the latter space, but it is currently too complex and too services-intensive to implement; the trick will be to use the best assets of those technologies in a simpler, more consumable way. That way we can help customers, short term, get better use from their compute power and more efficient use of their electrical power, and then it is up to them to decide if and when they want to become a "public utility" and provide pure wattage.

    I agree with you: for most existing applications and databases, that's where the action is for the next 3-5 years. It's dull and unexciting, just plumbing, but good business. For the foreseeable future, though, utility/public clouds will continue to be a gold rush that some will make money on, many will heap all their aspirations on, and some will lose their shirts and possibly everything else!


  1. Cloud commentry « Adventures in systems land Trackback on March 25, 2008 at 3:28 pm






