Archive for the 'SOA' Category

App Internet and the FT

Picture of various walled gardens

Walled Gardens

Former colleague Simon Phipps reports on the FT's (Financial Times) move to escape the app trap, which I discussed in my earlier App Internet post. Simon usefully covers a number of points I didn't get to, so it's worth reading the full article here on ComputerWorld (UK).

This is a great example: judging by the feature list on their web page, they've clearly done a great job, but since I don't have an iPhone or iPad, I couldn't try it out.

Simon makes an interesting point: the FT is incurring some risk in that it is not "in" the app store, and therefore doesn't get included in searches by users looking for solutions. This is another reason why app stores are just another variation on walled gardens. Jeff Atwood has a good summary of the arguments on why walled gardens are a bad thing here. In his 2007 blog, Jeff says "we already have the world's best public social networking tool right in front of us: it's called the internet" and goes on to talk about publicly accessible web services, in that instance, rather than app stores.

One of the things that never really came to pass with SOA was the idea of public directories. App stores, and their private catalogs, are directories, but they have a high price of entry, as Simon points out. What we need now, to encourage the move away from app stores, is an HTML5 app store directory. It really is little more than an online shopping catalog for bookmarks, but it includes all the features and functions of walled-garden app store catalogs, the only exception being the code itself. In place of the download link would be a launch, go, or run now button or link.

We’d only need a few simple, authorized REST-based services to create, update, and delete catalog entries, not another all-encompassing UDDI effort, although it could learn from, and perhaps adapt, something like the UDDI Green Pages. This is way out of my space; does anyone know if there are efforts in this area? @cote? @Monkchips? @webmink?
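To make the idea concrete, here's a minimal Python sketch of the handful of create/update/delete (and search) operations such a directory might offer. The resource model, field names, and paths in the comments are entirely my own invention, standing in for what would really be authorized REST calls over HTTP; it's a sketch of the shape of the service, not a proposal.

```python
# Hypothetical sketch of an HTML5 app directory: an in-memory stand-in
# for the simple, authorized REST catalog services described above.
import json

class AppDirectory:
    """Toy catalog: each method mirrors one REST operation."""

    def __init__(self):
        self._entries = {}   # app_id -> catalog entry

    def create(self, app_id, name, launch_url, description=""):
        # POST /apps -- register a catalog entry; the "download" link of
        # a walled-garden store becomes a plain launch URL.
        if app_id in self._entries:
            raise ValueError("entry already exists")
        self._entries[app_id] = {
            "name": name,
            "launch_url": launch_url,     # the "launch, go, or run now" link
            "description": description,
        }
        return self._entries[app_id]

    def update(self, app_id, **fields):
        # PUT /apps/{id} -- amend an existing entry
        self._entries[app_id].update(fields)
        return self._entries[app_id]

    def delete(self, app_id):
        # DELETE /apps/{id}
        del self._entries[app_id]

    def search(self, term):
        # GET /apps?q=term -- the discovery step users lose when an app
        # sits outside a store's walled garden
        term = term.lower()
        return [e for e in self._entries.values()
                if term in e["name"].lower() or term in e["description"].lower()]

directory = AppDirectory()
directory.create("ft", "FT Web App", "https://app.ft.com",
                 "Financial Times HTML5 edition")
print(json.dumps(directory.search("times"), indent=2))
```

The point of the sketch is how little is needed: unlike UDDI, there's no type taxonomy or binding templates, just bookmarks with metadata behind four verbs.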

Senior Architect – Enterprise Systems Management and more

With things really rolling here at Dell on the software front, we are still in the process of hiring, and are looking for some key people to fit into, or lead, teams working on current and future software projects. At least currently, these are based with our team here in Round Rock, TX. However, I'd also like to hear from you if you'd be interested in joining our Dell west coast software labs in Sunnyvale and Palo Alto.

Here are a few of the current vacancies:

Senior SOA Architect – Enterprise Systems Management
Performance Engineer – SOA Infrastructure Management
Senior Java Developer – Systems Management
Senior Software Engineer – Systems Management-10069ZNS

Depending on how you count, there are over 100 of us now working on the VIS and AIM products, with a whole lot more to come in 2011. Come join me, help make a fundamental change at Dell, and be in on the beginning of something big!

Dell’s Virtual Integrated System

Open, Capable, Affordable - Dell VIS


Travel is always interesting; you learn so many new things. And so it was today. We arrived in Bangalore yesterday to bring two of the sprint teams working on our “Maverick” design up to speed.

In an overview of the “product” and its packaging, we briefly discussed naming. I was under the impression that we’d not started publicly discussing Dell’s Virtual Integrated System (VIS). Well, I was wrong, as one of the team pointed out.

It turns out a Dell.com web site already has overview descriptions of three of the core VIS offerings: VIS Integration Suite, VIS Delivery Center, and VIS Deploy infrastructure. You can read the descriptions here.

Essentially, Maverick is a service-oriented infrastructure (SOI), built from modular services, pluggable components, transports, and protocols, that will allow us to build various product implementations and solutions from a common management architecture. It’s an exciting departure from traditional monolithic systems management products, or from the typically un-integrated products which use different consoles and different terms for the same things, and which, to get the best use out of them, require complex and often long services projects, or require you to change your business to match their management.

More jobs news

We are making great progress on filling out the teams. My 2nd pilot technology program started with a bang last week, building an embedded processor stack based on ServiceMix, and my 3rd pilot, to test some key technologies like AMQP and Cassandra, is taking shape. However, we need to backfill some of the work I’ve been doing, and some of the work done by the consultants we’ve had on staff.

Amongst the vacancies we have open are “Senior SOA Architect – Enterprise Systems Management-1003LEFS” and “Senior Software Engineer, Systems Management-10069ZNS”.

A good place to get a list of Dell jobs in Round Rock is here on the cnn.com website. If you are interested in working with some of our recent acquisitions out on the west coast, including Scalent(Dell AIM) or Kace, check out this link.

Cote on Consumer to Enterprise

REST Interface slide from Cote presentation


Over on his people over process blog, RedMonk analyst Michael Cote has a great idea: a rehearsal of an upcoming presentation, including slides and audio.

The presentation covers what technology is making the jump from the consumer side of applications and IT into the enterprise. I’m delighted to report Cote has used a quote from me on REST.

For clarification, the work we are doing isn’t directly related to our PowerEdge C servers, or our cloud services. For that, Dell customer Rackspace Cloud has some good REST APIs and is well ahead of us; in fact, I read a lot of their documentation while working on our stuff.

On the other hand, I’m adamant that the work we are doing, adding a REST-like set of interfaces to our embedded systems management, is not adding REST APIs. Also, since I contributed requirements and participated in discussions around WS-* back when I was at IBM, I’d say we were trying to solve an entirely different set of problems then, and hence REST is now the right answer to externalize the data needed for a web-based UI.

At the same time, we will also continue to offer a complete implementation of WS-Management (WSMAN). WSMAN is a valuable tool to externalize the complexity of a server, in order for it to be managed by an external console or control point. Dell provides the Dell Management Console (DMC), which consumes WSMAN and provides one-to-many server management.

The point of the REST interfaces is to provide a simple way to get the data needed for display in a web UI; we don’t see having to expose all the same data, and we can use a much more lightweight infrastructure to process it. At the same time, it’s an objective of this project to keep the UI simple for one-to-one management. Customers who want a more complex management platform will be able to use DMC, or exploit the WSMAN availability.
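The split can be sketched in a few lines. Everything here is an illustrative assumption on my part (the paths, attribute names, and data are invented, not Dell's actual interfaces); it simply shows the pattern of a REST-like, read-mostly facade that projects out only the fields a web UI needs, while the full model stays behind a console protocol like WSMAN.

```python
# Illustrative sketch: a UI-facing facade over a much larger
# management data model. Names and values are invented.
import json

# Pretend this is the embedded controller's full (WSMAN-sized) model;
# a one-to-many console wants all of it, the web UI does not.
FULL_MODEL = {
    "System.ServiceTag": "ABC1234",
    "System.PowerState": "On",
    "System.Thermal.Inlet": 23,
    "System.Bios.RawTokens": ["..."] * 500,  # console-level detail the UI never shows
}

# Each UI resource exposes only a small projection of the model.
UI_VIEW = {
    "/ui/system/summary": ["System.ServiceTag", "System.PowerState"],
    "/ui/system/thermal": ["System.Thermal.Inlet"],
}

def get(path):
    """Handle GET on a UI resource: return just the projected fields as JSON."""
    fields = UI_VIEW.get(path)
    if fields is None:
        return 404, "{}"
    body = {f: FULL_MODEL[f] for f in fields}
    return 200, json.dumps(body)

status, body = get("/ui/system/summary")
print(status, body)
```

The design point is that the facade never has to serve the whole model, which is why the infrastructure behind it can stay so lightweight compared with a full WSMAN stack.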

EMC World – standards?

Tucci and Maritz at EMC World 2009


I’ve been attending the annual EMC World conference in Orlando this week. A few early comments: there has been a massive, 64,000ft shift to cloud computing in the messaging, but less so at ground level. There have been one or two technical sessions, but none on how to implement a cloud, put data in a cloud, or manage data in a cloud. Maybe next year?

Yesterday in the keynote, Paul Maritz, President and CEO of VMware, said that VMware is no longer in the business of individual hypervisors but in stitching together an entire infrastructure: a single sentence laying out clearly where they are headed, if it wasn’t clear before. In his keynote this morning, Mark Lewis, President, Content Management and Archiving Division, was equally clear about the future of information virtualization, talking very specifically about federation and distributed data, with policy management. He compared that to a consolidated, centralized vision which, he clearly said, hadn’t worked. I liked Lewis’s vision for EMC Documentum xCelerated Composition Platform (xCP) as a next-generation information platform.

However, so far this week there has been little real mention of standards or openness, especially after this afternoon’s “Managing the Virtualized Data Center” BOF, where I asked the first and last questions on standards, neither of which got a decent discussion.

Generally, while vendors like to claim standards compliance and involvement, they don’t like standards. Historically, standards have tended to slow down implementation. That hasn’t been the case with some of the newer technologies, but at least some level of openness is vital to allow fair competition, and competition almost always drives down end-user costs.

Standards are of course not required if you can depend on a single vendor to implement everything you need, as you need it. However, as we’ve seen time and time again, that just doesn’t work, something gets left out, doesn’t get done, or gets a low priority from the implementing vendor, but it’s a high priority for you – stalemate.

I’ll give you an example: you are getting recoverable errors on a disk drive. Maybe it’s directly attached; maybe it’s part of a SAN or NAS. If you need to run multiple vendors’ servers and/or storage/virtualization, who is going to standardize the error reporting, logging, alerting, etc.?

The vendors will give you one of a few canned answers:

1. It’s the hardware vendor’s job (i.e. they pass the buck).
2. They’ll build agents that can monitor this for the most popular storage systems (i.e. you are dependent on them, and they’ll do it for their own storage/disks first).
3. They’ll build a common interface through which they can consume the events (i.e. you are dependent on the virtualization vendor AND the hardware vendor to cooperate).
4. They are about managing across the infrastructure for servers, storage, and network (i.e. they are dodging the question).

There are literally hundreds of examples like this if you run anything except a dedicated, single-vendor stack of hardware plus virtualization. That seems to be where Cisco and Oracle are lining up. I don’t think this is a fruitful direction and can’t really see it as advantageous to customers or vendors. Notwithstanding the cloud (Google, Amazon, et al.), where you don’t deal with hardware at all but have a whole separate set of issues, standards and openness are equally important there too.

In an early-morning session today, Tom Maguire, Senior Director of Technology Architecture, Office of the CTO, presented on EMC’s Service-Oriented Infrastructure Strategy: Providing Services, Policies, and Architecture Models. Tom talked about loose coupling, and about defining stateful and REST interfaces that would allow EMC to build products that “snap” together and don’t require a services engagement to integrate them. He also talked about moving away from “everyone discovering what they need” to a common, federated fabric.

This is almost as powerful a message as that of Lewis or Maritz, but it will get little or no coverage. If EMC can deliver and execute on this, and do it in a de jure or de facto published standard way, it will indeed give them a powerful platform that companies like Dell can partner in, bringing innovation and competitive advantage for our customers.

Oh, now it’s legacy IT that’s dead. Huh?

I got a pingback from Dana Gardner’s ZDNet blog for my “Is SOA dead?” post. Dana, rather than addressing the issue I raised yesterday, just moved the goalposts, claiming “Legacy IT is dead“.

I agree with many of his comments, and my earlier post “Life is visceral” is one Dana so ably goes on to prove with his. I liked some of the fine flowing language, some of it almost poetic, especially this: “We need to stop thinking of IT as an attached appendage of each and every specific and isolated enterprise. Yep, 2000 fully operational and massive appendages for the Global 2000. All costly, hugely redundant, unique largely only in how complex and costly they are all on their own.” Whatever that means?

However, let me pose a reasonable challenge for anyone considering jumping to a complete services or cloud/services model: not migrating, not having a roadmap or architecture to get there, but, as Dana suggests, grasping the nettle and just doing it.

One of the simplest and easiest examples I’ve given before for why, as Dana would have it, “legacy systems” exist, is because there are some problems that just can NOT be split apart a thousand ways, whose data can NOT be replicated into a million pieces.

Let’s agree: Google handles millions of queries per second, as, probably, do eBay and Amazon. However, when the odd Google query returns nothing, as opposed to returning no results, no one really cares or understands; they just click the browser refresh button and wait. It’s pretty much the same for Amazon: the product is there, you click buy, and if every now and again there was only one item of a product left at an Amazon storefront and someone else bought it between the time you looked for it and decided to buy, you just suffer through the email saying the item will be back in stock in 5 days. After all, it would take longer than that to track down someone to discuss it with.

If you ran your banking or credit card systems this way, no one would much care when it came to queries. Sure: your partner is out shopping, and you are home working on your investments. Your partner goes to get some cash, checks the balance, and the money is there. You want to transfer a large amount of money into a money market account; you check, and the amount is there. You’ll transfer some more into the account overnight from your savings and loan, and you know your partner only ever uses credit, right? You both proceed. A real transactional system lets one of you proceed and the other fail, even if there is only a second, and possibly less, between your transactions coming in.

In the Google model, this doesn’t matter; it’s all only queries. If your partner does a balance check a second or more after you’ve done a transfer and sees the wrong balance, it will only matter when they are embarrassed 20 seconds later, trying to use a balance that isn’t there anymore.
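A toy sketch makes the difference plain. The account model here is invented purely for illustration; it shows why a read-then-write, query-style update lets both parties succeed against the same balance, while a transactional check-and-debit fails one of them.

```python
# Toy illustration: lost update under query-style access vs. a
# conditional (transactional) update. Values are invented.
balance = {"checking": 100}

def withdraw_naive(amount, snapshot):
    # Each party acts on the balance they saw earlier -- the query
    # model: nobody re-checks at commit time.
    balance["checking"] = snapshot - amount
    return True

def withdraw_txn(amount):
    # A real transactional system checks and debits atomically;
    # whichever request arrives second fails.
    if balance["checking"] >= amount:
        balance["checking"] -= amount
        return True
    return False

# Both partners saw $100 available.
snapshot = balance["checking"]
withdraw_naive(80, snapshot)   # partner takes cash
withdraw_naive(80, snapshot)   # the transfer also "succeeds"
print("naive final balance:", balance["checking"])  # 20, yet $160 left the account

balance["checking"] = 100
ok1 = withdraw_txn(80)
ok2 = withdraw_txn(80)
print("transactional:", ok1, ok2, balance["checking"])  # True False 20
```

In the naive run the books no longer balance; in the transactional run one withdrawal is refused, which is exactly the behavior the post argues you cannot give up in banking.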

Of course, you can argue banks don’t work like that; they reconcile balances at the end of the day. You’ll care, though, when that exceptional balance charge kicks in because both transactions went through. Most banks’ systems are legacy systems from a different perspective, and should be dead. We, as customers, have been pushing for straight-through processing for years; why should I wait 3 days for a check to clear?

So you can’t have it both ways. Out of genuine professional understanding and interest, I’d like to see any genuine transaction-based systems that are largely or wholly services based, or that run in the cloud.

In order to do what Dana advocates, and move off ALL legacy systems, those transaction systems need to cope with 1,000, and up to 2,000, transactions per second. Oh yeah, it’s not just banks that use “legacy IT”; there are airlines, travel companies, anywhere there is a finite product and an almost infinite number of customers.

Remember, Amazon and eBay and PayPal don’t do their own credit card processing as far as I’m aware; they are just merchants who pass the transaction on to an, err, legacy system.

Some background reading should include a paper I was given early in my career, around the time I was advocating moving Chemical Bank NY’s larger transaction systems to virtual machines, which we did. I was attending VM Internals education at Amdahl in Columbia, MD, and one of the instructors thought I might find the paper useful.

It was written in 1984 by a team at Tandem Computers and Amdahl, including the late, great Jim Gray. Early on in the paper they describe environments that supported 800 transactions per second, in 1984. Yes, 1984. These days, even in the current economic environment, 1,000tps is common, and 2,000tps is table stakes.

Their paper is preserved and online here on microsoft.com.

And finally, since I’m all about context: I’m an employee of Dell; I started work there today. What is written here is my opinion, based on 34 years of IT experience, much of it garnered at the sharp end, designing an I/O subsystem to support a large NY bank’s transactional, inter-bank transfer system, as well as being responsible for the world’s first virtualized credit card authorization system, etc. But I didn’t work for Dell, or for that matter IBM, then.

Speakers’ corner, anyone?


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and member of the IBM Academy of Technology. I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
