Archive for the 'fabric' Category

vStart 200 announced – Pre-packaged private cloud

We’ve announced the details of our vStart 200 virtualization solution offering. As with the other vStart offerings, the vStart 200 is ready to run with servers, storage and networking; supports up to 200 virtual machines; includes integrated management via VMware vCenter with Dell’s management plug-in to display inventory; and offers a choice of hypervisors, validated to run on both VMware vSphere and Microsoft® Hyper-V.

  • The formal Dell vStart 200 details are here.
  • David Chernicoff has a summary over on zdnet here.

If you have any questions, feel free to ask.

Silicon Valley Engineering jobs – Engineering Technologist Distinguished Engineer at Dell. Hmm, looks like we are getting serious about networking… 🙂

EMC World – standards?

Tucci and Maritz at EMC World 2009

I’ve been attending the annual EMC World conference in Orlando this week. A few early comments: there has been a massive 64,000ft shift to cloud computing in the messaging, but less so at ground level. There have been one or two technical sessions, but none on how to implement a cloud, put data in a cloud, or manage data in a cloud. Maybe next year?

Yesterday in the keynote, Paul Maritz, President and CEO of VMware, said that VMware is no longer in the business of individual hypervisors but of stitching together an entire infrastructure: a single sentence laying out clearly where they are headed, if it wasn’t clear before. In his keynote this morning, Mark Lewis, President, Content Management and Archiving Division, was equally clear about the future of information virtualization, talking very specifically about federation and distributed data with policy management. He compared that to a consolidated, centralized vision, which he clearly said hadn’t worked. I liked Lewis’s vision for EMC Documentum xCelerated Composition Platform (xCP) as a next-generation information platform.

However, so far this week there has been little real mention of standards or openness, especially after this afternoon’s “Managing the Virtualized Data Center” BOF, where I asked the first and last questions on standards, neither of which got a decent discussion.

Generally, while vendors like to claim standards compliance and involvement, they don’t like standards. Historically, standards have tended to slow down implementation. This hasn’t been the case with some of the newer technologies, but at least some level of openness is vital to allow fair competition, and competition almost always drives down end-user costs.

Standards are, of course, not required if you can depend on a single vendor to implement everything you need, as you need it. However, as we’ve seen time and time again, that just doesn’t work: something gets left out, doesn’t get done, or gets a low priority from the implementing vendor while it’s a high priority for you. Stalemate.

I’ll give you an example: you are getting recoverable errors on a disk drive. Maybe it’s directly attached, maybe it’s part of a SAN or NAS. If you need to run multiple vendors’ servers, storage or virtualization, who is going to standardize the error reporting, logging, alerting and so on?

The vendors will give you one of a few canned answers:

  1. It’s the hardware vendor’s job (i.e. they pass the buck).
  2. They’ll build agents that can monitor this for the most popular storage systems (i.e. you are dependent on them, and they’ll do it for their own storage/disks first).
  3. They’ll build a common interface through which they can consume the events (i.e. you are dependent on the virtualization vendor AND the hardware vendor to cooperate).
  4. They are about managing across the infrastructure for servers, storage and network (i.e. they are dodging the question).
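To make the third answer concrete, here is a minimal sketch of what a common event interface might look like: vendor-specific disk error reports normalized into one neutral schema, so the monitoring and alerting layer never deals with per-vendor formats. All names and field layouts here are hypothetical illustrations, not any real standard or vendor API.

```python
from dataclasses import dataclass

@dataclass
class DiskEvent:
    """Vendor-neutral representation of a disk error event (hypothetical schema)."""
    vendor: str
    device: str    # e.g. a controller/enclosure/slot path
    severity: str  # "recoverable", "predictive-failure", or "failed"
    message: str

def normalize_vendor_a(raw: dict) -> DiskEvent:
    # Imagine vendor A reports errors as {"dev": ..., "err_class": 1, "text": ...}
    severity = {1: "recoverable", 2: "predictive-failure", 3: "failed"}
    return DiskEvent("vendor-a", raw["dev"], severity[raw["err_class"]], raw["text"])

def normalize_vendor_b(raw: dict) -> DiskEvent:
    # Imagine vendor B uses a different shape: {"path": ..., "level": ..., "msg": ...}
    severity = {"WARN": "recoverable", "PRED": "predictive-failure", "FAIL": "failed"}
    return DiskEvent("vendor-b", raw["path"], severity[raw["level"]], raw["msg"])

# The alerting layer only ever sees DiskEvent, regardless of source.
events = [
    normalize_vendor_a({"dev": "c0t2d1", "err_class": 1, "text": "media retry"}),
    normalize_vendor_b({"path": "enc1/slot4", "level": "PRED", "msg": "SMART threshold"}),
]
recoverable = [e for e in events if e.severity == "recoverable"]
```

Of course, the sketch is the easy part; the hard part is exactly the cooperation problem above — getting every hardware and virtualization vendor to agree on, and actually emit, the common schema.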

There are literally hundreds of examples like this if you need anything except a dedicated, single-vendor stack of hardware plus virtualization, which seems to be where Cisco and Oracle are lining up. I don’t think this is a fruitful direction and can’t really see it as advantageous to customers or vendors. That’s notwithstanding cloud providers such as Google and Amazon, where you don’t deal with hardware at all but have a whole separate set of issues; standards and openness are equally important there.

In an early-morning session today, Tom Maguire, Senior Director of Technology Architecture, Office of the CTO, presented EMC’s Service-Oriented Infrastructure Strategy: Providing Services, Policies, and Architecture Models. Tom talked about loose coupling, and about defining stateful and REST interfaces that would allow EMC to build products that “snap” together and don’t require a services engagement to integrate them. He also talked about moving away from “everyone discovering what they need” to a common, federated fabric.
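The pattern Maguire described can be sketched in a few lines: instead of every component discovering what it needs, components register resources into a common fabric and consumers depend only on the resource name and a uniform access contract. This is purely my illustration of the idea, with made-up names — nothing here comes from an EMC product or API.

```python
class Fabric:
    """A common registry: components publish resources under URI-like names."""
    def __init__(self):
        self._resources = {}

    def register(self, name, resource):
        self._resources[name] = resource

    def get(self, name):
        # Uniform access: consumers depend only on the name and the state
        # contract, not on which product provides the resource.
        return self._resources[name].state()

class ReplicationService:
    """A hypothetical component that snaps into the fabric."""
    def state(self):
        return {"status": "active", "lag_seconds": 3}

fabric = Fabric()
fabric.register("/services/replication", ReplicationService())

# A management product "snaps in" by reading through the fabric,
# with no services engagement to wire the two products together.
status = fabric.get("/services/replication")["status"]
```

The value is in the contract, not the registry: as long as the names and state shapes are published (de jure or de facto), any vendor can snap in on either side.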

This is almost as powerful a message as that of Lewis or Maritz, but it will get little or no coverage. If EMC can deliver and execute on this, and do it in a de jure or de facto published standard way, it will indeed give them a powerful platform that companies like Dell can partner in, bringing innovation and competitive advantage for our customers.


A funny thing happened on the way to the forum…

Ahh yes, Nathan Lane and Frankie Howerd: they represent the differences between the UK and US, in many ways so different, but in many ways so the same. I’ve been bemoaning the fact that I can’t blog about most of what I’ve been doing for the last 5 years, as it’s all design and development work, all considered by IBM to be confidential, and since none of it is open source, it’s hard to point to projects and give hints.

And so it is with the project I’m currently working on. Only this time, not only is it IBM Confidential, but it is being worked on with a partner and based on a lot of their intellectual property, so there is even less chance to discuss it in public. I’ve been doing some customer validation sessions over the last 3 months and got concrete feedback on key data center directions around data center fabric, 10Gb Ethernet, Converged Enhanced Ethernet (CEE) and more. There are certainly big gains to be made in reducing capital and operational expenditure in this space, but that’s really only the start. The real benefit comes from an enabled fabric that, rather than forcing centralization around a server (much of what we’ve been doing for the last 20 years) or around an ever more complex switch (which is where Cisco have been headed), is in and of itself the hub: the switches just provide any-to-any connectivity and low latency, enabling both existing and new applications, virtualized and otherwise, to exploit the fabric.

So, following one of my customer validation sessions in the UK, I was searching around on the Internet for a link, and I came across this one. It discusses a strategic partnership between IBM and Juniper for custom ASICs for a new class of Internet backbone devices. Only it is from 1997, who’da guessed. A funny thing happened on the way to the forum…

Designing the high-performance data center

Network World / Juniper event

I’ll be doing the opening IBM keynote at the Atlanta event on September 16th. As Dave reminded me, I used to post notices and presentations on my Corner right from the start, through the 3rd

I’m not sure when I’ll be able to post the slides, but I will as soon as I can. If you are in the Atlanta area, stop by and ask me any follow-up questions: we have a great shared story on what we are doing in this space, and this is the first public roll-out of that roadmap.

Event and registration details are here.

Spaghetti cabling

Racks and Racks of Spaghetti, photo by: Andrew McKaskill

As always, I’ve been focusing on the positive, forward-looking aspects of unified fabrics for data centers. I got a few interesting emails after my last blog entry about the problems people have now that need solving, not least the cost, reliability and sheer complexity caused by the current situation.

One of the emails included a link to this blog entry, which has a great collection of cable-mess pictures. In his email to me, Chris wrote: “most of our racks are carefully organised and formally tied off in bundles. Server replacement is relatively easy if you just want to exchange one with another that fits in the same space. The problem comes when you need to reconfigure a few servers, add some appliances, maybe remove an email backup or firewall appliance and relocate it in a different rack; you undo the cable ties and everything starts falling apart. While none of our rows looks this bad, many of the racks within the rows end up looking like this.”

He goes on to discuss some of the related problems this causes, and the frequent complete lack of momentum in solving them, given how labor-intensive and expensive it can be, in both cost and downtime, to deal with these issues. Another email included a link to this discussion forum on TechRepublic.

I have to admit, this is an area I have little or no experience with. Nancy has yet to take me on a tour of the test bed we use across the hall for the high-end Power systems, and as I’ve been locked away working on design mostly for the past 5 years, I’ve not witnessed the explosive growth in large-scale data centers. Since I’m doing customer validation sessions now, don’t be surprised if, when I come to your office, I ask to see the machine room.

About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and a member of the IBM Academy of Technology. I am a Fellow of the British Computer Society. I'm an information technology optimist.

I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
