Archive for the 'SOI' Category

Senior Architect – Enterprise Systems Management and more

With things really rolling here at Dell on the software front, we are still in the process of hiring, and are looking for some key people to fit into, or lead, teams working on current and future software projects. Currently these roles are based with our team here in Round Rock, TX; however, I’d also like to hear from you if you’d be interested in joining our Dell west coast software labs in Sunnyvale and Palo Alto.

Here are a few of the current vacancies:

Senior SOA Architect – Enterprise Systems Management
Performance Engineer – SOA Infrastructure Management
Senior Java Developer – Systems Management
Senior Software Engineer – Systems Management-10069ZNS

Depending on how you count, there are over 100 of us now working on the VIS and AIM products, with a whole lot more to come in 2011. Come join me and help make a fundamental change at Dell, and be in on the beginning of something big!

Dell’s Virtual Integrated System

Open, Capable, Affordable - Dell VIS

Travel is always interesting; you learn so many new things. And so it was today: we arrived in Bangalore yesterday to bring two of the sprint teams in our “Maverick” design and development organization up to speed.

In an overview of the “product” and its packaging, we briefly discussed naming. I was under the impression that we’d not started publicly discussing Dell’s Virtual Integrated System (VIS); well, I was wrong, as one of the team pointed out.

It turns out a Dell.com web site already has overview descriptions of three of the core VIS offerings: VIS Integration Suite, VIS Delivery Center, and VIS Deploy Infrastructure. You can read the descriptions here.

Essentially, Maverick is a service-oriented infrastructure (SOI), built from modular services, pluggable components, transports and protocols that will allow us to build various product implementations and solutions from a common management architecture. It’s an exciting departure from traditional monolithic systems management products, and from the typically un-integrated products that use different consoles and different terms for the same things, and that require complex and often long services projects to get the best use out of them, or require you to change your business to match their management.
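As a purely illustrative sketch (the names and interfaces below are hypothetical, not the actual Maverick code), a common management architecture of this kind might hide the transport behind a single contract, so that services stay pluggable and consumers never care whether messages travel in-process, over JMS, AMQP, or anything else:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

// Hypothetical transport-neutral contract: consumers only ever see this.
interface ManagementTransport {
    String send(String service, String payload);
}

// One pluggable implementation: a trivial in-process transport.
// A JMS- or AMQP-backed version could be dropped in without touching callers.
class InProcessTransport implements ManagementTransport {
    private final Map<String, UnaryOperator<String>> services = new HashMap<>();

    // Modular services register themselves against the common transport.
    void register(String name, UnaryOperator<String> handler) {
        services.put(name, handler);
    }

    @Override
    public String send(String service, String payload) {
        return services.get(service).apply(payload);
    }
}

public class SoiSketch {
    public static void main(String[] args) {
        InProcessTransport transport = new InProcessTransport();
        // A modular "inventory" service plugged into the common architecture.
        transport.register("inventory", req -> "inventory:" + req);

        // The caller codes only to the interface, keeping the transport swappable.
        ManagementTransport t = transport;
        System.out.println(t.send("inventory", "list-blades"));
    }
}
```

The point of the sketch is the seam: product implementations differ in which services and transports they plug in, while the management architecture above the `ManagementTransport` line stays common.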

Blades a go-go in Austin

We’ve been working on some interesting technology prototypes of our common software architecture. It forms the core of the “Maverick” virtualization solution, the orchestrator for the Dell Virtual Integrated System (VIS). [More on this in a follow-on post.]

We have a far-reaching outlook for the common software architecture, including embedded systems. One thing I’ve been looking at is creating a top-of-rack switch with an embedded management server. We demonstrated it to Michael Dell and the Executive Leadership Team on Monday to show them where we are with software.

The same stack and applications will power the next-generation Blade Chassis Management Controller (CMC). For VIS, we are building a set of “adjacency” services so that it can scale to thousands of physical servers. So it was with some interest that I saw this piece in the Austin American-Statesman, our “local” paper. It covers the new $9 million supercomputer at the J.J. Pickle Research Campus of the University of Texas, to be installed next year.

The newest “Lonestar” system will be built and deployed by the Texas Advanced Computing Center; it’s expected to be operational by February 2011 and will include 1,888 PowerEdge M610 blade servers from Dell Inc., each with two six-core Intel X5600 “Westmere” processors.

Our VP of Global Higher Education, John Mullen, was quoted as saying “The system will be built on open-system architecture, which means it can be expanded as needed, that’s a cost-effective switch from proprietary systems of the past.”

Another coincidence for me: the entrance to the J.J. Pickle campus is right opposite the entrance to my old IBM office on Braker Lane, proving once again that old adage, as one door closes, another opens.

More jobs news

We are making great progress on filling out the teams. My 2nd pilot technology program started with a bang last week, building an embedded processor stack based on ServiceMix, and my 3rd pilot, to test some key technologies like AMQP and Cassandra, is taking shape. However, we need to backfill some of the work I’ve been doing, as well as the work of the consultants we’ve had on staff.

Amongst the vacancies we have open is “Senior SOA Architect – Enterprise Systems Management-1003LEFS”, as well as a “Senior Software Engineer, Systems Management-10069ZNS”.

A good place to get a list of Dell jobs in Round Rock is here on the cnn.com website. If you are interested in working with some of our recent acquisitions out on the west coast, including Scalent (Dell AIM) or KACE, check out this link.

Got ServiceMix?

If you’ve been keeping an eye on the news and job listings at Dell, you’ll have seen a number of positions open up over the last three months for Java and service bus developers, not to mention our completed acquisition of Scalent. We are busy working on the first release of Dell’s “soup to nuts” virtualization management, orchestration and deployment software, one of the core technologies of which is Apache ServiceMix.

One of the open positions we’ve got is for a Senior Software Engineer with solid ServiceMix skills from a programming perspective. This job listing is the position; the job description and skills will be updated over the next few days. But if you’d like to join the team architecting, designing and programming Dell’s first real software product, one that aims to make the virtual data center easy to use, as well as open, capable and affordable to run, go ahead and apply now.

If you make it through the HR process, I’ll see you at the interview…

EMC World – standards?

Tucci and Maritz at EMC World 2009

I’ve been attending the annual EMC World conference in Orlando this week. A few early comments: there has been a massive 64,000-foot shift to cloud computing in the messaging, but less so at ground level. There have been one or two technical sessions, but none on how to implement a cloud, put data in a cloud, or manage data in a cloud. Maybe next year?

Yesterday in the keynote, Paul Maritz, President and CEO of VMware, said that VMware is no longer in the business of individual hypervisors but in stitching together an entire infrastructure, in a single sentence laying out clearly where they are headed, if it wasn’t clear before. In his keynote this morning, Mark Lewis, President of the Content Management and Archiving Division, was equally clear about the future of information virtualization, talking very specifically about federation and distributed data with policy management. He compared that to a consolidated, centralized vision which, he clearly said, hadn’t worked. I liked Lewis’s vision for the EMC Documentum xCelerated Composition Platform (xCP) as a next-generation information platform.

However, so far this week, and especially after this afternoon’s “Managing the Virtualized Data Center” BOF, where I asked the first and last questions on standards (neither of which got a decent discussion), there has been little real mention of standards or openness.

Generally, while vendors like to claim standards compliance and involvement, they don’t like standards. Historically, standards have tended to slow down implementation. This hasn’t been the case with some of the newer technologies, but at least some level of openness is vital to allow fair competition, and competition almost always drives down end-user costs.

Standards are of course not required if you can depend on a single vendor to implement everything you need, as you need it. However, as we’ve seen time and time again, that just doesn’t work: something gets left out, doesn’t get done, or gets a low priority from the implementing vendor while it’s a high priority for you. Stalemate.

I’ll give you an example: you are getting recoverable errors on a disk drive. Maybe it’s directly attached; maybe it’s part of a SAN or NAS. If you need to run multiple vendors’ server and/or storage virtualization, who is going to standardize the error reporting, logging, alerting, etc.?

The vendors will give you one of a few canned answers:

1. It’s the hardware vendor’s job (i.e., they pass the buck).
2. They’ll build agents that can monitor this for the most popular storage systems (i.e., you are dependent on them, and they’ll do it for their own storage/disks first).
3. They’ll build a common interface through which they can consume the events (i.e., you are dependent on the virtualization vendor AND the hardware vendor to cooperate).
4. They are about managing across the infrastructure for servers, storage and network (i.e., they are dodging the question).
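To make the disk-error example concrete, here is a hypothetical sketch (all class and field names are illustrative, not taken from any real standard or product) of what a vendor-neutral error event could look like. If something like this were standardized, a management layer could consume alerts from any vendor through one schema, rather than depending on per-vendor agents:

```java
import java.util.Map;

// Hypothetical vendor-neutral disk-error event; the field names are invented
// for illustration, not drawn from any actual specification.
record DiskErrorEvent(String vendor, String deviceId, String severity, long recoverableCount) {

    // Normalize a vendor-specific report (here modeled as a simple key/value
    // map) into the common form. Each vendor would need its own mapping.
    static DiskErrorEvent fromVendorReport(Map<String, String> report) {
        return new DiskErrorEvent(
            report.getOrDefault("mfr", "unknown"),
            report.getOrDefault("dev", "unknown"),
            report.getOrDefault("sev", "info"),
            Long.parseLong(report.getOrDefault("recoverable_errors", "0")));
    }
}

public class EventSketch {
    public static void main(String[] args) {
        // A vendor-shaped report mapped into the one common schema.
        Map<String, String> vendorA = Map.of(
            "mfr", "VendorA",
            "dev", "sda",
            "sev", "warning",
            "recoverable_errors", "12");

        DiskErrorEvent e = DiskErrorEvent.fromVendorReport(vendorA);
        System.out.println(e.vendor() + " " + e.deviceId() + ": "
            + e.recoverableCount() + " recoverable errors (" + e.severity() + ")");
    }
}
```

The design point is that the standardization burden sits in one small adapter per vendor (`fromVendorReport` here), while everything downstream, logging, alerting, and escalation, is written once against the common event.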

There are literally hundreds of examples like this if you run anything except a dedicated, single-vendor stack of hardware plus virtualization. This seems to be where Cisco and Oracle are lining up. I don’t think this is a fruitful direction and can’t really see it as advantageous to customers or vendors. That’s notwithstanding the cloud providers, Google, Amazon, et al., where you don’t deal with hardware at all but have a whole separate set of issues; there, standards and openness are equally important.

In an early morning session today, Tom Maguire, Senior Director of Technology Architecture, Office of the CTO, presented EMC’s Service-Oriented Infrastructure Strategy: Providing Services, Policies, and Architecture Models. Tom talked about loose coupling, and about defining stateful and REST interfaces that would allow EMC to build products that “snap” together and don’t require a services engagement to integrate them. He also talked about moving away from “everyone discovering what they need” to a common, federated fabric.

This is almost as powerful a message as that of Lewis or Maritz, but it will get little or no coverage. If EMC can deliver and execute on this, and do it in a de jure or de facto published-standard way, it will indeed give them a powerful platform that companies like Dell can partner with, bringing innovation and competitive advantage to our customers.

Robin Bloor asks what is dynamic infrastructure

Over on his “have mac will blog” blog, Robin Bloor asks What Does IBM Mean By Dynamic Infrastructure?

Rather than burden his comments section with a long trail of corrections based on my suppositions, I thought I’d post my answer here and correct it as appropriate.

Robin, you might want to google for “IBM Dynamic Infrastructure for mySAP” or similar, or go look at this redbook. There is also a useful overview PowerPoint from Gerd Breiter, one of the architects and development leads, here.

I’d guess the architects/development team for IDI have been moved internally from Systems Group to Tivoli. IDI was an early implementation of on demand and was developed in Boeblingen. As initially envisaged, IDI was a Systems Group initiative; the bulk of the early implementation was done before on demand, and then carried over and modified as and when possible.

Of course, I’m sure that now this mission is over in Tivoli, the thinking and delivery will have evolved. Obviously cloud computing has become a major focus area in the industry since then, and would have to be factored in.

Unless you know better 😉


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and member of the IBM Academy of Technology. I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
