Archive for the 'EMC' Category

Dell and EMC together

I’ve been asked a few times about the Dell/EMC merger/acquisition. I can say nothing, not because of financial or security regulations, but because I know nothing at all. Although it was clear some changes were afoot at Dell, the announcement came as a surprise to me.

A couple of things in the industry analysis are amusing though. The most amusing is the quotes coming out of other industry organizations and their CEOs. This one on the Register, about Meg Whitman at HP, is a classic of its kind, and then there is this one from Dietzen, the CEO at Pure.

This move comes out of ‘weakness, not strength’, claims CEO Dietzen

Wouldn’t it be great if, instead of this entirely predictable FUD, a confident CEO said something to the effect of:

The acquisition will be challenging, but we welcome the increased competition and are sure customers and businesses will recognize and continue to benefit from the great products we already have, and those on our roadmap.

Of course no one would ever actually say that: one, it doesn’t make headlines, and two, because well…

The other thing that’s been disappointing is that other Dell trope: you can’t use Apple products. See, as an example, The Register:

I have one thing to say to MacBook users at EMC: Whoops

I have to say, I’m always surprised when I hear this kind of thing. Seriously, while I’m sure Michael Dell would prefer everyone use a Dell tablet or laptop, I’m sure he’d rather have the most talented, productive people, and being acquired and having to use new apps is enough of a productivity hit. Why on earth would he want to make it worse by enforcing a move of hardware, software and app paradigms? FYI, there are a number of people in Dell Software Group, especially those who came in with the Quest acquisition, who have been using Apple products ever since.

Plug-in, turn off..


Work on the full VIS Unified automation and orchestration engine continues apace here in Round Rock. One of the first fruits of the dev team’s efforts was announced this week: the Dell Management Plug-In for VMware vCenter.

In essence, rather than requiring an additional console to manage and monitor hardware, it integrates the management for Dell PowerEdge servers directly into VMware vCenter, so it can be accessed from there.

We’ll be leveraging this technology as a core component in VIS Unified; it’s a solid delivery of a well-thought-through programming spec and has already received numerous positive reviews. There is a good review here, along with some screenshots that will give you an idea of what the product does. Any questions, let me know.

Last weeks roundup

Yep, the Texas Rodeo has left town (literally, it started leaving Monday), and so it’s time for a roundup. I don’t usually post link lists (yes, my old mainframe chums will smile quietly at that), but since I wasn’t at the Dell PowerEdge launch in San Francisco last week, I thought these links would give a little more clarity as well as some perspective.

First up, at a corporate level, and in one of his last blog posts as principal IT adviser at Illuminata, Gordon Haff wrote this blog post on the “Real Dell 2.0” and his thoughts on the general direction.

Next is a Server Watch overview of the product content and direction from Andy Patrizio, a senior editor there.

And then, if the proof is in the pudding, here is a blog entry from Dave Graham, a Technical Consultant and Cloud Customer Advocate with EMC Corporation, who excitedly took delivery of a Dell PowerEdge C6100 the day after the announcement in San Francisco. You can see some pictures and an early write-up in the entry, called “Something cloudy this way comes”.

Finally, I underplayed the importance of the partner side of the announcement, but then what do you expect, I’m a product guy. Dell’s own Barton George, who was live blogging, tweeting, and on the Dell Yammer group live from the event, posted the following as part of his blog entry on the event.

The Cloud Partner Program: Working with cloud ISVs we will be offering easy-to-buy and deploy cloud solutions and blueprints optimized for and validated on Dell platforms. The first three partners we are announcing are Aster Data (providing web analytics), Canonical (offering an open source Infrastructure as a Service private cloud) and Greenplum (self-service data warehousing). On the Evolutionary cloud side we will continue to work with VMware and Microsoft, and stay tuned for news on what’s happening on the Windows Azure front :)

Appliances – Good, bad or virtual ?

So, in another prime example of “Why do analysts’ blogs make it so hard to have a conversation?”, Gordon Haff of Illuminata today tweeted a link to a new blog post of his on appliances. No comments allowed, no trackbacks provided.

He takes Chuck Hollis’s (EMC) post and opines various positions on it. It’s not clear what the notion of “big appliance” is as Chuck uses it. Personally, I think he’s talking about solutions. Yes, I know it’s a fine line, but a large all-purpose data mining solution with its own storage, own server, own console, etc. is no more an appliance than a kitchen is. The kitchen will contain appliances, but it is not one itself. If that’s not what Chuck is describing, then his post has some confusion; very few organizations will have a large number of these “solutions”.

On the generally accepted view of appliances, I think both Gordon and Chuck are being a little naive in thinking that all compute appliances can be made virtual and run on shared-resource machines.

While at IBM I spent a lot of time on, and learned some valuable lessons about, appliances. I was looking at the potential for the first generation of IBM-designed WebSphere DataPower appliances. At first it seemed to me, even three years ago, that turning them into a virtual appliance would be a good idea. However, I’d made the same mistake that Hollis and Haff make: assuming that the type of processing done in an appliance can be transparently replaced by the onward march of Moore’s Law on Intel and IBM Power processors.

The same can be said for most appliances I’ve looked at. They have unique hardware designs, which often include numerous specialized processing functions, such as encryption, key management and even environmental monitoring. An appliance’s real value-add, though, is that it is designed with a very specific market opportunity in mind. That design requires complex workload analysis, reviewing the balance between general-purpose compute, graphics, security, I/O and much more, and producing a balanced design and, most importantly, a complete user experience to support it. That’s often the key.

Some appliances offer the sort of hardware-based security and tamper protection that can never be replaced by general-purpose machines.

Yes, Hollis and Haff make a fair point that these appliances need separate management, but the real point is that many of these appliances need NO management at all. You set them up, then run them. Because the workload is tested and integrated, the software rarely, if ever, fails. Since the hardware isn’t generally extensible (aka, as Chuck would have it, you are locked into what you buy), updating drivers and introducing incompatibility isn’t an issue as it is with most general-purpose servers.

As for trading one headache for another, while it’s a valid point, my experience so far with live migration and pools of virtual servers, network switches, SAN setup etc. is that there, too, you are trading one headache for another. In a limited fashion it’s fairly straightforward to do live migration of a virtual workload from one system to another. Doing it at scale, which is what is required if you’ve reached the “headache” point that Chuck is positing, is far from simple.

Chuck closes his blog entry with:

Will we see a best-of-both-worlds approach in the future?

Well, I’d say that was more than likely; in fact it’s happening and has been for a while. The beauty of an appliance is that the end user is not exposed to the internal workings. They don’t have to worry about most configuration options and setup, management is often minimised or eliminated, and many appliances today offer “phone home”-like features for upgrade and maintenance. I know, because we build many of them here at Dell for our customers, including EMC, Google etc.

One direction we are likely to see is that, in the same current form factor, an appliance will become fault tolerant by replicating key parts of the hardware, virtualizing the appliance and running multiple copies of the appliance workload within a single physical appliance, all while still delivering those workload- and deployment-specific features and functions. This in turn reduces the number of physical appliances a customer will need. So the best of both worlds, although I suspect that’s not what Chuck was hinting at.

While there is definitely a market for virtual software stacks (complete application and OS instances), presuming that you can move all hardware appliances to this model is missing the point.

Let’s not forget, SANs are often just another form of appliance, as are TOR/EOR network switches and things like the Cisco Nexus. Haff says that appliances have been around since the late 1990s; well, at least as far as I can recall, in the category of “big appliances”, the IBM Parallel Query Server, which ran a customized mainframe DB2 workload and attached to an IBM S/390 Enterprise Server, was around in the early 1990s.

Before that, many devices were in fact sold as appliances; they were just not called that, but by today’s definition, that’s exactly what they were. My all-time favorite was the IBM 3704, part of the IBM 3705 communications controller family. The 3704 was all about integrated function and a unique user experience, with, at the time (1976), an almost space-age touch panel user interface.

IBM update on Power 7

For those interested, IBM has apparently revealed some details of the upcoming Power 7 processors. Gordon Haff, an analyst, has written two blog entries on aspects of the disclosure meeting: the first on the size, capacity and performance, and the second on the design, threading, cache etc. Nice to see Gordon picked up on x86 Transitive; no word on any new developments though.

I suspect that, given the state of the industry now, the Power Systems folks are feeling pretty pleased with the decisions we made on the threading design and processor threading requirements over two years ago; no point in chasing rocks if you have virtualization. Best not rest on your laurels though, guys. You’ve got some really significant software pricing issues to deal with, and it will be interesting to see if you took my advice on an intentional architecture for Power server platform management.

In an interesting, karmic sort of way, I’m doing an “Avoiding Accidental Architecture” pitch here at Dell this afternoon; I’ll be using the current Power 6 state of affairs as a good, or rather bad, example. Thanks as always to Tom Maguire of EMC and Grady Booch at IBM for the architecture meme.

IBM Big Box quandary

In another follow-up from EMC World, the last session I went to was “EMC System z, z/OS, z/Linux and z/VM”. I thought it might be useful to hear what people were doing in the mainframe space, although it is largely unrelated to my current job. It was almost 10 years to the day since I was at IBM writing the z/Linux strategy, hearing about early successes etc., and, strangely, current EMC CTO Jeff Nick and I were engaged in vigorous debate about implementation details of z/Linux the night before we went and told SAP about IBM’s plans.

The EMC World session demonstrated that the more things change, the more they stay the same. It also reminded me how borked the IT industry is, in that we mostly force customers to choose by pricing rather than function. 10-12 years ago, z/Linux on the mainframe was all about giving customers new function, a new way to exploit the technology that they’d already invested in. It was of course also meant to further establish the mainframe’s role as a server consolidation platform through virtualization and high levels of utilization. (1)

What I heard were two conflicting and confusing stories; at least, they should be for IBM. The first was a customer who was moving all his Oracle workloads from a large IBM Power Systems server to z/Linux on the mainframe. Why? Because the licensing on the IBM Power server was too expensive. Using z/Linux and the Integrated Facility for Linux (IFL) allows organizations to do a cost avoidance exercise. Processor capacity on the IFL doesn’t count towards the total installed general processor capacity, and hence doesn’t bump up the overall software licensing costs for all the other users. It’s a complex discussion, and that wasn’t the purpose of this post, so I’ll leave it at that.
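To make the cost-avoidance logic concrete, here is a minimal sketch with entirely made-up numbers; real mainframe software pricing involves MSU tiers, sub-capacity reporting and per-product terms, none of which are modelled here. The only point it illustrates is that capacity added on an IFL stays out of the general-purpose base that capacity-based software charges are calculated against.

```python
# Hypothetical figures purely for illustration -- not real IBM pricing.
GENERAL_CP_MSU = 400   # installed general-purpose capacity (MSUs)
LINUX_WORKLOAD_MSU = 200  # capacity the Linux workload needs
PRICE_PER_MSU = 1000   # assumed monthly software charge per general MSU

# Run the Linux workload on general CPs: it inflates the capacity base
# that every capacity-priced product on the box is charged against.
cost_on_general_cps = (GENERAL_CP_MSU + LINUX_WORKLOAD_MSU) * PRICE_PER_MSU

# Run it on IFL engines instead: IFL capacity is excluded from the
# general-purpose base, so the charge for everyone else is unchanged.
cost_on_ifls = GENERAL_CP_MSU * PRICE_PER_MSU

print(f"Workload on general CPs: {cost_on_general_cps}")
print(f"Workload on IFLs:        {cost_on_ifls}")
print(f"Cost avoided:            {cost_on_general_cps - cost_on_ifls}")
```

With these invented numbers the avoided charge is the entire 200 MSUs of workload capacity priced at the general rate, which is why the exercise is attractive even after the migration effort.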

This might be considered a win for IBM, but actually it was a loss. It’s also a loss for the customer. IBM lost because the processing was being moved from its growth platform, IBM Power Systems, to the legacy System z. It’s good for z, since it consolidates its hold in that organization, or probably does. Once the customer has done the migration and conversion, it will be interesting to see how they feel the performance compares. IBM often refers to the IFL and its close relatives, the zIIP and zAAP, as specialty engines, giving the impression that they perform faster than the normal System z processors. It’s largely an urban myth though; these “specialty” engines really only deliver the same performance, they are just measured, monitored and priced differently.

The customer lost because they’ve spent time and effort moving from one architecture to another, really only to avoid software and server pricing issues. While the System z folks will argue the benefits of their platform, and I’m not about to “dis” them, the IBM Power server can actually deliver a good enough implementation as to make the difference largely irrelevant.

The second conflicting story I heard came from EMC themselves. The second main topic of the session was a discussion about moving some of the EMC Symmetrix products off the mainframe, as customers have reported that they use too much mainframe capacity to run. The guys from EMC were thinking of moving the products’ function to commodity x86 processors and then linking those via high-speed networking into the mainframe. This would move the function out of band and save mainframe processor cycles, which in turn would avoid an upgrade, which in turn would avoid bumping up the software costs for all users.

I was surprised how quickly I interjected and started talking about WLM SRM enclaves, moving the EMC apps to run on z/Linux, etc. This surely makes much more sense.

I was left, though, with a definite impression that there are still hard times ahead for IBM in large non-x86 virtualized servers. Not that they are not great pieces of engineering; they are. But getting to grips with software pricing once and for all should really be their prime focus, not a secondary or tertiary one. We were working towards pay-per-use once before; time to revisit, methinks.

(1) Spot the irony of this statement given the preceding “Nano, Nano” post!

EMC World – standards?

Tucci and Maritz at EMC World 2009


I’ve been attending the annual EMC World conference in Orlando this week. A few early comments: there has been a massive 64,000-ft shift to cloud computing in the messaging, but less so at ground level. There have been one or two technical sessions, but none on how to implement a cloud, how to put data in a cloud, or how to manage data in a cloud. Maybe next year?

Yesterday in the keynote, Paul Maritz, President and CEO of VMware, said that VMware is no longer in the business of individual hypervisors but in stitching together an entire infrastructure, a single sentence laying out clearly where they are headed, if it wasn’t clear before. In his keynote this morning, Mark Lewis, President, Content Management and Archiving Division, was equally clear about the future of information virtualization, talking very specifically about federation and distributed data, with policy management. He compared that to a consolidated, centralized vision which, he clearly said, hadn’t worked. I liked Lewis’s vision for the EMC Documentum xCelerated Composition Platform (xCP) as a next-generation information platform.

However, so far this week, and especially after this afternoon’s “Managing the Virtualized Data Center” BOF, where I asked the first and last questions on standards and neither got a decent discussion, there has been little real mention of standards or openness.

Generally, while vendors like to claim standards compliance and involvement, they don’t like standards. Historically, standards tend to slow down implementation. This hasn’t been the case with some of the newer technologies, but at least some level of openness is vital to allow fair competition, and competition almost always drives down end user costs.

Standards are of course not required if you can depend on a single vendor to implement everything you need, as you need it. However, as we’ve seen time and time again, that just doesn’t work: something gets left out, doesn’t get done, or gets a low priority from the implementing vendor while it’s a high priority for you. Stalemate.

I’ll give you an example: you are getting recoverable errors on a disk drive. Maybe it’s directly attached; maybe it’s part of a SAN or NAS. If you need to run multiple vendors’ servers and/or storage and virtualization, who is going to standardize the error reporting, logging, alerting etc.?

The vendors will give you one of a few canned answers: 1. It’s the hardware vendor’s job (i.e. they pass the buck). 2. They’ll build agents that can monitor this for the most popular storage systems (i.e. you are dependent on them, and they’ll do it for their own storage/disks first). 3. They’ll build a common interface through which they can consume the events (i.e. you are dependent on the virtualization vendor AND the hardware vendor to cooperate). Or 4. They are about managing across the infrastructure for servers, storage and network (i.e. they are dodging the question).
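To show what option 3 actually commits everyone to, here is a hypothetical sketch of a vendor-neutral disk-error event. The schema, field names and the adapter function are all invented for illustration; no such common standard existed, which is exactly the problem. Every hardware vendor would have to write and maintain an adapter like this, and agree on the schema first.

```python
from dataclasses import dataclass

@dataclass
class DiskErrorEvent:
    """An invented, vendor-neutral error event schema."""
    vendor: str
    device_id: str
    severity: str   # "recoverable" or "unrecoverable"
    message: str

def normalize_vendor_event(raw: dict) -> DiskErrorEvent:
    # Each vendor would need its own adapter like this one, translating
    # its proprietary log record into the agreed common schema. The raw
    # field names ("dev", "code", "text") are made up for this sketch.
    return DiskErrorEvent(
        vendor="ExampleStorageCo",
        device_id=raw["dev"],
        severity="recoverable" if raw["code"] < 100 else "unrecoverable",
        message=raw["text"],
    )

event = normalize_vendor_event({"dev": "sda1", "code": 42, "text": "sector remapped"})
print(event.severity)  # recoverable
```

The code itself is trivial; the hard part, and the reason I keep asking about standards, is getting every server, storage and virtualization vendor to agree on that one schema and actually emit it.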

There are literally hundreds of examples like this if you need anything except a dedicated, single-vendor stack of hardware plus virtualization. This seems to be where Cisco and Oracle are lining up. I don’t think it is a fruitful direction, and I can’t really see it as advantageous to customers or vendors. That’s notwithstanding the cloud (Google, Amazon et al.), where you don’t deal with hardware at all but have a whole separate set of issues; standards and openness are equally important there.

In an early morning session today, Tom Maguire, Senior Director of Technology Architecture, Office of the CTO, presented EMC’s Service-Oriented Infrastructure Strategy: Providing Services, Policies, and Architecture Models. Tom talked about loose coupling, and about defining stateful and REST interfaces that would allow EMC to build products that “snap” together and don’t require a services engagement to integrate them. He also talked about moving away from “everyone discovering what they need” to a common, federated fabric.

This is almost as powerful a message as that of Lewis or Maritz, but it will get little or no coverage. If EMC can deliver and execute on this, and do it in a de jure or de facto published standard way, it will indeed give them a powerful platform that companies like Dell can partner in, bringing innovation and competitive advantage for our customers.

About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and member of the IBM Academy of Technology. I'm an information technology optimist.

I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
