Physicalization at work – software pricing at bay

This is an unashamed take from an Arstechnica.com article, and I certainly can't take credit for the term. I'm just back from a week of touring around Silicon Valley talking about our thinking for Dell 12G servers, to Dell customers and especially to those who take our products and integrate them into their own product offerings. It was a great learning experience, and if you took the time to see me and the team, thank you!

One of the more interesting discussions, both amongst the Dell team and with the customers and integrators, was around the concept of physicalization. Instead of building bigger and faster servers based around more and more cores and sockets, why not have a general-purpose, low-power, low-complexity physical server that is boxed up, aggregated and multiplexed into a physicalization offering?

For example, as discussed in the Ars Technica article, take a very simplified, Atom-based server and eliminate many of the older software and hardware additions that make motherboards more complex and more expensive to build; that, together with the reduced power and heat, also makes them more reliable. Putting twelve or more of them in a single 2U chassis makes a lot of sense.
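
To make the density argument concrete, here is a minimal back-of-the-envelope sketch in Python. The wattage figures and node count are my own illustrative assumptions, not measured numbers from Dell or the Ars Technica piece:

```python
# Back-of-the-envelope comparison: many small, low-power nodes packed into a
# 2U chassis versus one large two-socket 2U server. All numbers here are
# illustrative assumptions, not measured figures.

ATOM_NODE_WATTS = 30       # assumed draw of a simplified Atom-class node
NODES_PER_2U = 12          # the "twelve or more in a 2U" idea from the post
BIG_SERVER_WATTS = 450     # assumed draw of a loaded two-socket 2U server

physicalized_watts = ATOM_NODE_WATTS * NODES_PER_2U

print(f"{NODES_PER_2U} small nodes: {physicalized_watts} W total, "
      f"one isolated workload per node")
print(f"1 big server:   {BIG_SERVER_WATTS} W total, "
      f"shared by however many VMs it can usefully host")
```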

They also typically don't need a lot of complex virtualization software to make full use of the servers. That might sound like heresy in these days when virtualization is assumed, and is the major driver behind much of the marketing spend and much of the technology spend.

So what's driving this? Mostly, if you think about it, the complexity in the x86 marketplace these days, and in the mainframe and Power/UNIX marketplaces too, comes down to complex software and systems management. That complexity is driven by two needs.

  1. Server utilization – in order to use the increasing processor power, sockets and cores, you need to virtualize the server and split it into consumable, useful chunks. That would normally require a complex discussion about multi-threaded programming and its complexity, but I'll ignore that this time. Net net, there are very few workloads and applications that can use the effective capacity offered by current top-end Intel and AMD x86 processors.
  2. Software pricing – since the hardware vendors, including Dell, sell these larger virtualized servers as a great business opportunity to simplify IT and server deployment by consolidating disparate, and often distributed, server workloads onto a single, larger, more manageable server, the software vendors want in on the act. Otherwise they lose out on revenue as the customer deploys fewer and fewer servers. One ploy to combat this is to charge by core or socket, even though their software often does little, and sometimes nothing, to exploit those cores and sockets; they charge, well, because they can. In a virtualized server environment the same is true. The software vendors don't exploit the virtualization layer; heck, in some cases they are even reluctant to support their software running in that environment at all, and require customers to recreate any problems on a non-virtualized server before they will look at them. A rough sketch of the licensing arithmetic follows this list.
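
To illustrate the pricing pressure, here is a minimal sketch of the per-core licensing arithmetic. The per-core price, core counts and node counts are hypothetical assumptions chosen only to show the shape of the calculation, not any vendor's actual price list:

```python
# Hypothetical per-core licensing arithmetic: one big virtualized server
# versus a handful of small physicalized nodes. Prices and core counts are
# assumptions for illustration only.

PRICE_PER_CORE = 2_000          # assumed licence cost per core

# Option A: one two-socket, 8-core-per-socket virtualized server.
# Per-core pricing means every core is licensed, whether or not the
# software actually exploits them.
big_server_cores = 2 * 8
big_server_cost = big_server_cores * PRICE_PER_CORE

# Option B: physicalized nodes, single-socket, dual-core each. Only the
# nodes that actually run the licensed software need a licence.
licensed_nodes = 6
cores_per_node = 2
small_node_cost = licensed_nodes * cores_per_node * PRICE_PER_CORE

print(f"Big virtualized box: {big_server_cores} licensed cores -> ${big_server_cost:,}")
print(f"{licensed_nodes} small nodes: {licensed_nodes * cores_per_node} licensed cores -> ${small_node_cost:,}")
```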

And so it is that physicalization is starting to become attractive. I’ve discussed both the software pricing and virtualization topics many times in the past. In fact, I’ve expressed my frustration that software pricing still seems to drive our industry and, more importantly, our customers to do things that they otherwise wouldn’t. Does your company make radical changes to your IT infrastructure just to get around uncompetitive and often restrictive software pricing practices? Is physicalization interesting or just another dead-end IT trend?

4 Responses to “Physicalization at work – software pricing at bay”


  1. Ewan June 30, 2009 at 3:13 pm

    You're right, physicalisation is a big opportunity, but one advantage of virtualisation is resource sharing, which this approach pretty much rules out.

    Unless you need 100 or 1,000 boxes all with approximately the same level of memory and CPU usage, even the most low-end server is going to end up either over- or under-utilised.

  2. cathcam July 21, 2009 at 11:23 am

    Ewan, of course I agree. For the average small-to-medium-sized business, a couple of large x86 servers, fully virtualized, providing failover for each other as well as workload management and workload balancing between the servers, is ideal. However, this is relatively hard to do well, and is subject to increasingly complex software.

    I'm sure VMware and Hyper-V will get there, but they are not there yet. There is still lots of room to innovate on top of the hypervisors to provide this function. We've been thinking about and working on some trick design stuff that allows server messaging at the firmware level to track and help in this respect.

    I have to say, though, if you take a step back and look at the requirements of many organisations that need a large collection of rack-'em and stack-'em x86 servers, trying to consolidate those into a few big boxes is way too complicated and opens up too many areas for problems.

    Instead, if you could get all the benefits of consolidation and higher utilization, but reduce the number of moving parts, the overall energy consumption, and so on, by consolidating into a large box that is subdivided into smaller individual servers, it seems to me that would be much simpler. Especially since in either environment, virtually or physically consolidated, you still need workload dispatchers, and much of the same network infrastructure and duplication.

    I'm looking now at what else could be eliminated from the typical server hardware box. It seems to me the next thing to go is the local keyboard, video and mouse, which is kind of redundant when it's only needed for pre-boot configuration and BIOS set-up.

  3. Peter A July 22, 2009 at 6:41 am

    I found an IBM tech guru who’s a household name. Literally — he has a tweeting house 🙂 Andy Stanford-Clark is featured in NYT: http://is.gd/1HyTY


  1. People Over Process » Links for June 29th through June 30th Trackback on June 30, 2009 at 1:59 pm





