This is an unashamed take from an Arstechnica.com article, and I certainly can’t take credit for the term. I’m just back from a week of touring around Silicon Valley talking about our thinking for Dell 12G servers, to Dell customers and especially to those who take our products and integrate them into their own product offerings. It was a great learning experience, and if you took time to see me and the team, thank you!
One of the more interesting discussions, both amongst the Dell team and with customers and integrators, was around the concept of physicalization. Instead of building bigger and faster servers based around more and more cores and sockets, why not have a general-purpose, low-power, low-complexity physical server that is boxed up, aggregated and multiplexed into a physicalization offering?
For example, as discussed in the Arstechnica article, a very simplified, Atom-based server eliminates many of the older software and hardware additions that make motherboards more complex and more expensive to build, which, together with the reduced power and heat, makes them even more reliable. Putting twelve or more in a single 2U chassis makes a lot of sense.
These servers also typically don’t need a lot of complex virtualization software to be used to the full. That might sound like heresy in these days when virtualization is assumed and is the major driver behind much of the marketing spend, and much of the technology spend.
So what’s driving this? Mostly, if you think about it, the complexity in today’s x86 marketplace, and in the mainframe and Power/UNIX marketplaces, comes from complex software and systems management. That complexity is driven by two needs.
- Server utilization – in order to use the increasing processor power, sockets and cores, you need to virtualize the server and split it into consumable, useful chunks. This would normally require a complex discussion about multi-threaded programming and its difficulties, but I’ll skip that this time. Net net, there are very few workloads and applications that can use the effective capacity offered by current top-end Intel and AMD x86 processors.
- Software pricing – since the hardware vendors, including Dell, sell these larger virtualized servers as a great business opportunity to simplify IT and server deployment by consolidating disparate, and often distributed, server workloads onto a single, larger, more manageable server, the software vendors want in on the act. Otherwise they lose out on revenue as the customer deploys fewer and fewer servers. One ploy to combat this is to charge by core or socket. Yet their software often does little, and sometimes nothing, to exploit these features; they charge, well, because they can. In a virtualized server environment, the same is true. The software vendors don’t exploit the virtualization layer; heck, in some cases they are even reluctant to support their software running in this environment and require customers to recreate any problems in a non-virtualized environment before looking at them.
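To make the per-core pricing point concrete, here is a back-of-the-envelope sketch. All of the prices, core counts and node counts below are illustrative assumptions, not figures from any real vendor’s price list; the point is only that per-core charging can penalize a big consolidated box relative to a handful of small physicalized nodes sized to the workload.

```python
# Hypothetical per-core licensing comparison; every number here is an
# illustrative assumption, not real vendor pricing.

PER_CORE_LICENSE = 1000  # assumed license cost per core, in dollars


def license_cost(servers: int, cores_per_server: int,
                 per_core: int = PER_CORE_LICENSE) -> int:
    """Total software cost when the vendor charges per licensed core."""
    return servers * cores_per_server * per_core


# One big virtualized 2-socket, 16-core server: every core is licensed,
# whether or not the workload can use that capacity.
big_box = license_cost(servers=1, cores_per_server=16)

# Three dual-core Atom-class nodes out of a 2U "physicalized" chassis,
# licensing only the nodes the workload actually runs on.
small_nodes = license_cost(servers=3, cores_per_server=2)

print(big_box)     # 16000
print(small_nodes) # 6000
```

Under these made-up numbers the consolidated server costs more than twice as much to license for the same workload, which is exactly the kind of arithmetic that pushes customers toward smaller physical boxes.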
And so it is that physicalization is starting to become attractive. I’ve discussed both the software pricing and virtualization topics many times in the past. In fact, I’ve expressed my frustration that software pricing still seems to drive our industry and, more importantly, our customers to do things that they otherwise wouldn’t. Does your company make radical changes to your IT infrastructure just to get around uncompetitive and often restrictive software pricing practices? Is physicalization interesting or just another dead-end IT trend?