Posts Tagged 'fabric'

IBM AND JUNIPER NETWORKS FORM STRATEGIC TECHNOLOGY RELATIONSHIP

A funny thing happened on the way to the forum…

Ahh yes, Nathan Lane and Frankie Howerd: they represent the differences between the UK and the US, in many ways so different, but in many ways so alike. I’ve been bemoaning the fact that I can’t blog about most of what I’ve been doing for the last five years, as it’s all design and development work, all considered by IBM to be confidential, and since none of it is open source, it’s hard to point to projects or even give hints.

And so it is with the project I’m currently working on. Only this time, not only is it IBM Confidential, it is being worked on with a partner and based on a lot of their intellectual property, so there is even less chance to discuss it in public. I’ve been doing customer validation sessions over the last three months and got concrete feedback on key data center directions around data center fabric, 10Gb Ethernet, Converged Enhanced Ethernet (CEE) and more. There are certainly big gains to be made in reducing capital expenditure and operational expenditure in this space, but that’s really only the start. The real benefit comes from an enabled fabric that, rather than forcing centralization around a server, which is much of what we’ve been doing for the last 20 years, or around an ever more complex switch, which is where Cisco have been headed, is in and of itself the hub; the switches just provide any-to-any connectivity and low latency, enabling both existing and new applications, virtualized or otherwise, to exploit the fabric.

So following one of my customer validation sessions in the UK, I was searching around on the Internet for a link, and I came across this one. It discusses a strategic partnership between IBM and Juniper for custom ASICs for a new class of Internet backbone devices, only it is from 1997, who’da guessed. A funny thing happened on the way to the forum…

Spaghetti cabling


Racks and Racks of Spaghetti, photo by: Andrew McKaskill

As always, I’ve been focusing on the positive and forward-looking aspects of unified fabrics for data centers. I got a few interesting emails after my last blog entry, about the problems people have now that need solving: not least the cost, reliability issues and sheer complexity caused by the current situation.

One of the emails included a link to this blog entry on vibrant.com, which has a great collection of cable-mess pictures. In his email to me, Chris wrote: “most of our racks are carefully organised and formally tied off in bundles. Server replacement is relatively easy if you just want to exchange one with another that fits in the same space. The problem comes when you need to reconfigure a few servers, add some appliances, maybe remove an email backup or firewall appliance and relocate in a different rack, you undo the cable ties and everything starts falling apart. While none of our rows looks this bad, many of the racks within the rows end up looking like this.”

He goes on to discuss some of the related problems this causes, and the frequent complete lack of momentum in solving them because of how labor intensive and expensive it can be, in both cost and downtime, to deal with these issues. Another email included a link to this discussion forum on TechRepublic.

I have to admit, this is an area I have little or no experience with. Nancy has yet to take me on a tour of the test bed we use across the hall for the high-end Power systems, and as I’ve been locked away working on design for most of the past five years, I’ve not witnessed the explosive growth in large-scale data centers. Since I’m doing customer validation sessions now, don’t be surprised if, when I come to your office, I ask to see the machine room.

Any-to-any fabric

I’ve spent the last few months working on IBM’s plans for next-generation data center fabric. It is a fascinating area, one ripe for innovation and some radical new thinking. When we were architecting on demand, and even before that, working on the Grid Toolbox, one of the interesting future options was InfiniBand, or IB.

What made IB interesting was that you could put logic in either end of the IB connection, turning a standard IB connection into a custom switched connector by dropping your own code into the host channel adapter (HCA) or target channel adapter (TCA). Anyway, I’m getting off course. The point was that we could use an industry-standard protocol and connection to do some funky platform-specific things, like specific cluster support, quality of service assertion, or security delegation, without compromising the standard connection. This could be done between racks at the same speed and latency as between systems in the same rack. It could have opened up a whole new avenue of applications and would have helped to distribute work inside the enterprise, hence the Grid hookup. It never played out that way, for many reasons.
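To make the quality of service idea a little more concrete, here is a minimal sketch, purely my illustration and not anything from the project, of how an application can assert a traffic class over a standard IB connection using the stock libibverbs API, by setting the service level (SL) on a reliable-connected queue pair. The port number, the SL value and the zeroed-out remote LID/QPN/PSN are placeholder assumptions a real application would fill in after exchanging connection details out of band.

/* qos_sl_sketch.c: illustrative only. Build with: gcc qos_sl_sketch.c -libverbs */
#include <stdio.h>
#include <string.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no IB devices found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) { fprintf(stderr, "ibv_open_device failed\n"); return 1; }
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);

    struct ibv_qp_init_attr init;
    memset(&init, 0, sizeof(init));
    init.send_cq = cq;
    init.recv_cq = cq;
    init.qp_type = IBV_QPT_RC;              /* reliable connected queue pair */
    init.cap.max_send_wr = init.cap.max_recv_wr = 16;
    init.cap.max_send_sge = init.cap.max_recv_sge = 1;
    struct ibv_qp *qp = ibv_create_qp(pd, &init);
    if (!qp) { fprintf(stderr, "ibv_create_qp failed\n"); return 1; }

    /* Step 1: move the QP to INIT on port 1 (assumption: first port). */
    struct ibv_qp_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.qp_state = IBV_QPS_INIT;
    attr.pkey_index = 0;
    attr.port_num = 1;
    attr.qp_access_flags = 0;
    ibv_modify_qp(qp, &attr, IBV_QP_STATE | IBV_QP_PKEY_INDEX |
                             IBV_QP_PORT | IBV_QP_ACCESS_FLAGS);

    /* Step 2: the "QoS assertion". The address vector used for the
     * ready-to-receive transition carries the service level (SL), which
     * the subnet manager maps onto a virtual lane, i.e. a traffic class
     * the fabric honours end to end. Remote LID/QPN/PSN are placeholders
     * a real application would exchange out of band, so this call is
     * expected to fail in this standalone sketch. */
    memset(&attr, 0, sizeof(attr));
    attr.qp_state = IBV_QPS_RTR;
    attr.path_mtu = IBV_MTU_2048;
    attr.dest_qp_num = 0;                   /* placeholder: remote QP number */
    attr.rq_psn = 0;                        /* placeholder: remote PSN */
    attr.max_dest_rd_atomic = 1;
    attr.min_rnr_timer = 12;
    attr.ah_attr.is_global = 0;
    attr.ah_attr.dlid = 0;                  /* placeholder: remote LID */
    attr.ah_attr.sl = 4;                    /* hypothetical service level for this workload */
    attr.ah_attr.src_path_bits = 0;
    attr.ah_attr.port_num = 1;
    if (ibv_modify_qp(qp, &attr, IBV_QP_STATE | IBV_QP_AV | IBV_QP_PATH_MTU |
                                 IBV_QP_DEST_QPN | IBV_QP_RQ_PSN |
                                 IBV_QP_MAX_DEST_RD_ATOMIC | IBV_QP_MIN_RNR_TIMER))
        fprintf(stderr, "RTR transition failed (expected: remote details are placeholders)\n");

    ibv_destroy_qp(qp);
    ibv_destroy_cq(cq);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}

That separation is what made the “drop your own code into the HCA or TCA” idea attractive: the application keeps talking standard verbs, while the adapter decides what a given service level actually means on the wire.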

Over in the Cisco Datacenter blog, Douglas Gourlay is considering changes to his “theory” on server disaggregation and network evolution – he theorises that over time everything will move to the network, including memory. Remember, the network is the computer?

He goes on to speculate that “The faster and more capable the network the more disaggregated the server becomes. The faster and more capable a network is the more the network consolidates other network types”, and wants time to sit down and “mull over if there is an end state”.

Well nope, there isn’t an end state. First off, the dynamics of server design and environmental considerations mean that larger and larger centralized computers will still be in vogue for a long time to come. Take, for example, iDataPlex. It isn’t a single computer, but what is these days? In their own class are also the high-end POWER6-based Power 595 servers, again not really single servers, but intended to multi-process, to virtualise and so on. There is a definite trend toward row-scale computing, where additional capacity is dynamically enabled off a single set of infrastructure components, and while you could argue these are distributed computers, just within the row, they are really composite computers.

As we start to see fabrics settle down and become true fabrics, rather than either storage/data connections or network connections, new classes of use and new classes of aggregated system will be designed. This is what really changes the computing landscape: how systems are used, not how they are built. The idea that you can construct a virtual computer from a network was first discussed by former IBM guru Irving Wladawsky-Berger. His Internet computer illustration was legend inside IBM, used and re-used in presentations throughout the late 1990s.

However, just like the client/server vision of the early ’90s, the distributed computing vision of the mid ’90s, and Irving’s Internet computer of the late 1990s, plus all those that came before and since, the real issue is how to use what you have, and what can be done better. That, for me, is the crux of the emerging world of 10Gb Ethernet, Converged Enhanced Ethernet, Fibre Channel over Ethernet et al. Don’t take existing systems and merely break them apart and network them just because you can.

As data center fabrics allow low-latency, non-blocking, any-to-any and point-to-point communication, why force traffic through a massive switch and lift system to make it happen? Enabling storage to talk to tape, networks to access storage without going via a network switch or a server, and server-to-server, server-to-client and device-to-device communication surely has some powerful new uses: live, dynamic streaming and analysis of all sorts of data without having to pass it through a server; appliances which dynamically vet, validate and operate on packets as they pass from one point to another.
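As a toy illustration of that last point, and nothing more, here is roughly what the classify-then-act shape of such a packet-vetting appliance looks like on Linux today, using an ordinary raw socket. A real in-fabric appliance would do this in hardware or firmware at line rate; the frame count and the IPv4-versus-other split are arbitrary choices of mine.

/* vet_sketch.c: illustrative only. Linux, needs root. Build with: gcc vet_sketch.c */
#include <stdio.h>
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* AF_PACKET/SOCK_RAW delivers every frame the host sees, headers included. */
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }

    unsigned char frame[65536];
    long ipv4 = 0, other = 0;

    for (int i = 0; i < 1000; i++) {            /* sample 1000 frames, then report */
        ssize_t n = recv(fd, frame, sizeof(frame), 0);
        if (n < (ssize_t)sizeof(struct ethhdr))
            continue;                           /* runt frame: a real appliance might drop or flag it */

        struct ethhdr *eth = (struct ethhdr *)frame;
        if (ntohs(eth->h_proto) == ETH_P_IP)
            ipv4++;                             /* "validated": pass straight through */
        else
            other++;                            /* "vetted": divert to a slower inspection path */
    }

    printf("IPv4 frames: %ld, other frames: %ld\n", ipv4, other);
    close(fd);
    return 0;
}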

It’s this combination of powerful server computers, distributed network appliances, and secure fabric services that opens up those new classes of use.

Since Douglas ended his post with a quote, I thought this apropos: “And each day I learn just a little bit more, I don’t know why but I do know what for, If we’re all going somewhere let’s get there soon, Oh this song’s got no title just words and a tune.” – Bernie Taupin


