Archive for the 'dcf' Category

IBM AND JUNIPER NETWORKS FORM STRATEGIC TECHNOLOGY RELATIONSHIP

A funny thing happened on the way to the forum…

Ahh yes, Nathan Lane and Frankie Howerd: they represent the differences between the UK and US, in many ways so different, but in many ways so the same. I’ve been bemoaning the fact that I can’t blog about most of what I’ve been doing for the last five years, as it’s all design and development work that IBM considers confidential, and since none of it is open source, it’s hard to point to projects or even give hints.

And so it is with the project I’m currently working on. Only this time, not only is it IBM Confidential, but it is being worked on with a partner and based on a lot of their intellectual property, so there is even less chance to discuss it in public. I’ve been doing customer validation sessions over the last three months and got concrete feedback on key data center directions around data center fabric, 10Gb Ethernet, Converged Enhanced Ethernet (CEE) and more. There are certainly big gains to be made in reducing capital and operational expenditure in this space, but that’s really only the start. The real benefit comes from an enabled fabric: rather than forcing centralization around a server, which is much of what we’ve been doing for the last 20 years, or around an ever more complex switch, which is where Cisco has been headed, the fabric is in and of itself the hub. The switches just provide any-to-any connectivity and low latency, and enable both existing and new applications, virtualized or not, to exploit the fabric.

So following one of my customer validation sessions in the UK, I was searching around on the Internet for a link, and I came across this one. It discusses a strategic partnership between IBM and Juniper for custom ASICs for a new class of Internet backbone devices, only it is from 1997. Who’da guessed. A funny thing happened on the way to the forum…

Designing the high-performance data center

Network World / Juniper event

I’ll be doing the opening IBM keynote at the Atlanta event on September 16th. As Dave reminded me, I used to post notices and presentations on my Corner right from the start, through the 3rd ibm.com/servers/corner.

Not sure when I’ll be able to post the slides, but I will as soon as I can. If you are in the Atlanta area, stop by and ask me any follow-up questions: we have a great shared story on what we are doing in this space, and this is the first public roll-out of that roadmap.

Event and registration details are here.

Any to any fabric

I’ve spent the last few months working on IBM’s plans for next-generation data center fabric. It is a fascinating area, one ripe for innovation and some radical new thinking. When we were architecting on demand, and even before that, working on the Grid Toolbox, one of the interesting future options was InfiniBand, or IB.

What made IB interesting was that you could put logic in either end of the IB connection, turning a standard IB connection into a custom switched connector by dropping your own code into the host channel adapter (HCA) or target channel adapter (TCA). Anyway, I’m getting off course. The point was that we could use an industry-standard protocol and connection to do some funky platform-specific things, like specific cluster support, quality-of-service assertion, or security delegation, without compromising the standard connection. This could be done between racks at the same speed and latency as between systems in the same rack. That could open up a whole new avenue of applications and would help to distribute work inside the enterprise, hence the Grid hookup. It never played out that way, for many reasons.
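For a feel of what "standard connection, custom behaviour" means in practice, here is a minimal sketch against the open libibverbs API (my own illustration, nothing to do with the confidential work above). It opens an HCA and registers a buffer that a remote peer could then read or write directly, with no server in the data path:

/* Minimal verbs sketch: open an HCA, register a buffer for RDMA.
 * Build on Linux with: gcc ib_sketch.c -libverbs */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no IB devices found\n");
        return 1;
    }

    /* Open the first HCA; this is the layer where platform-specific
     * logic (cluster support, QoS assertion, security delegation)
     * would sit behind the standard verbs interface. */
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) return 1;
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register a buffer so a remote peer can read or write it
     * directly, bypassing the host software stack. */
    void *buf = malloc(4096);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    printf("registered MR: lkey=%u rkey=%u\n", mr->lkey, mr->rkey);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}

The rkey printed at the end is exactly the "logic in either end" hook: hand it to a peer and that peer can move data into the buffer without the host CPU being involved.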

Over in the Cisco Datacenter blog, Douglas Gourlay is considering changes to his “theory” on server disaggregation and network evolution: he theorises that over time everything will move to the network, including memory. Remember “the network is the computer”?

He goes on to speculate that “The faster and more capable the network the more disaggregated the server becomes. The faster and more capable a network is the more the network consolidates other network types.” and wants time to sit down and “mull over if there is an end state”.

Well nope, there isn’t an end state. First off, the dynamics of server design and environmental considerations mean that larger and larger centralized computers will still be in vogue for a long time to come. Take, for example, iDataPlex. It isn’t a single computer, but what is these days? In their own class are the high-end Power 595 (POWER6) servers, again not really single servers but intended to multi-process, to virtualise and so on. There is a definite trend toward row-scale computing, where additional capacity is dynamically enabled off a single set of infrastructure components; while you could argue these are distributed computers, just within the row, they are really composite computers.

As we start to see fabrics settle down and become true fabrics, rather than either storage/data connections or network connections, new classes of use and new classes of aggregated systems will be designed. This is what really changes the computing landscape: how systems are used, not how they are built. The idea that you can construct a virtual computer from a network was first discussed by former IBM guru Irving Wladawsky-Berger. His Internet computer illustration was legendary inside IBM, used and re-used in presentations throughout the late 1990s.

However, just like the client/server vision of the early ’90s, the distributed computing vision of the mid-’90s, Irving’s Internet computer of the late 1990s, and all those that came before and since, the real issue is how to use what you have, and what can be done better. That, for me, is the crux of the emerging world of 10Gb Ethernet, Converged Enhanced Ethernet, Fibre Channel over Ethernet et al. Don’t take existing systems and merely break them apart and network them just because you can.

As data center fabrics allow low-latency, non-blocking, any-to-any and point-to-point communication, why force traffic through a massive switch-and-lift system to make that happen? Enabling storage to talk to tape, networks to access storage without going via a network switch or a server, and server-to-server, server-to-client and device-to-device communication surely has some powerful new uses: live dynamic streaming and analysis of all sorts of data without having to pass it through a server, or appliances which dynamically vet, validate and operate on packets as they pass from one point to another.
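To make that last idea concrete, here is a toy sketch (mine, not a product design) of the skeleton such an appliance could have on Linux: a raw packet socket and a vetting loop. The "vetting" here is just a log line; a real appliance would validate, transform or drop frames in flight:

/* Toy inline "appliance": watch raw ethernet frames on Linux.
 * Needs root. Build with: gcc vet.c -o vet */
#include <stdio.h>
#include <sys/socket.h>
#include <linux/if_ether.h>   /* struct ethhdr, ETH_P_ALL */
#include <arpa/inet.h>        /* htons, ntohs */

int main(void)
{
    /* AF_PACKET delivers whole link-layer frames to user space. */
    int s = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (s < 0) { perror("socket"); return 1; }

    unsigned char frame[2048];
    for (;;) {
        ssize_t n = recv(s, frame, sizeof(frame), 0);
        if (n < (ssize_t)sizeof(struct ethhdr))
            continue;

        /* "Vet" each frame as it passes: here we only log the
         * ethertype, but this is where validation, transformation
         * or policy would run, with no server owning the path. */
        struct ethhdr *eth = (struct ethhdr *)frame;
        printf("frame: %zd bytes, ethertype 0x%04x\n",
               n, ntohs(eth->h_proto));
    }
}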

It’s this combination of powerful server computers, distributed network appliances, and secure fabric services that will open up these new classes of use.

Since Douglas ended his post with a quote, I thought this apropos: “And each day I learn just a little bit more, I don’t know why but I do know what for, If we’re all going somewhere let’s get there soon, Oh this song’s got no title just words and a tune.” – Bernie Taupin

The “L” Word

There’s an excellent analysis by Frank Dzubeck over on Network World today about the new Enterprise Data Center and that hoary old chestnut, latency. I don’t know who briefed Frank. It wasn’t me; Jeff and I talked this afternoon and I asked, and it wasn’t him. Since the article also covered the z10 announcement, I have a good idea though 😉

Frank covers ensembles, data center utilization and some of the new data center fabric issues extremely well. He also makes the point, which I’d like folks to be clear about, that this isn’t the resurgence of the mainframe, or everything back to a central server.

We’ve grown used to indefinite waits, or unbelievably fast response times from certain popular websites, but the emerging problem is latency in the data center: how to deliver service levels and response times in an increasingly rich and complex systems environment. It’s OK to build a data center or server subsystem focussed around a single business model, something like Amazon’s EC2 or S3, or Google’s search and query engines; it’s another thing to take a vast array of different vendors’ IT equipment, bought at different times for different business applications and services, integrate it all together and orchestrate it as business services. While MapReduce may or may not be as good as, or better than, a database, not everything is going to be run in this fashion.

Fibre Channel over Ethernet is going to happen, and 10Gb Ethernet opens up some real options for both integrating systems and distributing services. It will be almost as fast to connect to another server as it is to talk between cores and processors within the same server. This disclosure from IBM Research today shows the way to the next generation of interconnected infrastructure: at 300 Gbit/second the bus goes optical, making the integration of rich data systems (video, VoIP, total encryption of data, key-based secure infrastructure services) with more traditional transactional systems a real possibility.

The opportunity isn’t to take the same old stuff and distribute it because the fabric is faster; it’s to better integrate systems and exploit new ways of doing things: introducing a common event infrastructure, being more intelligent about WAN and application routing, having a publish/subscribe/consume model for the infrastructure, and genuinely opening it up and simplifying it.
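To make the publish/subscribe/consume idea concrete, here is a deliberately tiny in-memory sketch; the topic name, events and subscribers are invented for illustration, and a real fabric-wide service would of course be distributed and durable:

/* Toy publish/subscribe bus for infrastructure events. */
#include <stdio.h>
#include <string.h>

#define MAX_SUBS 8

typedef void (*handler_fn)(const char *topic, const char *event);

struct subscription {
    const char *topic;
    handler_fn  fn;
};

static struct subscription subs[MAX_SUBS];
static int nsubs;

static void subscribe(const char *topic, handler_fn fn)
{
    if (nsubs < MAX_SUBS)
        subs[nsubs++] = (struct subscription){ topic, fn };
}

/* Publishing fans an event out to every consumer of the topic;
 * no consumer needs to know who produced it. */
static void publish(const char *topic, const char *event)
{
    for (int i = 0; i < nsubs; i++)
        if (strcmp(subs[i].topic, topic) == 0)
            subs[i].fn(topic, event);
}

/* Two hypothetical consumers: a WAN router and a capacity planner
 * both react to the same fabric event, independently. */
static void wan_router(const char *topic, const char *event)
{
    printf("wan-router: %s on %s\n", event, topic);
}

static void capacity_planner(const char *topic, const char *event)
{
    printf("capacity-planner: %s on %s\n", event, topic);
}

int main(void)
{
    subscribe("fabric/events", wan_router);
    subscribe("fabric/events", capacity_planner);
    publish("fabric/events", "link_up port=7");
    return 0;
}

The point of the model is the fan-out: the producer of link_up has no idea a capacity planner exists, which is what genuinely opening up the infrastructure buys you.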

Of course, there are lots of blanks to be filled in, but the new Enterprise Data Center is taking shape.


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and a member of the IBM Academy of Technology. I am a Fellow of the British Computer Society (bcs.org). I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
