Posts Tagged 'redmonk'

Cote on Consumer to Enterprise

[Slide: REST Interface, from Cote's presentation]

Over on his People Over Process blog, RedMonk analyst Michael Cote has a great idea: a rehearsal of an upcoming presentation, complete with slides and audio.

The presentation covers which technologies are making the jump from the consumer side of applications and IT into the enterprise. I'm delighted to report Cote has used a quote from me on REST.

For clarification, the work we are doing isn't directly related to our PowerEdge C servers or our cloud services. For that, Dell customer Rackspace Cloud has some good REST APIs and is well ahead of us; in fact, I read a lot of their documentation while working on our stuff.

On the other hand, I'm adamant that the work we are doing, adding a REST-like set of interfaces to our embedded systems management, is not adding REST APIs. Also, since I contributed requirements and participated in discussions around WS-* back when I was at IBM, I'd say we were trying to solve an entirely different set of problems then; now, REST is the right answer for externalizing the data needed for a web-based UI.

At the same time, we will also continue to offer a complete implementation of WS-Management (WSMAN). WSMAN is a valuable tool for externalizing the complexity of a server so that it can be managed by an external console or control point. Dell provides the Dell Management Console (DMC), which consumes WSMAN and provides one-to-many server management.

The point of the REST interfaces is to provide a simple way to get the data needed for display in a web UI; we don't see having to expose all the same data, and we can use a much more lightweight infrastructure to process it. At the same time, it's the objective of this project to keep the UI simple for one-to-one management. Customers who want a more complex management platform will be able to use DMC, or exploit the WSMAN availability.
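
To make the contrast concrete, here's a rough sketch of the kind of lightweight call a web UI might make against a REST-like management interface. The host, endpoint path and field names are invented for illustration only; this is not our actual interface.

```python
# A minimal sketch (hypothetical host, path and field names, not Dell's
# actual interface): a web UI asking the embedded management controller
# for just the data it needs to render a status panel, rather than the
# full model a WSMAN console would consume.
import json
import urllib.request

BASE_URL = "https://mgmt-controller.example.com/api"  # hypothetical endpoint

def get_system_summary():
    """Fetch a small JSON summary suitable for a one-to-one web UI."""
    with urllib.request.urlopen(BASE_URL + "/system/summary") as resp:
        return json.load(resp)

if __name__ == "__main__":
    summary = get_system_summary()
    print(summary.get("model"), summary.get("powerState"))
```

The point of the sketch is the shape of the interaction: one simple GET, a small JSON payload, no heavyweight protocol stack in between.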

Short DNS and brand ownership

I cycle home Wednesday evenings and back in on Thursday morning; it's a 22-mile drag from Round Rock to downtown Austin, with some quiet bits, some busy bits and some dangerous bits. While spinning up North Lamar heading south towards 183, I was thinking about the rise of URL-shortening websites such as tinyurl.com, which was the first I was aware of that offered a free service to take a long URL, such as this blog entry https://cathcam.wordpress.com/2009/03/05/short-dns-and-brand-ownership/, and turn it into http://snipurl.com/shorterdns

The main reason these became really popular was because some systems, such as Lotus Notes, used to produce bizarre, very, very long URLs for pages in Notes databases. It was easier to remember tinyurl.com/ae5ny than it would be to remember the page name, try it… These days people know these services from twitter.com, where every character counts, but that's not how or why they started.

There are a bunch of these services: tinyurl.com, snurl.com, is.gd, bit.ly, etc. I tend to use snurl as it allows you to save specific names; I'm sure other shorteners do too. What I was thinking about last night was the ownership, rights, etc. to shortened URLs.
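
For anyone curious how these services work under the covers, the core is just a lookup table from a short code to the original long URL, answered with an HTTP redirect. Here's a toy sketch of that idea; it's in-memory only, whereas real services persist the mappings, let you pick custom names and count clicks.

```python
# Toy sketch of a URL shortener: a short code mapped to the original long
# URL, answered with an HTTP redirect. In-memory only; real services
# persist the mapping and offer custom names, stats, etc.
from http.server import BaseHTTPRequestHandler, HTTPServer

SHORT_URLS = {
    "/shorterdns": "https://cathcam.wordpress.com/2009/03/05/short-dns-and-brand-ownership/",
}

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        target = SHORT_URLS.get(self.path)
        if target:
            self.send_response(301)              # permanent redirect
            self.send_header("Location", target)
            self.end_headers()
        else:
            self.send_error(404, "Unknown short code")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Redirector).serve_forever()
```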

When my son wanted some cards from http://moo.com to help him promote his DJ work, I created them for him, but his myspace URL didn't easily fit and flow, and if he later wanted to create a website, he'd have to get new cards.

The answer: use snurl. So Oli and his alter ego Kaewan are now http://snurl.com/kaewan – it currently points to his myspace profile, but I can change it whenever I want.

So these services have become, in some ways, analogous to domain registrars. Sure, a short URL isn't a domain, but effectively it's the same as one, except you don't own it, and you didn't have to pay for it. For fun I created http://snurl.com/redmonk – it actually points to RedMonk's home page. But it could easily point elsewhere. And there's the rub. With a traditional name registrar there is an established right of review and appeal if you believe that someone has registered a domain that impinges on your brand and trademarks.

Not long after I created this blog, under its original domain http://ibmcorner.com, I got a "cease and desist" call from IBM legal pointing out that this wasn't allowed, and that I should stop using it and not re-register the domain when it expired. So where does http://snurl.com/ibm point? Well, not to IBM, and it was nothing to do with me.

RedMonk IT Management Podcast #10 thoughts

I've been working on slides this afternoon for a couple of projects, and wondering why producing slides hasn't really gotten any easier in the 20 years since Freelance under DOS. Why is it that I've got a 22-inch flatscreen monitor as an extended desktop, and I'm using a trackpoint and mouse to move things around, waiting for Windows to move things pixel by pixel…

Anyway, I clicked on the LIBSyn link for the RedMonk IT Management Podcast #10 from back in April for some background noise. In the first 20 minutes or so, Cote and John get into some interesting discussion about Power Systems, especially in relation to some projects John's working on. As they joke and laugh their way through an easy discussion, they get a bit confused about naming and training.

First, the servers are called IBM Power Systems, or just Power. They span from blades to high-end, scalable monster servers. They all use the PowerPC RISC instruction set architecture. Formerly there had been two versions of the same servers, System p and System i.

Three operating systems can run natively on Power Systems: AIX, IBM i (formerly i5/OS and OS/400) and Linux. You can run these concurrently in any combination using the native virtualization, PowerVM. Amongst the features of PowerVM is the ability to create Logical Partitions (LPARs). These are implemented and protected in hardware by a Type-1 hypervisor. So, it's like VMware, but not at all. You can get more on this in this white paper. For a longer read, see the IBM Systems Software Information Center.
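
To illustrate the "any combination, concurrently" point, here's a purely conceptual sketch that models a server carved into LPARs, each running a different operating system. The names and numbers are invented; PowerVM is, of course, configured through its own tooling (the HMC or IVM), not Python.

```python
# Conceptual model only: logical partitions (LPARs) sharing one physical
# Power server, each running its own operating system. Invented names and
# numbers, purely to illustrate the idea of partitioning.
from dataclasses import dataclass

@dataclass
class LPAR:
    name: str
    os: str                 # "AIX", "IBM i" or "Linux"
    cpu_entitlement: float  # share of physical processors
    memory_gb: int

server_lpars = [
    LPAR("prod-db",  "IBM i", 2.0, 64),
    LPAR("web-tier", "AIX",   1.5, 32),
    LPAR("dev-test", "Linux", 0.5, 16),
]

total_cpu = sum(p.cpu_entitlement for p in server_lpars)
print(f"{len(server_lpars)} LPARs entitled to {total_cpu} processors in total")
```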

John then discussed the need for training and the complexity of setting up a Power System. Sure, if you want to run a highly flexible, dynamically configurable, highly virtualized server, then you need to do training. Look at the massive market for Microsoft Windows, VMware and Cisco Networking certifications. Is there any question that running complex systems would require similar skills and training?

Of course, John would say that though, as someone who makes a living doing training and consulting, and obviously has a great deal of experience monitoring and managing systems.

However, many of our customers don't have such a need; they do trust the tools and will configure and run systems without 4-6 months of training. Our autonomic computing may not have achieved everything we envisaged, but it has made a significant difference. You can use the system configuration tool at order time, either alone or with your business partner or IBMer, do the definition for the system, and have it installed, provisioned, and up and running within half a day.

When I first started in Power Systems, I didn't take any classes and was not proficient in AIX or anything else Power related. I was able to get a server up and running from scratch, and get WebSphere running business applications, having read a couple of Redbooks. Monitoring and debugging would have taken more time, another book. Clearly, becoming an expert always takes longer; see the Wikipedia definition of expert.

ps. John, if you drop out of the sky from 25k ft, it doesn’t matter if the flight was a mile or a thousand miles… you’ll hit the ground at the same speed 😉

pps. Cote, I assume your exciting editing session on episode 11 wasn't so exciting…

ppps. 15 minutes on travel in Episode #11; time for a RedMonk Travel Podcast


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and member of the IBM Academy of Technology. I am a Fellow of the British Computer Society (bcs.org). I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
