Archive for the 'REST' Category

App Internet and the FT

Picture of various walled gardens

Walled Gardens

My former colleague Simon Phipps reports on the FT (Financial Times) move to escape the app trap, which I discussed in my earlier App Internet post. Simon usefully covers a number of points I didn’t get to, so it’s worth reading the full article here on ComputerWorld (UK).

This is a great example: judging by the feature list on their web page, they’ve clearly done a great job, but since I don’t have an iPhone or iPad, I couldn’t try it out.

Simon makes an interesting point: the FT is incurring some risk in that it is not “in” the app store, and therefore doesn’t get included in searches by users looking for solutions. This is another reason why app stores are just another variation of walled gardens. Jeff Atwood has a good summary of the arguments on why walled gardens are a bad thing here. In Jeff’s 2007 blog, he says “we already have the world’s best public social networking tool right in front of us: it’s called the internet”, and goes on to talk about publicly accessible web services in this instance rather than app stores.

One of the things that never really came to pass with SOA was the idea of public directories. App stores, and their private catalogs, are directories, but they have a high price of entry, as Simon points out. What we need now to encourage the move away from app stores is an HTML5 app store directory. It really is little more than an online shopping catalog for bookmarks, but it would include all the features and functions of a walled garden app store catalog, the only exception being the code itself. In place of the download link would be a launch, go, or run now button or link.

We’d only need a few simple, authorized REST based services to create, update, and delete catalog entries, not another all-encompassing UDDI effort, although it could learn from and perhaps adapt something like the UDDI Green Pages. This is way out of my space; anyone know if there are efforts in this area? @cote ? @Monkchips ? @webmink ?
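To make the idea concrete, here is a minimal sketch of what such a catalog service might manage. Everything here is hypothetical (the resource shape, the field names, the `launch_url` idea standing in for a download link); it is an in-memory model of the create/update/delete operations a simple authorized REST service would expose, not any real directory's API.

```python
import json
import uuid

# Hypothetical sketch of the catalog resource model: like an app store
# entry, but the download link is replaced by a launch URL.
class CatalogStore:
    def __init__(self):
        self.entries = {}

    def create(self, name, launch_url, description=""):
        """POST /catalog -> new entry with a server-assigned id."""
        entry_id = str(uuid.uuid4())
        self.entries[entry_id] = {
            "id": entry_id,
            "name": name,
            "launch_url": launch_url,   # "run now", not a download
            "description": description,
        }
        return self.entries[entry_id]

    def update(self, entry_id, **fields):
        """PUT /catalog/{id} -> replace mutable fields."""
        self.entries[entry_id].update(fields)
        return self.entries[entry_id]

    def delete(self, entry_id):
        """DELETE /catalog/{id} -> remove the bookmark entry."""
        del self.entries[entry_id]

store = CatalogStore()
entry = store.create("FT Web App", "https://app.ft.com/")
print(json.dumps(entry))
```

The point of the sketch is how little there is to it: an HTML5 app store directory is essentially bookmark CRUD plus the catalog metadata, with no binaries to host or review.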

What’s on your glass?

James Governor, @monkchips, makes some great points about UI design in his latest blog post. James discusses how Adobe is changing its toolchain to better support and endorse HTML5, and how open is a growth accelerator, not just a philosophical perspective. He gets a useful plug in for the Dell Streak, as a piece of glass, too 😉

I’ve alluded to it here before: we are heading in the same direction for both our PowerEdge 12g Lifecycle Controller and iDRAC UI for one-to-one management of our servers, and also for the simplified UI for the Virtual Integrated System, aka VIS. Flash/Flex/Silverlight had their time; they solved problems that at the time couldn’t be solved any other way. However, it was clear to me, and I suspect to all those involved in the HTML5 standards efforts, that we were headed down a dead end of walled gardens. What put this in perspective for me wasn’t James’ post, but one from fellow Redmonk analyst Cote last year, in which he discussed the web UI landscape.

Web UI Landscape by Cote of Redmonk

The details actually were not important. Cote, ostensibly discussing Apache Pivot, summarizes by saying: “Closed source GUI frameworks have a tough time at it now-a-days, where-as open source ones by virtue of being free and open, potentially have an easier time to dig into the minds of Java developers.”

 

But really, it was the diagram that accompanied the article that did it for me. It laid out the options as a flower, and as we know, flowers are grown in gardens; in this case, each was being cultivated in its own walled garden.

I cancelled the FLASH/WSMAN[1] proof of concept we’d built for the gen-next UI, and decided the right move was to adopt a more traditional MVC-like approach using open standards for our UI strategy.

We don’t have a commitment yet to deliver or exploit HTML5, but we’ve already adopted a REST style using HTTP for browser and HTML clients to interact with a number of our products, using JavaScript and JSON, and we are building towards a foundation of re-usable UI artifacts. Off the back of this we’ve already seen some useful Android pilots.
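As a hedged illustration of that REST-plus-JSON style, here is a sketch of the kind of resource representation a management endpoint might return, and a helper that flattens it for a re-usable UI widget. The resource shape, field names, and URL are all invented for this example; they are not Dell's actual API.

```python
import json

# Invented JSON representation of a managed server resource. A real
# endpoint would serve something like this over HTTP; any browser,
# mobile app, or script can consume it the same way.
SAMPLE_RESOURCE = """
{
  "id": "server-42",
  "model": "PowerEdge",
  "health": "ok",
  "links": {"self": "/api/servers/server-42"}
}
"""

def to_view_model(doc):
    """Flatten a resource document into the fields a UI widget needs."""
    resource = json.loads(doc)
    return {
        "title": f'{resource["model"]} ({resource["id"]})',
        "status": resource["health"].upper(),
        "href": resource["links"]["self"],
    }

print(to_view_model(SAMPLE_RESOURCE))
```

Because the contract is just HTTP plus JSON, the same resource feeds a browser UI, an Android pilot, or a script, which is exactly the decoupling that makes the REST style attractive over a plugin runtime.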

Which takes us back to James’ post. He summarizes with “If the world of the API-driven web has taught us anything its that you can’t second guess User Interfaces. Someone else will always build one better. If you’re only allowing for deployment on one platform that cuts you off from innovation.” Right on the money.

DISCLOSURE:
Redmonk is providing technology analysis for Dell’s Virtual Integrated System; James and I have been professional contacts since 1996.

NOTES:
[1] WSMAN remains our key technology implementation for external partners and consoles to get information from the servers, send updates, etc.

REST, UI and embedded systems management

I’ve been busy for the last week or so on the corporate re-inventing Dell initiative, but was in early this morning for the last of a long set of internal demos and socialization efforts, where I’ve been showing people the early results of the REST Systems Management design we’ve been working on, plus the new embedded User Interface and Dell UI Framework that we are developing to exploit it. I plan to start sharing some of that in the coming weeks, and to get feedback and input. It’s been another great week here in Round Rock!

Leave a comment or send me an email if you are really interested in the REST project; I’ll send you something before I can post it here.

EMC World – standards?

Tucci and Maritz at EMC World 2009


I’ve been attending the annual EMC World conference in Orlando this week. A few early comments: there has been a massive 64,000ft shift to cloud computing in the messaging, but less so at ground level. There have been one or two technical sessions, but none on how to implement a cloud, how to put data in a cloud, or how to manage data in a cloud. Maybe next year?

Yesterday in the keynote, Paul Maritz, President and CEO of VMware, said that VMware is no longer in the business of individual hypervisors but in stitching together an entire infrastructure: a single sentence laying out clearly where they are headed, if it wasn’t clear before. In his keynote this morning, Mark Lewis, President, Content Management and Archiving Division, was equally clear about the future of information virtualization, talking very specifically about federation and distributed data, with policy management. He compared that to a consolidated, centralized vision which, he clearly said, hadn’t worked. I liked Lewis’s vision for EMC Documentum xCelerated Composition Platform (xCP) as a next generation information platform.

However, so far this week there has been little real mention of standards or openness, especially after this afternoon’s “Managing the Virtualized Data Center” BOF, where I asked the first and last questions on standards and neither got a decent discussion.

Generally, while vendors like to claim standards compliance and involvement, they don’t like standards: historically, standards tend to slow down implementation. This hasn’t been the case with some of the newer technologies, but at least some level of openness is vital to allow fair competition, and competition almost always drives down end user costs.

Standards are of course not required if you can depend on a single vendor to implement everything you need, as you need it. However, as we’ve seen time and time again, that just doesn’t work: something gets left out, doesn’t get done, or gets a low priority from the implementing vendor while being a high priority for you. Stalemate.

I’ll give you an example: you are getting recoverable errors on a disk drive. Maybe it’s directly attached, maybe it’s part of a SAN or NAS. If you need to run multiple vendors’ servers and/or storage/virtualization, who is going to standardize the error reporting, logging, alerting, etc.?

The vendors will give you one of a few canned answers:

1. It’s the hardware vendor’s job (i.e. they pass the buck).
2. They’ll build agents that can monitor this for the most popular storage systems (i.e. you are dependent on them, and they’ll do it for their own storage/disks first).
3. They’ll build a common interface through which they can consume the events (i.e. you are dependent on the virtualization vendor AND the hardware vendor to cooperate).
4. They are about managing across the infrastructure for servers, storage and network (i.e. they are dodging the question).
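The third canned answer, a common interface for consuming events, might look something like this sketch: per-vendor adapters feeding one shared event shape. Every field name and vendor record here is invented for illustration; the point is that someone still has to define and maintain the common shape, which is exactly where a standard (or its absence) bites.

```python
# Hypothetical common event schema for disk errors, plus adapters that
# map each vendor's own record format into it. All names are invented.
COMMON_FIELDS = ("device", "severity", "message")

def from_vendor_a(record):
    # Vendor A reports terse lowercase records.
    return {"device": record["dev"], "severity": record["sev"],
            "message": record["text"]}

def from_vendor_b(record):
    # Vendor B uses CamelCase keys and capitalized severities.
    return {"device": record["DiskId"], "severity": record["Level"].lower(),
            "message": record["Description"]}

ADAPTERS = {"vendor_a": from_vendor_a, "vendor_b": from_vendor_b}

def normalize(vendor, record):
    """Translate a vendor-specific error record to the common schema."""
    event = ADAPTERS[vendor](record)
    assert set(event) == set(COMMON_FIELDS)
    return event
```

Note what the sketch leaves out: who writes the adapter when a new vendor ships, and who arbitrates when two vendors disagree about what "warning" means. Without a standard, the answer is "whichever vendor feels like it, on their schedule."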

There are literally hundreds of examples like this if you need anything except a dedicated, single vendor stack of hardware plus virtualization. This seems to be where Cisco and Oracle are lining up. I don’t think this is a fruitful direction and can’t really see it as advantageous to customers or vendors. That is notwithstanding cloud, Google, Amazon et al., where you don’t deal with hardware at all but have a whole separate set of issues, and where standards and openness are equally important.

In an early morning session today, Tom Maguire, Senior Director of Technology Architecture, Office of the CTO, presented EMC’s Service-Oriented Infrastructure Strategy: Providing Services, Policies, and Architecture Models. Tom talked about loose coupling, and about defining stateful and REST interfaces that would allow EMC to build products that “snap” together and don’t require a services engagement to integrate them. He also talked about moving away from “everyone discovering what they need” to a common, federated fabric.

This is almost as powerful a message as that of Lewis or Maritz, but it will get little or no coverage. If EMC can deliver and execute on this, and do it in a de jure or de facto published standard way, it will indeed give them a powerful platform that companies like Dell can partner in, bringing innovation and competitive advantage for our customers.

Is SOA dead?

There has been a lot of fuss since the start of the new year around the theme “SOA is dead”. Much of this has been attributed to Anne Thomas Manes’ blog entry on the Burton Group’s blog, here.

InfoWorld’s Paul Krill jumped on the bandwagon with a SOA obituary, quoting Anne’s work and saying “SOA is dead but services will live on”. A quick-fire response came on a number of fronts, like this one from Duane Nickull at Adobe, and then this from James Governor at Redmonk, where he charismatically claims “everything is dead”.

First up: I’ve seen this many times in my career, and James touches on a few of the key ones, since we were there together. Or rather, I took advantage of his newness and thirst for knowledge as a junior reporter to explain to him how mainframes worked, and what the software could be made to do. I knew from ten years before I met James that evangelists, and those with an agenda, would often claim something was “dead”. It came from the early 1980s mainframe “wars”: yes, before there was a PC, we were having our own internal battles; this was dead, that was dead, etc.

What I learned from that experience is that technical people form crowds. Just like at the public hangings of the middle ages, they are all too quick to stand around and shout “hang him”. These days it’s a bit more complex: first off there’s Slashdot, then we have the modern equivalent of speakers’ corner, aka blogs, where often those who shout loudest and most frequently get heard most often. However, what most people want is not a one-sided rant but to understand the issues. Claiming anything is dead often gives the claimer the right not to understand the thing that is supposedly “dead”, but to just give reasons why that must be so and move on to giving advice on what you should do instead. It was a similar debate last year that motivated me to document my “evangelism” years on the about page of my blog.

The first time I heard “SOA is dead” wasn’t Anne’s blog; it wasn’t even, as John Willis, aka botchagalupe on twitter, claims in his Cloud Drop #38, from him and Michael Cote of Redmonk last year. No sir, it was back in June 2007, when theregister.co.uk reprinted a piece by Clive Longbottom, Head of Research at Quocirca, under the headline SOA – Dead or Alive?

Clive got closest to the real reasons why SOA came about, in my opinion, and thus why SOA will prevail, despite rumours of its demise. It is not just about services, from my perspective; it is about truly transactional services, which are often part of a workflow process.

Not that I’m about to claim that IBM invented SOA, or that my role in either the IBM SWG SOA initiative or the IBM STG services initiative was anything other than as a team player rather than a lead. However, I did spend much of 2003/4 working across both divisions, trying to explain the differences and similarities between the two, and why one needed the other, or at least its relationships. And then IBM marketed the heck out of SOA.

One of the things we wanted to do was to unite the different server types around a common messaging and event architecture. There was almost no requirement for this to be synchronous, and a lot of reasons for it to be services based. Many of us had just come from the evolution of object technology inside IBM and from working on making Java efficient within our servers. Thus, a services based approach seemed for many reasons the best one.

However, when you looked at the types of messages and events that would be sent between systems, many of them could be crucial to the effective and efficient running of the infrastructure; they had, in effect, transactional characteristics. That is, a given event could initiate actions a, then b, then c and finally d. While action d could be started before action c, it couldn’t be started until action b was completed, and b in turn was dependent on action a. Importantly, none of these actions should be performed more than once for each instance of an event.

Think of the failure of a database or transactional server: create a new virtual server, boot the OS, start the application/database server, roll back incomplete transactions, take over the network, etc. Or similar.
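The ordering and exactly-once constraints described above can be sketched as a small dependency check. This is an illustration of the idea only, with the dependency graph hard-coded to match the a/b/c/d example: d and c both depend only on b, so d may legitimately run before c, but nothing may run twice for the same event instance.

```python
# Dependency graph matching the example in the text: b needs a,
# and both c and d need b (so d may run before c).
DEPENDS_ON = {"a": [], "b": ["a"], "c": ["b"], "d": ["b"]}

def run_event(event_id, order):
    """Run actions for one event instance in the requested order,
    refusing out-of-order or duplicate execution."""
    completed = set()
    for action in order:
        if action in completed:
            raise RuntimeError(f"{action} already ran for {event_id}")
        missing = [d for d in DEPENDS_ON[action] if d not in completed]
        if missing:
            raise RuntimeError(f"{action} waiting on {missing}")
        completed.add(action)   # exactly-once bookkeeping per event
    return completed

# d before c is legal; both depend only on b.
print(run_event("evt-1", ["a", "b", "d", "c"]))
```

This is the sense in which infrastructure events have transactional characteristics: the runtime must track per-instance completion state, not just fire-and-forget messages.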

Around the same time inside IBM, Beth Hutchison and others at IBM Hursley, along with smart people like Steve Graham, now at EMC, and Mandy Chessell, also of IBM Hursley, were trying to solve similar transactional type problems over HTTP and using web services.

While the Server group folks headed down the Grid, Grid Services and ultimately Web Services Resource Framework path, inside IBM we came to the same conclusion: incompatible messages, incompatible systems, different architectures, legacy systems etc. need to interoperate, and for that you need a framework and set of guidelines. Build this out from an infrastructure layer to an application level; add in customer applications to that framework; then scale it in any meaningful way, one that needs more than a few programmers working concurrently on the same code or the same set of services, and what you needed was a services oriented architecture.

Now, I completely get the REST style of implementation and programming. There is no doubt that it could take over the world; from the perspective of those frantically building web mashups and cloud designs, it already has. Yet in none of the “SOA is dead” articles has anyone effectively discussed synchronous transactions; in fact, apart from Clive Longbottom’s piece, no real discussion was given to workflow, let alone the atomic transaction.

I’m not in denial here about what Amazon and Google are doing. Sure, both do transactions, and both were built from the ground up around a services based architecture. Many of those who argue that “SOA is dead” are often those who want to move on to the emperor’s new clothes. However, as fast as applications are being moved to the cloud, many businesses are nowhere in sight of moving to or exploiting the cloud. To help them get there, they’ll need to know how to do it, and for that they’ll need a roadmap, a framework and set of guidelines, and, if it includes their legacy applications and systems, a plan for how they get there. For that, they’ll likely need more than a strategy; they’ll need a services “oriented” architecture.

So, I guess we’ve arrived at the end, the same conclusion that many others have come to. But for me it is always about context.

I have to run now, literally. My weekly long run is Sunday afternoon and my running buddy @mstoonces will show up any minute. Also, given I’m starting my new job, I’m not sure how much time I’ll have to respond to your comments, but I welcome the discussion!


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and member of the IBM Academy of Technology. I am a Fellow of the British Computer Society (bcs.org). I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
