Archive for April, 2010

Patents and original ideas

The whole software patent storm has opened up again and is raining down on blogs across the Internet; folks like Stephen O’Grady over at RedMonk and Tim Bray on his ongoing blog are trying to position and justify why they are against software patents and why the system is broken. It is. As always, that bastion of rational thought on the process, and former colleague, Simon Phipps has posted a blog entry with seven things he thinks should be done about software patents while we wait for nirvana.

I have no patents; I’ve deliberately taken a position not to have my name included on them when the opportunity arises. Of Simon’s seven proposals, most are very good. I really like the idea of requiring sample code to be filed with the patent. However, I’m not comfortable with the ability to protect cleanroom implementations, for the same reason I’m not for software patents in the first place. While I understand you can’t patent an idea, once the idea, concept or feature is well enough understood, even without access to the original implementation, it’s easy enough to sit down and reproduce it. And therein lies the reason I’m against them.


I can literally claim to owe my technical career to being able to read someone else’s source code. My first big break came from Stuart McRae, who gave me the source code to a Pascal compiler he’d written at Imperial College London, sometime around 1976. I learnt a number of things from it; not least, it jump-started my programming career by giving me the chance to learn Pascal, which came in handy years later when the Borland Turbo Pascal compiler was the hottest thing on PCs. What followed for me was 30 years of innovation, from mainframe I/O to terminal emulation on PCs, client/server and on to the web. As I said when I left IBM, I was “standing on the shoulders of giants”, most of whom shared their source code with me.

Over the years my access to source code became an important, in fact I’d say fundamental, part of my career and of many important implementations. However, I’m not confused about the difference between open source and patents; I understand they are two very different things, and that either one can live without the other. However, I would submit that where key parts of our infrastructure stack are based on open source, idea and design “pollution” is increasingly a real issue.

Where did the design come from?

I know of at least a couple of granted patents whose designs mirror an earlier design. They use different language and run in a different environment, but ultimately do exactly the same thing: they copy the shared read-only and shared read/write memory designs from the VM/370 operating system. Just this week I saw a perfectly good explanation of a concept, with a prototype model, which someone pointed out was almost exactly the same as another implementation found on the Internet with some careful searching. Would either of these have been found as prior art? Probably not. Would someone implementing an idea even have a clue, via prior art searches, that the same idea and concept already existed in an operating system written 35 years ago? Almost certainly not.

Now, I’m not accusing ANYONE of deliberate malfeasance or other wrongdoing in the world of software. What I firmly believe is that in today’s interconnected world, with new generations brought up and taught by breaking down open source software, with lightning-fast Internet access and global search, it is nearly impossible to have an original idea in software. Now, of course, you can’t patent ideas, only designs, and then, hopefully, turn those designs into product implementations.

Here is where the problem lies, though. It’s all too easy to remember part of an idea or design in some form, and then have a “do over”: re-write and re-implement it in a different software environment. Given the extraordinary breadth of the software industry, the vast number of programming languages, where the sometimes meaningless order of language statements and instructions is merely preamble to the innovation, and the sometimes deliberately obtuse language used to describe innovations, how is anyone ever going to do a truly accurate “prior art” search?

And there’s the rub for me, and it always has been. Given a voracious appetite for knowledge, fed by reading source code, how can I ever in good faith claim an idea and design were exclusively mine? How can we ever be sure, even with an exhaustive prior art search, that a design is actually new?

Defense is the best offense

Now, I also acutely understand a company’s need to protect its innovation. Having been involved in a number of IP claims, at least one of which was from a patent troll and/or patent mining company, I know it is a scary and expensive process to defend. The best defense is to have a large portfolio which you can use as a hammer to crush any “nuts“: they say you are violating patents x/y/z; you turn around and claim they are violating your patents a-w. This is a common game, played out regularly it would seem, in the software industry.

The big companies, to avoid endless litigation, claim and counterclaim, have a process to deal with this: they cross-license each other’s patents, either specifically or across the board. This works for even the fiercest competitors. It creates a dual dependency, or deadlock, while allowing them to cooperate on specific projects and, sometimes, effectively locking out all others due to the combined might of the patent portfolios as well as the legal cost. Simon comes close to this when discussing injunctive relief and recovering license fees.

Clearly, just declaring software patents bust is a step in the right direction, but it doesn’t solve anything. The real issue here is how to protect companies both small and large, and how to move beyond simply wishing patents no longer existed. Simon’s proposals are a start.

However, I fear they will come to little as companies look at the revenue and competitive threat that comes from the revocation of software patents and push back hard. It’s for that reason that I think Simon’s item 2, “Make them last no more than five years, renewable once (maybe, and only if used in products)”, is the best, last hope. Let’s declare that all existing software patents have a limit of 10 years from this year. New patents have a term limit of five years and are renewable once if, as Simon says, they are used in products. It’s a start.

Cote on Consumer to Enterprise

REST Interface slide from Cote presentation


Over on his People Over Process blog, RedMonk analyst Michael Cote has a great idea: a rehearsal of an upcoming presentation, including slides and audio.

The presentation covers what technology is making the jump from the consumer side of applications and IT into the enterprise. I’m delighted to report Cote has used a quote from me on REST.

For clarification, the work we are doing isn’t directly related to our PowerEdge C servers, or our cloud services. For that, Dell customer Rackspace Cloud has some good REST APIs and is well ahead of us; in fact, I read a lot of their documentation while working on our stuff.

On the other hand, I’m adamant that the work we are doing adding a REST-like set of interfaces to our embedded systems management is not adding REST APIs. Also, since I did contribute requirements and participate in discussions around WS-* back when I was at IBM, I’d say that we were trying to solve an entirely different set of problems then, and hence now REST is the right answer to externalize the data needed for a web-based UI.

At the same time, we will also continue to offer a complete implementation of WS-Management (WSMAN). WSMAN is a valuable tool for externalizing the complexity of a server, in order for it to be managed by an external console or control point. Dell provides the Dell Management Console (DMC), which consumes WSMAN and provides one-to-many server management.

The point of the REST interfaces is to provide a simple way to get the data needed for display in a Web UI; we don’t see having to expose all the same data, and we can use a much more lightweight infrastructure to process it. At the same time, it’s the objective of this project to keep the UI simple for one-to-one management. Customers who want a more complex management platform will be able to use DMC, or exploit the WSMAN availability.
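As a rough illustration of the kind of lightweight, REST-like interface described above, here is a minimal sketch in Python. To be clear, the resource paths and field names here are hypothetical, purely for illustration; they are not Dell’s actual embedded management API. The idea is simply that a web UI issues a GET for a small, addressable slice of management data and receives JSON back, with no heavyweight protocol stack in between:

```python
import json

# Hypothetical in-memory view of a server's management data; a real
# embedded controller would populate this from sensors and firmware.
SERVER_STATE = {
    "power": {"state": "on", "watts": 312},
    "thermal": {"inlet_temp_c": 24, "fan_rpm": [4200, 4150]},
}

def handle_get(path):
    """Resolve a REST-like GET such as /server/power to (status, JSON body)."""
    parts = [p for p in path.strip("/").split("/") if p]
    node = {"server": SERVER_STATE}
    for part in parts:
        if not isinstance(node, dict) or part not in node:
            return 404, json.dumps({"error": "not found"})
        node = node[part]
    return 200, json.dumps(node)

# A web UI widget would fetch only the resource it displays:
status, body = handle_get("/server/power")
print(status, body)
```

The design point this sketches is the one in the paragraph above: each UI element fetches just the resource it needs, rather than the interface exposing everything a one-to-many console like DMC would consume over WSMAN.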

REST, UI and embedded systems management

I’ve been busy for the last week or so on the corporate re-inventing Dell initiative, but was in early this morning for the last of a long set of demos and internal socialization efforts, where I’ve been showing people the early results of the REST systems management design we’ve been working on, plus the new embedded user interface and Dell UI Framework that we are developing to exploit it. I plan to start sharing some of that information in the coming weeks, as well as to get feedback and input. It’s been another great week here in Round Rock!

Leave a comment or send me an email if you are really interested in the REST project; I’ll send you something before I can post it here.

Unix migrations and game changers

More product talk, much closer to home for me: this week’s new Dell PowerEdge servers include the PowerEdge R910, which was specifically designed and configured for a market segment I’m fully aware of, RISC server migration.

It’s well worth taking a look at this YouTube video from the R910 hardware design team. For me, this is something people just don’t realise: how much clever design goes into the Dell PowerEdge servers. I think this, better than anything else I’ve seen, embodies the difference between people’s perception of what Dell server engineering does and what we actually do. I can honestly say that, even going back to my IBM mainframe days, I’ve never seen a better designed, more easily accessible, configurable and thought-out server.

In terms of configuration, the R910 is specifically aimed at those who are rethinking proprietary UNIX deployments, either on Sun SPARC or POWER AIX. Based on industry standards and the x86 architecture, the R910 is an ideal platform for RISC/UNIX migrations, large database deployments and server virtualization implementations. It’s a 4U rack server available with high-performance four-socket Intel Nehalem-EX processors, up to 64 DIMM slots for memory, redundant power supplies and a failsafe embedded hypervisor, resulting in the performance, reliability and memory scalability needed to run mission-critical applications. It also includes an option for 10Gb Ethernet right on the motherboard.

There are three other new servers this week: the M910 blade server, the R810 for virtualization and consolidation, and the R815 for virtualization, high-performance computing, email messaging, and database workloads.

The PowerEdge R815 deserves its own “shout-out”. It comes with the same level of detail in hardware design as the R910, but is powered by the brand new 12-core AMD Opteron 6100 processors and has up to 32 DIMMs, with up to 48 processor cores in a four-socket, 2U server. As my friend and former IBM colleague Nigel Dessau, now Chief Marketing Officer at AMD, put it, the new AMD processors are “game changers”.

All this week’s new servers include the iDRAC embedded management that our team works on, as well as the Dell Lifecycle Controller. Lifecycle Controller provides IT administrators with a single console view of their entire IT infrastructure while performing a complete set of provisioning functions, including system deployment, updates, hardware configuration and diagnostics.

For customers who are interested in migrating from proprietary UNIX environments, we are also now offering a set of services for migration to an open server platform and an open OS.

About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and member of the IBM Academy of Technology. I am a Fellow of the British Computer Society. I'm an information technology optimist.

I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
