Archive for the 'appliances' Category

IoT App hell of the future

On the day after it was revealed that some models of the Google Home Mini speaker were recording voices 24/7 due to a defect, Danny Palmer has a thoughtful piece on ZDNet about the toxic legacy of IoT devices.

Danny is spot-on about the social and technological impact of connected devices past their support date. I've complained in the past about constantly updating apps, both adding function that slows the original device and removing function that changes, often destroys, the original value proposition of the device. But it's perhaps when the devices stop getting updates that we have the most to fear.

I have a Netgear NAS that is out of support; in fact, since I have an identical NAS that wakes up Tuesdays at 2 a.m. and backs up the primary NAS, I have two of them. While they are out of support, Netgear has been good at fixing urgent vulnerabilities. Of course, since I can't see the source, I don't know what vulnerabilities they have not fixed.

Kate and I went to see Blade Runner 2049 on opening day at the local AMC cinema. It's a bit of a thing of mine to sit through ALL, and I mean all, of the end credits. As we left the theater, there it was, right at the very bottom of the screen, unseen from the seats: the Windows XP Start button. I have no idea what projector they were using, but yes, many projectors did, and obviously still do, run Windows XP.

The app hell of the future

Just over 5 years ago, in April 2011, I wrote this post after having a fairly interesting exchange with my then boss, Michael Dell, and George Colony, co-founder and CEO of Forrester Research. I'm guessing that in the long term, the disagreement, and semi-public dissension, shut some doors in front of me.

Fast forward 5 years, and we are getting the equivalent of a do-over as the Internet of Things and "bots" become the next big thing. This arrived in my email the other day:

This year, MobileBeat is diving deep into the new paradigm that’s rocking the mobile world. It’s the big shift away from our love affair with apps to AI, messaging, and bots – and is poised to transform the mobile ecosystem.

Yes, it's the emperor's new clothes of software all over again. Marketing-led software always does this: it over-imagines what's possible, underestimates the issues with building it, and then the fail-fast product methodology kicks in. So bots will be the next bloatware, becoming a security attack front. Too much code, force-fit into micro-controllers. The ecosystem driven solely by the need to make money. Instead of tiny pieces of firmware that have a single job, wax-on, wax-off, they will become a dumping ground for lots of short-term fixes that never go away.

Meanwhile, the app hell of today continues. My phone apps update all the time, mostly with no noticeable new function; I'm required to register with loads of different "app stores", each one a walled garden with few published rules, no oversight, and little transparency. The only real source of trusted apps is GitHub and the like, where you can at least scan the source code.

When these apps update, it doesn't always go well. See this picture of my Garmin Fenix 3, a classic walled garden: my phone starts to update at 8:10 a.m., and when it's done, my watch says it's now 7:11 a.m.

Over on my Samsung Smart TV, I switch it from monitor to Smart TV mode and get this… it never ends. Nothing resolves it except disconnecting the power supply. It recovered OK, but this is hardly a good user experience.

Yeah, I have a lot of smart home stuff, but little or none of it is immune to the app upgrade death spiral; each app upgrade takes the device nearer to obsolescence because there isn't enough memory or storage, or the processor isn't fast enough, to include the bloated functions marketing thinks it needs.

If the IoT and message bots are really the future, then software engineers need to stand up and be counted. Design small, tight, reentrant code. Document the interfaces, publish the source, and instead of continuously being pushed to deliver more and more function, push back; software has got to become engineering and not a form of storytelling.
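To illustrate the kind of discipline I mean, here's a minimal sketch in Python (the function and field names are mine, purely for illustration, not from any real firmware): a sensor-reading recorder that keeps its state in module-level globals is fragile under concurrent or repeated entry, while the same job written to pass state through arguments is reentrant and trivially testable.

```python
from datetime import datetime, timezone

# Fragile style: hidden module-level state, not reentrant.
_last_reading = None

def record_reading_global(value):
    global _last_reading
    _last_reading = value  # two interleaved callers can clobber each other
    return _last_reading

# Reentrant style: all state flows through arguments and return values.
def record_reading(state, value):
    """Return a new state dict; never touches shared globals."""
    return {
        "value": value,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "previous": state.get("value") if state else None,
    }

state = {}
state = record_reading(state, 21.5)
state = record_reading(state, 22.0)
print(state["value"], state["previous"])  # 22.0 21.5
```

The second version is the "tiny piece of firmware with a single job": no hidden state to corrupt, nothing for a later short-term fix to hook into.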


Case story for Dell Software and Hardware

I've not posted much of late as I'm working on a lot of back office and process stuff, but I'm still working in Dell Software Group. I recently attended the annual Dell Patent Award dinner, where I was able to catch up with Michael Dell and my boss, John Swainson, a few other executives, and many of the great innovators and inventors.

My former boss, Dell Vice President Gerry Hackett, made an interesting point in her remarks prior to doing the roll call for her team at the dinner. She said, to the effect, that Dell was going to be the only integrated solution provider. I was surprised, but thinking it through, she was right.

When I saw this customer story about San Bernardino County School district, I thought it was worth linking here.

Of NFC, QR Code, Payments, PayPal and Reuters and vendor influence

I thought this one worth a quick blog entry, especially as it's one of the industry's dirty little, but well-known, secrets. I've been an unwilling shill a few times. After a while it gets much easier to spot them.

As part of the app store/walled garden debate that kicked off after my Q&A with George Colony, co-founder and CEO of Forrester Research, I've been staying late working on some HTML5-related topics and technologies, especially as they relate to mobile devices. One topic that has been really interesting is QR codes and how perhaps we might use them in servers. There was, much to my surprise, already a project running to use them. I've been looking at dynamically generating them, possibly for use in error codes, maintenance, service calls, etc.
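As a sketch of what I have in mind (the field names and values here are my own invention, not any shipping format): the server would pack its error state and service details into a compact payload string, which any standard QR library, such as the Python `qrcode` package, could then render on a front panel or maintenance page.

```python
import json

def service_call_payload(model, serial, error_code, firmware):
    """Pack machine-readable service-call details for QR encoding.

    QR capacity is limited (a version-10 code holds roughly 270 bytes
    of binary data), so keep keys short and values terse.
    """
    payload = json.dumps(
        {"m": model, "s": serial, "e": error_code, "f": firmware},
        separators=(",", ":"),  # no whitespace: smallest encoding
    )
    if len(payload) > 270:
        raise ValueError("payload too large for a small QR code")
    return payload

p = service_call_payload("R710", "ABC1234", "E1810", "2.4.3")
print(p)  # {"m":"R710","s":"ABC1234","e":"E1810","f":"2.4.3"}
# A QR library would then render it, e.g. qrcode.make(p).save("svc.png")
```

A field engineer scanning that code with a phone would get the model, serial, error and firmware level without ever touching a console.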

One of the follow-ons from this was the use of Near Field Communication (NFC). Ostensibly, NFC is being punted by the industry for mobile payments. It's much more interesting to me, though, for the initiation of mobile, wireless connectivity, via, say, Bluetooth. Anyway, just as I was scanning my tweetstream for today before I left, I spotted an @techmeme tweet: "PayPal is top brand for mobile payments: survey (@georginius / Reuters)".

This immediately struck me as nonsense. Linking PayPal to NFC, how so? Surely the whole point of NFC is that you have a device, and the device, or an app on the device (possibly HTML5-based), is used to charge for something, a micro-purchase, coffee, sandwich, MP3, or similar, bypassing the typical website switch-and-charge service provided by PayPal.

Thus, rather than PayPal benefiting from NFC, they actually have the most to lose and need to be as proactive as they can to ensure they are in fact not dis-intermediated in the upcoming NFC payments boom. What happens is that the NFC micro-payment is charged to the account associated with the device, or to a credit card registered to the device owner. There are some obvious issues with this, and some legal ones. Some countries are bound to have laws that restrict telcos' and wireless carriers' business, i.e. not allowing them to become banks. So rather than the carrier consuming the charge from the NFC device, aka the smart/cellphone, the charge is passed on to a credit card registered to the device owner. And this is where, from reading after seeing the tweet, PayPal want in on the act.

Now, there's the obvious issue of the device falling into the hands of an unauthorized 3rd party, but that's a whole different post. The point of this post is that there is nowhere in this process where we needed PayPal, unless I've misunderstood. PayPal need to be an early-wave adopter, or they risk being cut out completely.

I went and chased down the survey quoted by Reuters. Lo and behold, the survey, by market research firm GfK, suggests that PayPal, the eBay-owned online payment system, "could be set for a major boost as mobile payment systems start to take off over the next year". The GfK survey was, of course, funded by, err, PayPal. The Reuters piece then goes on to discuss NFC.

If in fact NFC is used as I posit above, this is a typical bait-and-switch press release, where you create confusion by associating yourself in a positive light with something that is in fact a weakness. It's done all the time; you make sure you ask the questions that get the answers you want, especially when you are paying the people asking the questions.

Now, it could be I'm completely wrong on this. Maybe someone from PayPal or GfK would like to send me a copy of the survey? It looks, though, like Reuters fell for the press release hook, line and sub-editor. Their carrying the release has meant it's gone "viral" and, as George Bush might have said, "job done!".

Dell joins Yocto project

One of the key activities here, outside of the VIS orchestration and automation engine, has been the work around our embedded software stack and where we are heading next. Today we committed to joining the Yocto project, which will be aligned with the OpenEmbedded build system.

The Linux Foundation announced today, via press release, that Dell, Cavium Networks, Freescale Semiconductor, Intel, LSI, Mentor Graphics, Mindspeed, MontaVista Software, NetLogic Microsystems, RidgeRun, Texas Instruments, Tilera, Timesys, and Wind River, among others, would collaborate on a cross-compile environment enabling the development of "a complete Linux Distribution for embedded systems, with the initial target systems being ARM, MIPS, PowerPC and x86 (32 and 64 Bit)".

I'm hopeful that this will allow our guys to continue their SDK work, allowing us to move core product technologies between chip architectures, while at the same time contributing back as we innovate around the Linux platform, building out the software build recipes and core Linux components, and preventing fragmentation.
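For those unfamiliar with how those build recipes work, here is a much-simplified, illustrative BitBake recipe (the package name, URL and checksums are invented for the example, not a real component); each recipe describes where to fetch a piece of software, how to build it, and what it provides, which is exactly the layer where cross-vendor collaboration prevents fragmentation:

```
# hello-embedded_1.0.bb -- illustrative recipe, not a real package
SUMMARY = "Example embedded utility"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://LICENSE;md5=<checksum of the license text>"

# Where to fetch the source; ${PV} expands to the version, 1.0
SRC_URI = "http://example.com/releases/hello-embedded-${PV}.tar.gz"

# Inherit shared build logic: autotools handles configure/make/install
inherit autotools
```

Because the recipe is declarative, the same description can be cross-compiled for ARM, MIPS, PowerPC or x86 simply by switching the target machine configuration.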

VIS from the top

Michael Dell recently spoke at the 2010 Gartner conference. One of the questions he was asked was about the evolutionary and revolutionary approaches to IT, most recently amplified by the cloud discussion. Michael nails it when discussing the Dell approach with our Data Center Solutions business, our PowerEdge C servers and the Virtual Integrated System aka VIS.

Deviation: The new old

104 modules in a Doepfer A-100PMD12 double case, sitting on top of the A-100PMB case

Deadmau5 analog modular setup

IBM 360/40 at Attwood Statistics


Anyone who knows me knows that I've retained a high level of interest in dance music. I guess it stems from growing up in and around London in the early 70's and the emergence of funk, and especially jazz funk, through some of the new music put together by people like Johnny Hammond (Los Conquistadors Chocolate) and Idris Muhammad (Could Heaven Ever Be Like This), which remain to this day two of my all-time favorite tracks, along with many from Quincy Jones.

Later, my interest was retained by the further exploitation of electronics as disco became the plat du jour, and although I, like most others, became disenchanted once it became metronomic and formulaic, I'm convinced that the style, type and beat of music you like and listen to create pathways in your brain that activate feelings.

And so it was that, with time and energy on my hands over the past few years, I've re-engaged with dance music. Mostly because I like it; it activates those pathways in my mind that release feel-good endorphins, and I enjoy the freedom of the dance.

I've been to some great live performances, Tiesto and Gareth Emery especially, down in San Antonio and Houston, and anyone who thinks these guys are just DJs, playing other people's music through a computer or off CDs, is just missing the point.

However, one electronic music producer more than any other has really piqued my interest: Deadmau5, aka Joel Zimmerman from Toronto. I first saw Deadmau5 during South by Southwest (SXSW) in 2008, when Joel played at the now-defunct Sky Lounge on Congress Ave. The club was small enough that you could actually stand at the side of the stage and see what he was doing; it was a fascinating insight. [In this video on YouTube, one of many from that night, not only can you see Joel "producing" music, but if you stop the video on the right frame at 16 seconds, you can see me in the audience! Who knew…]

I saw him again in March 2009 at Bar Rio in Houston. This time I had a clear line of sight to what he was doing from the VIP balcony. It was fascinating; I actually saw and heard him make mistakes, not significant mistakes, but ones that proved he was actually making live music. [You can read my review from the time here, including links to YouTube videos.] It turns out something he was using during that Houston concert was either a prototype or something similar to a monome.

Joel regularly posts and runs live video streams from his home studio, and recently posted this video of his latest analog modular system. It and some of the other videos are a great insight into how dance music producers work. Watching it this morning, I was struck by the similarities to the IBM 360/40 mainframe, the first computer I worked on. I can especially remember the first time an IBM Hardware Engineer, who might have been Paul Badger or Geoff Chapman, showed me how the system worked: how to put it into instruction step, how to display the value of registers, and so on. I felt the same way watching the Deadmau5 video. I got to get me some playtime with one of these.

And yes, the guy in the picture above is me and the 360/40. It was taken in probably the spring of 1976, I'd guess, at Attwood Statistics in Berkhampstead, Herts., UK.

The power and capacity of the IBM 360/40 are easily exceeded by handheld devices such as the Dell Streak. Meanwhile, it's clear that some music producers are headed in the opposite direction, moving from digital software to analog hardware. The new old.

Appliances – Good, bad or virtual ?

So, in another prime example of "Why do analysts' blogs make it so hard to have a conversation?", Gordon Haff of Illuminata today tweeted a link to a new blog post of his on appliances. No comments allowed, no trackbacks provided.

He takes Chuck Hollis' (EMC) post and opines various positions on it. It's not clear what the notion of "big appliance" is as Chuck uses it. Personally, I think he's talking about solutions. Yes, I know it's a fine line, but a large all-purpose data mining solution with its own storage, own server, own console, etc. is no more an appliance than a kitchen is. The kitchen will contain appliances, but it is not one itself. If that's not what Chuck is describing, then his post has some confusion; very few organizations will have a large number of these "solutions".

On the generally accepted view of appliances, I think both Gordon and Chuck are being a little naive when they think that all compute appliances can be made virtual and run on shared resource machines.

While at IBM I spent a lot of time on, and learned some valuable lessons about, appliances. I was looking at the potential for the first generation of IBM-designed WebSphere DataPower appliances. At first, it seemed to me, even 3 years ago, that turning them into a virtual appliance would be a good idea. However, I'd made the same mistake that Hollis and Haff make. They assume that the type of processing done in an appliance can be transparently replaced by the onward march of Moore's Law on Intel and IBM Power processors.

The same can be said for most appliances I've looked at. They have a unique hardware design, which often includes numerous specialized processing functions, such as encryption, key management and even environmental monitoring. Appliances' real value-add, though, is that they are designed with a very specific market opportunity in mind. That design requires complex workload analysis, reviewing the balance between general-purpose compute, graphics, security, I/O and much more, and producing a balanced design and, most importantly, a complete user experience to support it. That's often the key.

Some appliances offer the sort of hardware based security and tamper protection that can never be replaced by general purpose machines.

Yes, Hollis and Haff make a fair point that these appliances need separate management, but the real point is that many of these appliances need NO management at all. You set them up, then run them. Because the workload is tested and integrated, the software rarely, if ever, fails. Since the hardware isn't generally extensible, aka, as Chuck would have it, you are locked into what you buy, updating drivers and introducing incompatibility isn't an issue as it is with most general-purpose servers.

As for trading one headache for another, while it's a valid point, my experience so far with live migration and pools of virtual servers, network switches, SAN setup, etc. is that you are once again trading one headache for another. While in a limited fashion it's fairly straightforward to do live migration of a virtual workload from one system to another, doing it at scale, which is what is required if you've reached the "headache" point that Chuck is positing, is far from simple.

Chuck closes his blog entry with:

Will we see a best-of-both-worlds approach in the future?

Well, I'd say that was more than likely; in fact it's happening and has been for a while. The beauty of an appliance is that the end user is not exposed to the internal workings. They don't have to worry about most configuration options and setup, management is often minimised or eliminated, and many appliances today offer "phone home"-like features for upgrade and maintenance. I know; we build many of them here at Dell for our customers, including EMC, Google, etc.

One direction we are likely to see is that, within the same current form factor, an appliance will become fault tolerant by replicating key parts of the hardware, virtualizing the appliance, and running multiple copies of the appliance workload within a single physical appliance, all once again delivering those workload- and deployment-specific features and functions. This in turn reduces the number of physical appliances a customer will need. So, the best of both worlds, although I suspect that's not what Chuck was hinting at.

While there is definitely a market for virtual software stacks, complete application and OS instances, presuming that you can move all hardware appliances to this model is missing the point.

Let's not forget, SANs are often just another form of appliance, as are TOR/EOR network switches and things like the Cisco Nexus. Haff says that appliances have been around since the late 1990s; well, at least as far as I can recall, in the category of "big appliances", the IBM Parallel Query Server, which ran a customized mainframe DB2 workload and attached to an IBM S/390 Enterprise Server, was around in the early 1990s.

Before that, many devices were in fact sold as appliances; they were just not called that, but by today's definition that's exactly what they were. My all-time favorite was the IBM 3704, part of the IBM 3705 communications controller family. The 3704 was all about integrated function and a unique user experience, with, at the time (1976), an almost space-age touch panel user interface.

Reuse, recycle, repair – Oral-B disaster

Some time ago I commented on the repair status of iPhones. Like my no-longer-used Palm Treo, which was subsequently the target of a class action lawsuit over breakdowns, the iPhone has some expensive repair options, and I said at the time: "Is it unreasonable to expect the designers of one of the best gadgets in the last few years to think about how they are serviced, refurbished and disposed of? I think not. We simply can't go on forever buying stuff and dumping the old, unwanted broken stuff without regard."

Oral-B Pulsar vibrating toothbrush. Picture: Attribution-Noncommercial-Share Alike, some rights reserved by Inju

And so it was that last Friday evening I was making my usual dash up and down the aisles at the grocery store. I don't make a list; since I live alone, I can mostly look at the aisle and decide if I need to go down it.

I knew my toothbrush had reached the "sorry" stage and needed to be replaced. I'd owned one of those electric ones with the big handle that took 2x AA batteries and had two distinct heads: one rotated and the other moved up and down. Only the head wasn't really big enough for me, so I'd stopped using it.

As I glanced through the racks and racks of toothbrushes, I spotted one by Oral-B that looked like it had a good-size head. I picked it up and threw it in my shopping cart.

Yesterday morning I read this excellent post by Adobe all-around good guy Duane Nickull on how to improve Vancouver. Duane lists a staggering number of good, simple steps, including "2. Immediately ban the sale of the following items from store shelves within Vancouver:", and went on to list a number of common-sense things.

I'd like to add at least one, Duane: this Oral-B Pulsar toothbrush. This is outrageous. When I opened it, rather than having bought a regular toothbrush, I found I'd bought an electric one. Worse than that, the toothbrush couldn't be opened, and the packet specifically said that the battery was not designed to be replaced.

Given the toothbrush part isn't going to last more than 6-8 weeks, brushing twice per day for a reasonable amount of time, that means I'd be wasting 7x entirely good electric motors per year; worse still, I'd be deliberately disposing of 7x AA batteries into landfill per year, with all the environmental impact that has. It's not unreasonable to assume that Procter and Gamble will sell at least half a million of these each year; the landfill consequences of dumping those batteries are unforgivable.

I've written to Oral-B telling them that I'm boycotting their toothbrushes until they withdraw this product, or at least modify it and the instructions so the battery can be removed, and the instructions tell you to remove the battery before discarding it. Please do the same. Their contact details are here.

[Update: My email submission was assigned  ‘090127-000612’.]


A funny thing happened on the way to the forum…

Ahh yes, Nathan Lane and Frankie Howerd; they represent the differences between the UK and US, in many ways so different, but in many ways so the same. I've been bemoaning the fact that I can't blog about what I've been doing for most of the last 5 years, as it's all design and development work, all considered by IBM to be confidential, and since none of it is open source, it's hard to point to projects and give hints.

And so it is with the project I'm currently working on. Only this time, not only is it IBM Confidential, but it is being worked on with a partner and based on a lot of their intellectual property, so there's even less chance to discuss it in public. I've been doing some customer validation sessions over the last 3 months and got concrete feedback on key data center directions around data center fabric, 10Gb Ethernet, Converged Enhanced Ethernet (CEE) and more. There are certainly big gains to be made in reducing capital and operational expenditure in this space, but that's really only the start. The real benefit comes from having an enabled fabric. Rather than forcing centralization around a server, which is much of what we've been doing for the last 20 years, or forcing centralization around an ever more complex switch, which is where Cisco has been headed, the fabric is in and of itself the hub; the switches just provide any-to-any connectivity and low latency, enabling both existing and new applications, virtualized and otherwise, to exploit the fabric.

So, following one of my customer validation sessions in the UK, I was searching around on the Internet for a link. And I came across this one. It discusses a strategic partnership between IBM and Juniper for custom ASICs for a new class of Internet backbone devices, only it is from 1997. Who'da guessed. A funny thing happened on the way to the forum…

Any to any fabric

I've spent the last few months working on IBM's plans for next-generation data center fabric. It is a fascinating area, one ripe for innovation and some radical new thinking. When we were architecting On Demand, and even before that, working on the Grid Toolbox, one of the interesting future options was InfiniBand, or IB.

What made IB interesting was that you could put logic in either end of the IB connection, thus turning a standard IB connection into a custom switched connector by dropping your own code into the host channel adapter (HCA) or target channel adapter (TCA). Anyway, I'm getting off course. The point was that we could use an industry-standard protocol and connection to do some funky platform-specific things, like specific cluster support, quality of service assertion, or security delegation, without compromising the standard connection. This could be done between racks at the same speed and latency as between systems in the same rack. This could open up a whole new avenue of applications and would help to distribute work inside the enterprise, hence the Grid hookup. It never played out that way, for many reasons.

Over in the Cisco Datacenter blog, Douglas Gourlay is considering changes to his “theory” on server disaggregation and network evolution – he theorises that over time everything will move to the network, including memory. Remember, the network is the computer?

He goes on to speculate that “The faster and more capable the network the more disaggregated the server becomes. The faster and more capable a network is the more the network consolidates other network types.” and wants time to sit down and “mull over if there is an end state”.

Well, nope, there isn't an end state. First off, the dynamics of server design and environmental considerations mean that larger and larger centralized computers will still be in vogue for a long time to come. Take for example iDataPlex. It isn't a single computer, but what is these days? In their own class are also the high-end Power 6 595 servers, again not really single servers but intended to multi-process, to virtualise, etc. There is a definite trend toward row-scale computing, where additional capacity is dynamically enabled off a single set of infrastructure components, and while you could argue these are distributed computers, just within the row, they are really composite computers.

As we start to see fabric settle down and become true fabrics, rather than either storage/data connections or network connections, new classes of use and new classes of aggregated systems will be designed. This is what really changes the computing landscape: how they are used, not how they are built. The idea that you can construct a virtual computer from a network was first discussed by former IBM guru Irving Wladawsky-Berger. His Internet computer illustration was legend inside IBM and was used and re-used in presentations throughout the late 1990s.

However, just like the client/server vision of the early '90s, the distributed computing vision of the mid '90s, and Irving's Internet computer of the late 1990s, plus all those that came before and since, the real issue is how to use what you have, and what can be done better. That, for me, is the crux of the emerging world of 10Gb Ethernet, Converged Enhanced Ethernet, Fibre Channel over Ethernet, et al. Don't take existing systems and merely break them apart and network them, just because you can.

As data center fabrics allow low latency, non-blocking, any to any and point to point communication, why force traffic through a massive switch and lift system to enable this to happen? Enabling storage to talk to tapes, for networks to access storage without going via a network switch or a server, enabling server to server, server to client, device to device surely has some powerful new uses. The live dynamic streaming and analysis of all sorts of data, without having to have it pass through a server. Appliances which dynamically vet, validate and operate on packets as they pass through from one point to another.

It's this combination of powerful server computers, distributed network appliances, and secure fabric services that will make these new uses possible.

Since Douglas ended his post with a quote, I thought this apropos: "And each day I learn just a little bit more, I don't know why but I do know what for, If we're all going somewhere let's get there soon, Oh this song's got no title just words and a tune". – Bernie Taupin

Appliances, Stacks and software virtual machines

A couple of things from the "Monkmaster" this morning piqued my interest and deserved a post rather than a comment. First up was James' post on "your Sons IBM". James discusses a recent theme of his around stackless stacks and simplicity. Next up came a tweet link on CohesiveFT and their Elastic Server on Demand.

These are very timely. I've been working on an effort here in Power Systems for the past couple of months with my ATSM, Meghna Paruthi, on our appliance strategy. These are, as always with me, one layer lower than the stuff James blogs on; I deal with plumbing. It's a theme and topic I'll return to a few times in the coming weeks as I'm just about to wrap up the effort. We are currently looking for some Independent Software Vendors (ISVs) who already package their offerings in VMware or Microsoft virtual appliance formats and either would like to do something similar for Power Systems, or alternatively have tried it and don't think it would work for Power Systems.

Simple, easy-to-use software appliances which can be quickly and easily deployed into PowerVM logical partitions have a lot of promise. I'd like to have a marketplace of stackless, semi-or-total black-box systems that can be deployed easily and quickly into a partition and use existing capacity, or dynamic capacity upgrade on demand, to get the equivalent of cloud computing within a Power System. Given we can already run circa 200 logical partitions on a single machine, and are planning something in the region of 4x that for the p7-based servers with PowerVM, we need to do something about the infrastructure for creating, packaging, servicing, updating and managing them.

We've currently got six sorta-appliance projects in flight: one related to future data centers, one with WebSphere XD, one with DB2, a couple around security, and some ideas on entry-level soft appliances.

So far, OVF wrappers around the Network Installation Manager, aka NIM, look like the way to go for AIX-based appliances, with similar processes for i5/OS and Linux on Power appliances. However, there are a number of related issues about packaging, licensing, and inter- and intra-appliance communication that I'm looking for some input on. So, if you are an ISV, or a startup, or even an independent contractor who is looking at how to package software for Power Systems, please feel free to post here, or email; I'd love to engage.
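For anyone unfamiliar with the Open Virtualization Format (OVF), the idea is a vendor-neutral XML envelope describing the appliance's disk images, resource requirements and metadata. A much-simplified sketch (the appliance name and file reference are invented for illustration, and a real descriptor carries considerably more detail) looks something like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <!-- The disk image(s) shipped alongside this descriptor -->
  <References>
    <File ovf:id="disk1" ovf:href="example-appliance.img"/>
  </References>
  <VirtualSystem ovf:id="example-appliance">
    <Info>A hypothetical AIX-based software appliance</Info>
    <!-- CPU, memory and device requirements would be described
         in a VirtualHardwareSection here -->
  </VirtualSystem>
</Envelope>
```

The appeal for Power Systems is that the same descriptor shape already used for VMware and Microsoft virtual appliances could wrap a NIM-installable image for deployment into a PowerVM partition.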

About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and member of the IBM Academy of Technology. I am a Fellow of the British Computer Society. I'm an information technology optimist.

I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
