Archive for the 'complexity' Category

Farewell Windows?

Not quite, and not for a long, long time. In my house we run 4x laptops with Windows 10, we have a small office computer running Windows 10, and then there is the Music Server in the basement and the media laptop buried in the TV cabinet; they also run Windows 10. So it will be a long time before we stop using it.

However, in an excellent summary of what’s been going on at Microsoft, Matthias Biehl also offers a number of organizational truisms. It’s well worth a read. Also, do yourself a favor and try the Microsoft To-Do program; I use it on Windows and Android, and it’s excellent.

culture flows from success

 

Serverless computing

I’ve been watching and reading about developments around serverless computing. I’ve never used it myself, so I have only a limited understanding. However, given my extensive knowledge of servers, firmware, operating systems, middleware and business applications, I’ve had a bunch of questions.


Many of my questions are echoed in this excellent write-up by Jeremy Daly on the recent Serverless NYC event.

For traditional enterprise-type customers, it’s well worth reviewing the notes on the issues highlighted by Jason Katzer, Director of Software Engineering at Capital One. While some attendees talk about “upwards of a BILLION transactions per month” using serverless, and that’s impressive, it’s still short of many enterprise requirements; it translates to roughly 33 million transactions per day.
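To put that number in perspective, here is a quick back-of-the-envelope calculation (a minimal sketch; the billion-per-month figure comes from the talk, the 30-day month and the peak comparison are my assumptions):

```python
# Back-of-the-envelope: what "a billion transactions per month" means per day and per second.
TRANSACTIONS_PER_MONTH = 1_000_000_000
DAYS_PER_MONTH = 30  # assumption: an average month

per_day = TRANSACTIONS_PER_MONTH / DAYS_PER_MONTH
per_second = per_day / (24 * 60 * 60)

print(f"{per_day:,.0f} transactions/day")        # ~33,333,333 per day
print(f"{per_second:,.0f} transactions/second")  # ~386 per second, on average

# A large bank or card network routinely handles thousands of transactions
# per second at peak (illustrative figure), which is why a billion per month
# is still short of many enterprise requirements.
```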

Katzer notes that there are always bottlenecks, and often services that don’t scale the same way your serverless apps do. Worth a read; thanks for posting, Jeremy.

The Big Hack: How China Used a Tiny Chip to Infiltrate U.S. Companies – Bloomberg

This is a stunning discovery. I don’t have any insight into it except what’s been published here. However, it’s always been a concern. I remember at least one project that acquired a sample of hard disk controllers (HDC) from vendors with a view to rewriting a driver for OS cache optimization and synchronization.

I’d never actually seen inside a hard drive up to that point, except in marketing promotional materials. We were using the HDC with different drives, and I was surprised how complex they were. We speculated how easy it would have been to ship a larger-capacity drive and insert a chip that would use the extra capacity to write shadow copies of files that were unseen by the OS. We laughed it off as too complex and too expensive to actually do. Apparently not.

Source: The Big Hack: How China Used a Tiny Chip to Infiltrate U.S. Companies – Bloomberg

APIs and Mainframes


I like to try to read as many American Banker tech’ articles as I can. Since I don’t work anymore, I chose not to take out a subscription, so some I can read; others are behind their subscription paywall.

This one caught my eye, as it’s exactly what we did circa 1998/99 at National Westminster Bank (NatWest) in the UK. The project was part of the rollout of a browser-based Intranet banking application, as a proof of concept, to be followed by a full-blown Internet banking application. Previously both Microsoft and Sun had tackled the project and failed. Microsoft had scalability and reliability problems, and from memory, Sun just pushed too hard to move key components of the system to its servers, which in effect killed their attempt.

The key to any system design and architecture is being clear about what you are trying to achieve, and what the business needs to do. Yes, you need a forward looking API definition, one that can accept new business opportunities, and one that can grow with the business and the market. This is where old mainframe applications often failed.

Back in the 1960s, applications were written to meet specific and stringent tasks; performance was key. Subsecond response times were almost always the norm’, as there would be hundreds or thousands of staff dependent on them for their jobs. The fact that many of those applications have survived to this day, most still on the same mainframe platform, is a tribute to their original design.

When looking at exploiting them from the web, if you let “imagineers” run away with what they “might” want, you’ll fail. You have to start with exposing the transaction and database as a set of core services based on the first application that will use them. Define your API structure to allow for growth and further exploitation. That’s what we successfully did for NatWest. The project rolled out on the internal IP network, and a year later, to the public via the Internet.

Of course we didn’t just expose the existing transactions, and yes, firewall, dispatching and other “normal” services that are part of an Internet service were provided off platform. However, the core database and transaction monitor were behind a mainframe-based web server, which was “logically” firewalled from the production systems via an MPI that defined the API and also routed requests.
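To illustrate the shape of that kind of layer, here is a minimal sketch, not the NatWest code: a small, versioned routing table that maps externally exposed operations to named back-end transactions, so new consumers can be added without touching the core systems. All the operation names and transaction codes below are hypothetical.

```python
# Sketch of an API routing layer in front of mainframe transactions.
# Transaction codes and operations are made up for illustration only.

API_ROUTES = {
    # (api_version, operation) -> back-end transaction name
    ("v1", "get_balance"):   "ACCT_BAL_INQ",
    ("v1", "list_payments"): "PAYM_LIST",
    ("v2", "get_balance"):   "ACCT_BAL_INQ",  # same transaction, newer API version
    ("v2", "make_payment"):  "PAYM_CREATE",   # new capability, v1 callers unaffected
}

def invoke_transaction(txn: str, payload: dict) -> dict:
    # Stand-in for the call across the "logical firewall" to the transaction monitor.
    return {"status": "ok", "transaction": txn, "echo": payload}

def dispatch(version: str, operation: str, payload: dict) -> dict:
    """Route an external API call to the named core transaction."""
    txn = API_ROUTES.get((version, operation))
    if txn is None:
        return {"status": "error", "reason": f"unknown operation {operation!r} in {version}"}
    return invoke_transaction(txn, payload)

if __name__ == "__main__":
    print(dispatch("v1", "get_balance", {"account": "12345678"}))
    print(dispatch("v2", "make_payment", {"from": "12345678", "to": "87654321", "amount": 10}))
```

The point of the routing table is the same as the point of the MPI: the externally visible API can grow version by version while the core transactions stay put.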

So I read through the article to try to understand what Shamir Karkal, the source for Barba’s article, felt the issue was. Starting at the section “Will the legacy systems issue affect the industry’s ability to adopt an open API structure?”, which began with a history lesson, I just didn’t find it.

The article wanders between a discussion of the apparent lack of a “service bus” style implementation, and the ability of Amazon to sell AWS and rapidly change the API to meet the needs of its users.

The only real technology discussion in the article that I found had any merit was where they talked about screen scraping. I guess I can’t argue with that, but surely we must be beyond that now? Do banks really still have applications that are bound by their greenscreen/3270 UI? That seems so 1996.

A much more interesting report is this one on more general Open Bank APIs. Especially since it takes the UK as a model and reflects on how poor US Banking is by comparison. I’ll be posting a summary on my ongoing frustrations with the ACH over on my personal blog sometime in the next few days. The key technology point here is that there is no way to have a realtime bank API, open, mainframe or otherwise, if the ACH system won’t process it. That’s America’s real problem.

Do you own the device you just bought?


Joshua Fairfield, Professor of Law at Washington and Lee University, has a great blog post that echoes exactly the same sentiments I heard Richard Stallman express when explaining his original drive for open source, way back in the 1980s.

Fairfield argues that we don’t own the devices we buy, we are merely buying a one-time license to the software within them. He makes a great case. It’s worth the read.

One key reason we don’t control our devices is that the companies that make them seem to think – and definitely act like – they still own them, even after we’ve bought them. A person may purchase a nice-looking box full of electronics that can function as a smartphone, the corporate argument goes, but they buy a license only to use the software inside. The companies say they still own the software, and because they own it, they can control it. It’s as if a car dealer sold a car, but claimed ownership of the motor.

My favorite counter-example of this is the Logitech Squeezebox network music player system I use. Originally created by Slim Devices as far back as 2000, with their first music player launched in 2001, Slim Devices was acquired by Logitech in 2006, and Logitech then abandoned the product line in 2012.

I started using Logitech Squeezebox in 2008, first by buying a Squeezebox Boom, then a Radio, another Boom and a Touch, and have subsequently bought a used Duet and, for my main living room, the audiophile-quality Transporter.

While there are virtually no new client players, there is a thriving community built around the Raspberry Pi hardware, with both client software builds and add-on audio hardware, as well as server builds for the Pi. I’ve hacked some temporary preferences into the code to solve minor problems, but by far the most impressive enhancements to the long-abandoned, official server codebase are the extensions that keep up with changes in streaming services like BBC iPlayer radio, Spotify, DSD playback and streaming, and many more. On any normal, closed-source platform any one of these enhancements would likely have been impossible, and for many users that would have made the hardware redundant.

The best place to start in the Squeezebox world is over on the forums, hosted, of course, at http://forums.slimdevices.com/

When my 1-month-old Ring video doorbell failed, it was all I could do to get Ring to respond. I spent nearly 4-hours on the phone with tech support. Not only did I have no control (the doorbell had stopped talking to their service), but they couldn’t really help. After the second session with support, I just said “look, I’m done, can you send a replacement?” The tech support agent agreed they would, but 10-days later I was still waiting for even a shipping notice, much less a replacement. While the doorbell worked as a doorbell, none of the services, motion detection or doorbell-ring notifications were any good, as their services were unavailable to my doorbell.

You don’t have to give up control when you buy a new device. You do own the skeleton of the hardware, but you’ll have to make informed choices, and probably will give up some control, if you want to own the soul of the machine: its software.

Nobody wants to use…

Everyone wants to have everything. Bertil Muth has a great blog on software invisibility and use, where he asserts “Nobody wants to use software”.

Bertil makes a good case for AI-driven software that senses or learns why it exists, and just does what it should. Of course building such software is hard, very hard. It’s a good read though, with some thought-provoking points.

In the article, when discussing Amazon, he made a claim that is worth clarifying. It’s about the “infamous” 1-Click patent. My comment is here.

“Then they [Amazon] pioneered 1-Click payment”
Actually they didn’t; they popularized a prior method, which after re-examination by the patent office was restricted to online use, only in shopping carts.

The idea of a single-click payment or financial transaction had been implemented many times before; however, prior to 1982, software patents were extremely hard to get for individual functions of so-called unique concepts, and were reserved for much broader, unique “inventions”.

In 1984, I was one of many working on Chemical Bank’s Pronto home banking system. For transfers between accounts within the bank, we implemented a 1-click transfer in the UI of the PC Junior version of Pronto.

As far as I’m aware, nothing from Pronto was patented, due to the high cost at the time. It wasn’t until the late 1980s that software patents started to be filed for individual methods; by the mid-90s software patents had become commonplace, and their use, both defensive and offensive, sadly became commonplace too.

Overall though, it’s an excellent post which resonates with many of the themes of simplicity and usability I’ve argued here and elsewhere over the years.

The app hell of the future

Just over 5-years ago, in April 2011, I wrote this post after having a fairly interesting exchange with my then boss, Michael Dell, and George Colony, founder and CEO of Forrester Research. I’m guessing that, in the long term, the disagreement and semi-public dissension shut some doors in front of me.

Fast forward 5-years, and we are getting the equivalent of a do-over as the Internet of Things and “bots” become the next big thing. This arrived in my email the other day:

This year, MobileBeat is diving deep into the new paradigm that’s rocking the mobile world. It’s the big shift away from our love affair with apps to AI, messaging, and bots – and is poised to transform the mobile ecosystem.

Yes, it’s the emperor’s new clothes of software all over again. Marketing-led software always does this: it over-imagines what’s possible, under-estimates the issues with building it, and then the fast-fail product methodology kicks in. So bots will be the next bloatware, becoming a new security attack surface. Too much code, force-fitted into micro-controllers. The ecosystem driven solely by the need to make money. Instead of tiny pieces of firmware that have a single job, wax-on, wax-off, they will become a dumping ground for lots of short-term fixes that never go away.

Meanwhile, the app hell of today continues. My phone apps update all the time, mostly with no noticeable new function; I’m required to register with loads of different “app stores”, each one a walled garden with few published rules, no oversight, and little transparency. The only real source of trusted apps is GitHub and the like, where you can at least scan the source code.

When these apps update, it doesn’t always go well. See this picture of my Garmin Fenix 3, a classic walled garden: my phone starts the update at 8:10 a.m., and when it’s done, my watch says it’s now 7:11 a.m.

Over on my Samsung Smart TV, I switch it from monitor to Smart TV mode and get this… it never ends. Nothing resolves it except disconnecting the power supply. It recovered OK, but this is hardly a good user experience.

Yeah, I have a lot of smart home stuff, but little or none of it is immune to the app-upgrade death spiral; each app upgrade takes the device nearer to obsolescence because there isn’t enough memory or storage, or the processor isn’t fast enough, to include the bloated functions marketing thinks it needs.

If the IoT and message bots are really the future, then software engineers need to stand up and be counted. Design small, tight, reentrant code. Document the interfaces, publish the source, and instead of continuously being pushed to deliver more and more function, push back; software has got to become engineering and not a form of storytelling.


VM Backup product comparison

Dell sponsored a VM backup comparison white paper. Those who remember my early 1990s data protection work will remember the product shootouts I used to do at IBM, picking apart the features and rating the functions to make it clear which products were suitable for what.

If yours is like 52% of IT organizations, your IT stack isn’t 100% virtualized. This probably means you’re managing two backup solutions: one for virtual and another for physical. Virtual capabilities that were once cutting edge are becoming default, but how do you balance ease-of-use with staying ahead of the curve? We drew upon the best research to help you find the optimal solution for your organization.

This guide isn’t quite like those, but I had a read through earlier this morning and it’s worth reviewing if you have VMs and want to understand how to back up and what products are out there. Yes, AppAssure is a Dell product. The paper is available without registration.

Eventually consistent

Those interested in the current debate about Big Data and massively parallel systems, Hadoop et al., might want to take a look at this Eventually Consistent post I just wrote for my new blog, where I’m not sure what I’ll post; mostly ramblings.

And that’s a great example of the difference between eventually consistent and ACID transaction-based systems. Many, but not all, IT professionals understand this. Make sure yours does. In this case it could be backend database consistency, i.e. there are multiple copies and they don’t match, or there could be multiple backend copies and the copy in the browser cache does or doesn’t match. Either way, “Houston, we have a problem”.
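As a toy illustration of the difference, not any particular product, here is a sketch of why an eventually consistent read can disagree with what was just written; an ACID transaction would never expose the intermediate state.

```python
# Toy sketch: with asynchronous replication, a replica can serve stale reads
# until replication catches up. Names and structure are illustrative only.

class EventuallyConsistentStore:
    def __init__(self):
        self.primary = {}
        self.replica = {}
        self._pending = []  # writes acknowledged but not yet replicated

    def write(self, key, value):
        self.primary[key] = value        # acknowledged immediately
        self._pending.append((key, value))

    def read_from_replica(self, key):
        return self.replica.get(key)     # may be stale or missing

    def replicate(self):
        # In a real system this happens "eventually", e.g. on a timer.
        for key, value in self._pending:
            self.replica[key] = value
        self._pending.clear()

store = EventuallyConsistentStore()
store.write("balance:12345", 100)
print(store.read_from_replica("balance:12345"))  # None - the replica hasn't caught up
store.replicate()
print(store.read_from_replica("balance:12345"))  # 100 - now consistent
```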

Why is complexity bad?

In an internal meeting here this morning, I had another “rant” about unnecessary complexity in a design. One of the guys in the meeting wrote down what I said, pretty much verbatim, and sent it to me afterwards asking if he could use it as a quote. When I read it, even I was surprised by the clarity.

“Complexity in computing systems is really a bad thing, it’s the result of too many bright people making misguided judgements about what customers want, and customers thinking that their need to control has to come from complexity. Complexity creates cost, bugs, inhibits design, makes testing overly expensive, hinders flexibility and more. Most IT companies’ design approach to complexity is to automate it, which in turn creates more complexity.”

Comments?

Customer service – You’ve been Zappos’d

When I first ordered from Zappos.com and they screwed up the packaging, cramming a $200+ jacket into a shoe box, so much so that I had to have it professionally steamed to get the creases out, I was prepared to forgive them. After another order they put me on their VIP list, with free shipping both ways [read: shipping included in the price, since they are anything but cheap]. Zappos is an Amazon.com business.

My 3rd order was for some shoes: I ordered a 12, they shipped an 8. I returned them for free; instead of a refund, I got a credit note. I’d have happily accepted the right size, but they didn’t have them. I did do at least one more order, but have backed off recently.

Then late last week I got an email telling me they’d been hacked, some of my data and my password had been compromised, they’d reset my password, and I should log on and change it. So I tried. Their system responded: “We are so sorry, we are currently not accepting international traffic. If you have any questions please email us at help@zappos.com”.

Here is my summary email sent back to them today. What’s clear is that their customer service, average under normal circumstances, is less than what I’d expect, VIP or not.

“No wonder you got hacked. Let’s recap, please read carefully…

1. You got hacked
2. You write to me telling me to change my password
3. Your system won’t let me change my password because I’m overseas attending my father’s funeral.
4. I ask you to remove my account and ALL my data
5. You write back telling me to change my password
6. I write back telling you that wasn’t what I asked, and to delete my account and remove all my data
7. You write back telling me to deactivate my own account
8. I can’t. See #3
9. I write this email back pointing out how useless you are.”

Simplicity – It’s a confidence trick

My friend, foil and friendly adversary James Governor posted a blog entry today entitled “What if IBM Software Got Simple?”

It’s an interesting and appealing topic. It was in some respects what got in our way last year; it was also what was behind the IBM Autonomic Computing initiative of 2001: let’s just make things that work. It’s simple to blame the architects and engineers for complexity, and James is bang-on when he says “When I have spoken to IBM Distinguished Engineers and senior managers in the past they have tended to believe that complexity could be abstracted”.

There are two things at play here, both apply equally to many companies, especially in the systems management space, but also in the established software marketplace. I’m sure James knows this, or at least had it explained. If not, let me have a go.

On Complexity

Yes, in the past software had to be complex. It was widely used and installed on hundreds of thousands of computers, often as much as ten years older than the current range of hardware. It was used by customers who had grown up over decades with specific needs, specific tools and specific ways of doing things. Software had to be upgraded pretty much non-disruptively; even at release and version boundaries you pretty much had to continue to support most if not all of the old interfaces, applications, internal data formats and APIs.

If you didn’t, you had a revolt on your hands in your own customer base. I can cite a few outstanding examples of where the software provider misunderstood this and learned an important lesson each time; I would also go as far as to suggest that the product release marked the beginning of the end. VM/SP R5, where IBM introduced a new, non-compatible, non-customer-led UI; VM/XA Migration Aid, where IBM introduced a new, non-compatible CMS lightweight VM OS; and of course, from the x86 world, Microsoft Vista.

For those products a decision was taken at some point in the design to be non-compatible, to drop old interfaces or to deliberately break them to support the new function or architecture. This is one example of where change brings complexity; the other is where you choose to remain compatible and carry the old interfaces and APIs. This means that everything from the programming interface to the tools, compilers, debuggers etc. now has to support either two versions of the same thing, or one version that performs differently.
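As a minimal sketch of what carrying the old interface looks like in practice (the function names and mapping below are made up, not from any of those products): the old entry point stays alive as a thin wrapper over the new one, and from then on every change has to be tested through both paths.

```python
# Sketch: keeping an old interface alive alongside its replacement.
# Function names, parameters and the dataset mapping are hypothetical.

def read_record_v2(dataset: str, key: str, *, encoding: str = "utf-8") -> bytes:
    """The new interface: explicit dataset names, configurable encoding."""
    # ...real data access would go here...
    return f"{dataset}:{key}".encode(encoding)

def read_record(file_number: int, key: str) -> bytes:
    """The old interface, preserved for existing callers.

    It has to be documented, maintained and tested forever,
    even though it now just maps onto the new call.
    """
    legacy_datasets = {1: "CUSTOMER.MASTER", 2: "ACCOUNT.MASTER"}  # assumed mapping
    return read_record_v2(legacy_datasets[file_number], key)

# Old callers keep working...
print(read_record(1, "000123"))
# ...while new callers use the richer interface.
print(read_record_v2("CUSTOMER.MASTER", "000123", encoding="ascii"))
```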

Either way, when asked to solve a problem introduced by these changes over a number of years, the only real option is to abstract. As I’ve said here many times, automating complexity doesn’t make things simple, it simply makes them more complex.

On Simplicity

Simplicity is easy when you have nothing. Get two sticks, rub them together and you have a fire. It’s not so easy when you’ve spent 25-years designing and building a nuclear power station. What do I need to start a fire?

Simplicity is a confidence trick. Know your customers, know your market, ask what it will take to satisfy both, and stick to this. The less confident you are about either, the more scope creep you’ll get, and the less specific you’ll be about pretty much every phase of the architecture, the design and ultimately the product. In the cloud software business this is less of an issue; you don’t have releases per se. You roll out function, and even if you are not in “Google perpetual beta mode” you don’t really have customers on back releases of your product, and you are mostly not waiting for them to upgrade.

If you have a public API you have to protect and migrate that, but otherwise you take care of the customers’ data, and as you push out new function, they come with you. Since they don’t have to do anything, and for many of the web 2.0 sites we’ve all become used to, don’t have any choice or advance notice, it’s mostly no big deal. However, there is still a requirement for someone who knows the customer and knows what they want. In the web 2.0 world that’s still the purview of a small cadre of top talent: Zuckerberg, Jobs, Williams, Page, Schmidt, Brin et al.

The same isn’t true for those old-world companies, mine included. There are powerful groups and executives who have a vested interest in what and how products are designed, architected and delivered. They know their customers, their markets and what it will take to serve them. This is how old-school software was envisaged: a legacy, a profit line, even a control point.

The alternative to complexity is to stop and either start over, or at least, over multiple product cycles, go back and take out all the complexity. This brings with it a multi-year technical debt, and often a negative op-ex, that most businesses and product managers are not prepared to carry. It’s simpler, easier and often quicker to acquire and abandon. In with the new, out with the old.

Happy New Year! I Need…

Are PDFs where information goes to die?

An extract from the PSH Terminal 4 PDF map (Terminal 4, A-gates)

Yesterday I wrote a rant on my triathlon and travel blog about an “information” problem I’d had at Phoenix Sky Harbor airport; you can read it here. Really, my point was simply that with mobile and tablet devices, PDFs are a hugely restrictive platform for the display of information.

Since PDFs cannot really be dynamically updated, and are almost always a copy of some information created and stored elsewhere, they are often overlooked when the information is updated. Now that Adobe is moving away from Flash for websites, it’s about time that websites abandoned PDFs, especially for simple graphics like this. While there remains some justification for using them as vehicles to transfer facsimile or “exact” copies of documents, for the most part, as a form of information display, the PDF is a place information goes to die.

Windows 8 and is change ever good?

The tech sector thrives on change; it is what lets the next generation discover the mistakes of the older generation, except in a new context. It is also why there are still thousands of new patents every year: same invention, different context and use. People in all walks of life seem to be afraid of change. Just recently the South Congress merchants association fought the city of Austin, as they felt the changes would harm their businesses, and drivers complained because it changed the “user interface”. Yet a month or so on, it seems to be working perfectly.

And so it’s no surprise to find Microsoft having to reassure people over the upcoming UI change in Windows 8. This reminds me of almost every other big change: making sure people know you have not forgotten or overlooked what is important for them.

And so it will be with Windows 8. I had a version of the Metro UI installed for a while, but I never really got to use it much. None of my apps exploited it, and I never really put in any time to learn how to operate it with a mouse, since I don’t have a touchscreen laptop; apparently that’s the same as Mary-Jo. Introducing new interfaces, either user or programming, is always problematical. Ultimately something will end up going into “sustaining mode” and become pure cost to maintain compatibility. The only question is which it will be, the new or the old?

And there’s the rub: maintaining two entirely different and, to a degree, incompatible sets of interfaces is an entirely different game. When they are on the same platform, even more so. The question is, will there be enough benefit over time to drive PC users to exploit the new interface, or should Microsoft have just gone with the new UI for the new platform/form-factor tablets?

This is what Apple has done fabulously well: picking the form-factor device and building around it. As I’ve posited a few times in the last week, Steve Jobs wasn’t the best innovator; he didn’t deliver any earth-shattering new technology. What Apple did under his recent reign was to deliver on a set of previously established technologies, but deliver them in such a way that the user experience was as good as it could be, even when that meant forcing change.

An interesting question for all those change-loving technologists: are we reaching a point where the technology is good enough, and getting it right is more important than changing it?

I guess that depends on what change is. I’ve pretty much nailed my colors to the mast: simplification isn’t change; removing complexity is one of the most important things we can do, and complexity is one of the biggest barriers to entry.

Simplicity versus, well, non-simplicity

I’ve had an interesting week. Last Friday my corporate Blackberry Torch, which was only 2-months old, was put in a ziploc bag with my name on it, and I was given a Dell Venue Pro phone with Windows Phone 7 in its place. I’ve written a detailed breakdown of what I liked and didn’t like. The phone itself is pretty rock solid, well designed, a nice size and weight, etc., with a great screen. Here is a video review which captures my views on the phone itself, a great piece of work from Dell.

What is interesting, though, is the Windows Phone software. Microsoft has obviously put a lot of time and effort into the user interface and design experience. Although it features the usual finger-touch actions we’ve come to expect, the UI itself, and the features it exposes, have been carefully designed to make it simple to do simple things. There really are very few things you can change or alter: almost no settings, only very minimal menu choices, etc.

What makes this interesting for me is that this is exactly the approach we’ve taken with our UI. When trying to take a 79-step process, involving 7x different products, and simplify and automate it, it would be easy to make every step really complicated and just reduce the number of steps. However, all that does is mean there would be more chance of getting something wrong at each step; my experience with this type of design is that not only is the human operator more likely to make a mistake, but the number of options, configurations and choices drives up the complexity, testing costs become prohibitive, and eventually mistakes are made. Unexpected combinations are not tested, and tests are only run in orthogonal configurations.
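As a rough illustration of why the testing cost explodes (the step and option counts below are made-up numbers, not from our product), the number of combinations grows multiplicatively with every choice you leave exposed, which is exactly why removing choices beats automating them.

```python
# Rough illustration: test combinations grow multiplicatively with exposed options.
from math import prod

# Hypothetical: 7 products in the flow, each exposing a few user-visible choices.
options_per_step = [4, 3, 5, 2, 6, 3, 4]  # made-up numbers

print(f"Combinations to test exhaustively: {prod(options_per_step):,}")  # 8,640

# Strip most of the choices away ("simple to do simple things"):
simplified = [1, 1, 2, 1, 2, 1, 1]
print(f"After simplification: {prod(simplified):,}")  # 4
```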

Back when the Autonomic Computing initiative was launched some 10-years ago at IBM, there seemed to be two diametrically opposed desires. One desire was to simplify technology; the other was to make systems self-managing. The problem with self-management is that, in many cases, it introduces an additional layer to automate and manage the existing complexity. To make this automation more flexible and more adaptable, the automation was made more sophisticated and thus more complex. The IBM Autonomic Computing website still exists, and while I’m sure the research has moved on, as have the products, the mission and objectives are the same.

Our Virtual Integrated System work isn’t anywhere near as grandiose. Yet, in a small way, it attempts to address what’s at the core of IBM’s Autonomic Computing: how to change the way we do things, how to be more efficient and effective with what we have. And that takes me back to Windows Phone 7. It’s great at what it does, but as a power user, it doesn’t do enough for me. I guess what I’m hoping at this point is that we’ll create a new category of system: neither simple nor complex, it does what you want, the way you want it, but with flexibility. We’ll see.


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and a member of the IBM Academy of Technology. I am a Fellow of the British Computer Society (bcs.org). I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
