Archive for the 'complexity' Category

The Big Hack: How China Used a Tiny Chip to Infiltrate U.S. Companies – Bloomberg

This is a stunning discovery. I don’t have any insight into it beyond what’s been published here, but it’s always been a concern. I remember at least one project that acquired sample hard disk controllers (HDCs) from vendors with a view to rewriting a driver for OS cache optimization and synchronization.

I’d never actually seen inside a hard drive up to that point, except in marketing promotional materials. We were using the HDCs with different drives, and I was surprised how complex they were. We speculated about how easy it would have been to ship a larger-capacity drive and insert a chip that would use the extra capacity to write shadow copies of files, unseen by the OS. We laughed it off as too complex and too expensive to actually do. Apparently not.

Source: The Big Hack: How China Used a Tiny Chip to Infiltrate U.S. Companies – Bloomberg

APIs and Mainframes


I like to read as many American Banker tech articles as I can. Since I no longer work, I chose not to take out a subscription, so some I can read, while others are behind the subscription paywall.

This one caught my eye, as it’s exactly what we did circa 1998/99 at National Westminster Bank (NatWest) in the UK. The project was part of the rollout of a browser-based intranet banking application, as a proof of concept, to be followed by a full-blown Internet banking application. Previously, both Microsoft and Sun had tackled the project and failed. Microsoft had scalability and reliability problems, and, from memory, Sun just pushed too hard to move key components of the system to its servers, which in effect killed their attempt.

The key to any system design and architecture is being clear about what you are trying to achieve and what the business needs to do. Yes, you need a forward-looking API definition, one that can accept new business opportunities and grow with the business and the market. This is where old mainframe applications often failed.

Back in the 1960s, applications were written to meet specific and stringent tasks; performance was key. Subsecond response times were almost always the norm, as there would be hundreds or thousands of staff dependent on them for their jobs. The fact that many of those applications have survived to this day, most still on the same mainframe platform, is a tribute to their original design.

When looking at exploiting them from the web, if you let “imagineers” run away with what they “might” want, you’ll fail. You have to start by exposing the transactions and database as a set of core services based on the first application that will use them. Define your API structure to allow for growth and further exploitation. That’s what we successfully did for NatWest. The project rolled out on the internal IP network and, a year later, to the public via the Internet.

Of course we didn’t just expose the existing transactions, and yes, firewall, dispatching and other “normal” services that are part of an Internet offering were provided off platform. However, the core database and transaction monitor were behind a mainframe-based webserver, which was “logically” firewalled from the production systems via an MPI that defined the API and also routed requests.
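
As a minimal sketch of that pattern, with invented names and a canned payload rather than anything from the actual NatWest system: a versioned request envelope defines the core-service API, and a dispatcher, playing the MPI role, routes only the requests it recognizes, so anything outside the published API never reaches production.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceRequest:
    api_version: str                                # e.g. "1.0"
    service: str                                    # core service name
    account_id: str                                 # what the first app needs
    extensions: dict = field(default_factory=dict)  # room for growth

def balance_inquiry(req: ServiceRequest) -> dict:
    # In the real system this would invoke the transaction monitor;
    # here it returns a canned payload to show the response shape.
    return {"status": "ok", "account_id": req.account_id, "balance": "123.45"}

ROUTES = {"balance-inquiry": balance_inquiry}

def dispatch(req: ServiceRequest) -> dict:
    # The "logical firewall": only services in the published API are
    # routed; everything else is rejected before it reaches production.
    handler = ROUTES.get(req.service)
    if handler is None:
        return {"status": "error", "reason": "unknown service"}
    return handler(req)

print(dispatch(ServiceRequest("1.0", "balance-inquiry", "12345678")))
```

The extensions field is the growth point: later applications can carry extra data through the same envelope without breaking the first consumer.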

So I read through the article to try to understand what Shamir Karkal, the source for Barba’s article, felt the issue was. Starting at the section “Will the legacy systems issue affect the industry’s ability to adopt an open API structure?”, which began with a history lesson, I just didn’t find it.

The article wanders between a discussion of the apparent lack of a “service bus” style implementation and the ability of Amazon to sell AWS and rapidly change the API to meet the needs of its users.

The only real technology discussion in the article that I found had any merit was where they talked about screen scraping. I guess I can’t argue with that, but surely we must be beyond that now? Do banks really still have applications that are bound by their green-screen/3270 UI? That seems so 1996.
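
For anyone who has never seen it, screen scraping treats fixed row/column positions on an emulated terminal screen as if they were an API. A toy sketch, with an invented screen layout:

```python
# Minimal illustration of screen scraping: the "API" is just fixed
# row/column positions on an emulated 3270-style text screen. The
# layout and field positions below are invented for illustration.

SCREEN = (
    "ACCT INQUIRY                         PAGE 01\n"
    "ACCOUNT: 12345678   NAME: J SMITH           \n"
    "BALANCE:     1,234.56 CR                    \n"
)

def field_at(screen: str, row: int, col: int, width: int) -> str:
    """Extract a fixed-position field from a text screen buffer."""
    lines = screen.splitlines()
    return lines[row][col:col + width].strip()

balance = field_at(SCREEN, row=2, col=9, width=15)
print(balance)  # "1,234.56 CR"
```

The moment the screen layout changes, this “API” silently breaks, which is exactly why it’s a shaky foundation for open banking.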

A much more interesting report is this one on more general Open Bank APIs, especially since it takes the UK as a model and reflects on how poor US banking is by comparison. I’ll be posting a summary of my ongoing frustrations with the ACH over on my personal blog in the next few days. The key technology point here is that there is no way to have a real-time bank API, open, mainframe or otherwise, if the ACH system won’t process it. That’s America’s real problem.

Do you own the device you just bought?


Joshua Fairfield, Professor of Law at Washington and Lee University, has a great blog post that echoes exactly the sentiments I heard Richard Stallman express as his original drive for open source, way back in the 1980s.

Fairfield argues that we don’t own the devices we buy; we are merely buying a one-time license to the software within them. He makes a great case. It’s worth the read.

One key reason we don’t control our devices is that the companies that make them seem to think – and definitely act like – they still own them, even after we’ve bought them. A person may purchase a nice-looking box full of electronics that can function as a smartphone, the corporate argument goes, but they buy a license only to use the software inside. The companies say they still own the software, and because they own it, they can control it. It’s as if a car dealer sold a car, but claimed ownership of the motor.

My favorite counter-example of this is the Logitech Squeezebox network music player system I use. It was originally created by Slim Devices, founded as far back as 2000, with their first music player launched in 2001. Slim Devices was acquired by Logitech in 2006, which then abandoned the product line in 2012.

I started using Logitech Squeezebox in 2008, first buying a Squeezebox Boom, then a Radio, another Boom, and a Touch; I’ve subsequently bought a used Duet and, for my main living room, the audiophile-quality Transporter.

While there are virtually no new client players, there is a thriving community built around Raspberry Pi hardware, with client software builds, add-on audio hardware, and server builds for the Pi. I’ve hacked some temporary preferences into the code to solve minor problems, but by far the most impressive enhancements to the long-abandoned official server codebase are the extensions that keep up with changes in streaming services: BBC iPlayer radio, Spotify, DSD playback and streaming, and many more. On any normal closed-source platform, any one of these enhancements would likely have been impossible, and their absence would have made the hardware redundant for many users.

The best place to start in the Squeezebox world is over on the forums, hosted, of course, at http://forums.slimdevices.com/

When my month-old Ring video doorbell failed, it was all I could do to get Ring to respond. I spent nearly four hours on the phone with tech support. Not only did I have no control, the doorbell had stopped talking to their service, but they couldn’t really help either. After the second session with support, I just said “look, I’m done, can you send a replacement?” The tech support agent agreed they would, but ten days later I was still waiting for even a shipping notice, much less a replacement. While the doorbell still worked as a plain doorbell, none of the services, motion detection, ring notifications, were any good, as their service was unavailable to my doorbell.

You don’t have to give up control when you buy a new device. You do own the skeleton of the hardware, but you’ll have to make informed choices, and probably will give up some control, if you want to own the soul of the machine, its software.

Nobody wants to use…

Everyone wants to have everything. Bertil Muth has a great blog post on software invisibility and use, where he asserts “Nobody wants to use software”.

Bertil makes a good case for AI-driven software that senses or learns why it exists and just does what it should. Of course, building such software is hard, very hard. It’s a good read, though, with some thought-provoking points.

In the article, when discussing Amazon, he made a claim that’s worth clarifying. It’s about the “infamous” 1-Click patent. My comment is here.

“Then they [Amazon] pioneered 1-Click payment”
Actually, they didn’t; they popularized a prior method, which, after re-examination by the patent office, was restricted to online use, and only in shopping carts.

The idea of a single-click payment or financial transaction had been implemented many times before. However, prior to 1982, software patents were extremely hard to get for individual functions of so-called unique concepts, and were reserved for much broader, unique “inventions”.

In 1984, I was one of many working on Chemical Bank’s Pronto home banking system. For transfers between accounts within the bank, we implemented a 1-click action in the UI of the PC Junior version of Pronto.

As far as I’m aware, nothing from Pronto was patented, due to the high cost at the time. It wasn’t until the late 1980s that software patents started to be filed for individual methods; by the mid-90s software patents were commonplace, and their use, both defensive and offensive, sadly became routine too.

Overall though, it’s an excellent post which resonates with many of the themes of simplicity and usability I’ve argued here and elsewhere over the years.

The app hell of the future

Just over five years ago, in April 2011, I wrote this post after a fairly interesting exchange with my then boss, Michael Dell, and George Colony, founder and CEO of Forrester Research. I’m guessing that, in the long term, the disagreement and semi-public dissension shut some doors in front of me.

Fast-forward five years, and we are getting the equivalent of a do-over as the Internet of Things and “bots” become the next big thing. This arrived in my email the other day:

This year, MobileBeat is diving deep into the new paradigm that’s rocking the mobile world. It’s the big shift away from our love affair with apps to AI, messaging, and bots – and is poised to transform the mobile ecosystem.

Yes, it’s the emperor’s new clothes of software all over again. Marketing-led software always does this: it over-imagines what’s possible, underestimates the issues with building it, and then the fail-fast product methodology kicks in. So bots will be the next bloatware, becoming a new security attack front. Too much code, force-fit into microcontrollers, in an ecosystem driven solely by the need to make money. Instead of tiny pieces of firmware that each have a single job, wax-on, wax-off, they will become a dumping ground for lots of short-term fixes that never go away.

Meanwhile, the app hell of today continues. My phone apps update all the time, mostly with no noticeable new function; I’m required to register with loads of different “app stores”, each one a walled garden with few published rules, no oversight, and little transparency. The only real source of trusted apps is GitHub and the like, where you can at least scan the source code.

When these apps update, it doesn’t always go well. See this picture of my Garmin Fenix 3, a classic walled garden: my phone starts the update at 8:10 a.m., and when it’s done, my watch says it’s now 7:11 a.m.

Over on my Samsung Smart TV, I switch it from monitor to Smart TV mode and get this… it never ends. Nothing resolves it except disconnecting the power supply. It recovered OK, but this is hardly a good user experience.

Yeah, I have a lot of smart home stuff, but little or none of it is immune to the app-upgrade death spiral; each upgrade takes the device nearer to obsolescence because there isn’t enough memory or storage, or the processor isn’t fast enough, for the bloated functions marketing thinks it needs.

If the IoT and message bots really are the future, then software engineers need to stand up and be counted. Design small, tight, reentrant code. Document the interfaces, publish the source, and instead of continuously being pushed to deliver more and more function, push back; software has to become engineering, not a form of storytelling.
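
As an illustration of what “small, tight, reentrant” means, here is a toy sketch, in Python for readability, though on a microcontroller this would be C; the names and the debounce example are mine, not from any product mentioned here:

```python
# A sketch of "one small job, documented interface". The function is
# reentrant: it is a pure function of its arguments, with no module-level
# mutable state, so concurrent callers (or interrupt-style re-entry on a
# microcontroller) cannot corrupt it.

def debounce(samples: list, threshold: int = 3) -> bool:
    """Return True if the last `threshold` samples are all high (1).

    Pure function: no globals, no side effects, safe to call re-entrantly.
    """
    return len(samples) >= threshold and all(
        s == 1 for s in samples[-threshold:]
    )

print(debounce([0, 1, 1, 1]))  # True
print(debounce([1, 1, 0, 1]))  # False
```

By contrast, a version that accumulated samples in a global list would need locking to be safe, and is exactly the kind of shared state that turns firmware into a dumping ground.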


VM Backup product comparison

Dell sponsored a VM backup comparison white paper. Those who remember my early-1990s data protection work will remember the product shootouts I used to do at IBM, picking apart the features and rating the functions to make it clear which products were suitable for what.

If yours is like 52% of IT organizations, your IT stack isn’t 100% virtualized. This probably means you’re managing two backup solutions: one for virtual and another for physical. Virtual capabilities that were once cutting edge are becoming default, but how do you balance ease-of-use with staying ahead of the curve? We drew upon the best research to help you find the optimal solution for your organization.

This guide isn’t quite like those, but I read through it earlier this morning, and it’s worth reviewing if you have VMs and want to understand how to back them up and what products are out there. Yes, AppAssure is a Dell product. The paper is available without registration.

Eventually consistent

Those interested in the current debate about big data and massively parallel systems, Hadoop et al., might want to take a look at this Eventually consistent post I just wrote for my new blog; I’m not sure what I’ll post there, mostly ramblings.

And that’s a great example of the difference between eventually consistent and ACID transaction-based systems. Many, but not all, IT professionals understand this; make sure yours do. In this case it could be backend database consistency, i.e. there are multiple copies and they don’t match; or there could be multiple backend copies plus the copy in the browser cache, which may or may not match. Either way, “Houston, we have a problem.”
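
To see the failure mode in miniature, here is a toy model I made up, not any particular database: two copies of the data, where writes reach the replica only after a delay. A read from the lagging copy returns stale data, exactly the “multiple copies and they don’t match” case, whereas an ACID system would make the write visible atomically or not at all:

```python
import time

class LaggyReplicas:
    """Toy eventually-consistent store: writes reach the replica late."""

    def __init__(self):
        self.primary = {}
        self.replica = {}
        self.pending = []  # (apply_at, key, value)

    def write(self, key, value, lag=0.5):
        self.primary[key] = value  # visible on the primary immediately
        self.pending.append((time.monotonic() + lag, key, value))

    def read_replica(self, key):
        # Apply any propagation that has "arrived" by now.
        now = time.monotonic()
        still_pending = []
        for apply_at, k, v in self.pending:
            if apply_at <= now:
                self.replica[k] = v
            else:
                still_pending.append((apply_at, k, v))
        self.pending = still_pending
        return self.replica.get(key)

store = LaggyReplicas()
store.write("balance", 100)
print(store.read_replica("balance"))  # None: stale copy, doesn't match
time.sleep(0.6)
print(store.read_replica("balance"))  # 100: eventually consistent
```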


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and a member of the IBM Academy of Technology. I am a Fellow of the British Computer Society (bcs.org). I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
