Digital Copiers, Faxes and MFPs and their hard drives

I'm a subscriber to long-time UK tech journalist and blogger Charles Arthur's (@charlesarthur) Overspill blog, where he curates links and commentary. Recently, he linked to an old report from 2010, but it's always worth reminding people of the dangers of photocopiers, fax machines and multi-function printers, especially older ones.

Copiers that are lightly used often have a lifecycle of 10-15 years. If you buy rather than lease, it's quite possible you still have one that doesn't include encryption of the internal hard drive. Even with an encrypted drive, there is still the potential to hack the device software and retrieve the key, although that's pretty difficult.

The surprising thing is that many modern multi-function printers (MFPs) also have local storage. While in modern models it is not an actual hard drive, it is likely to be some form of onboard flash memory, similar to cell phone memory, either part of the system board or on an embedded SD card. It's worth remembering that these machines are fax, copier, printer, and scanner all in one.

The US Federal Trade Commission has a web page that covers all the basics, in plain language.

Whatever the device, it is still incumbent on the owner to ensure it is wiped before returning it, selling it, or scrapping it. PASS IT ON!
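
For the mechanically inclined, here is a minimal sketch, in Python, of what "wiping" means in practice. It assumes the drive (or a dump of the MFP's flash memory) can be removed and attached to another machine as a file or device; the path is purely hypothetical, and where the manufacturer provides a built-in erase or overwrite feature, that, or a certified sanitisation tool, is the better option.

```python
# A minimal sketch, NOT a certified data-sanitisation tool. It assumes the
# drive (or a dump of the MFP's flash) has been attached to another machine
# and is visible as a file or device; the path below is purely hypothetical.
import os

def overwrite(path: str, chunk_size: int = 1024 * 1024) -> None:
    """Overwrite every byte of the target with zeros, then with random data."""
    with open(path, "r+b") as f:
        f.seek(0, os.SEEK_END)
        size = f.tell()                   # works for files and block devices
        for use_random in (False, True):  # pass 1: zeros, pass 2: random
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(chunk_size, remaining)
                f.write(os.urandom(n) if use_random else b"\x00" * n)
                remaining -= n
            f.flush()
            os.fsync(f.fileno())

if __name__ == "__main__":
    overwrite("/tmp/copier-drive.img")    # hypothetical drive image path
```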

For those interested in how you can get data from a copier/MFP type device, the Marshall University Forensic Science team has a paper, here.

Open Distributed Challenges – Words Matter

I had an interesting exchange with Dez Blanchfield from Australia on Twitter recently. At the time, based on his tweets, I assumed Dez was an IBM employee. He isn't, and although our paths crossed briefly at the company in 2007, as far as I'm aware we never met.

The subject was open vs open source. Any long-time readers will know that's part of what drove me to join IBM in 1986: to push back on the closing of doors, and to help knock down walls in IBM openness.

At the end of our Twitter exchange (the first three tweets are included above), I promised to track down one of my earlier papers. As far as I recall, and without going through piles of hard copy in storage, this one was formally published by IBM US under a similar name, with pretty much identical content, probably in the Spring of '96.

It is still important to differentiate between de jure and de facto standards. Open source creates new de facto standards every day, through wide adoption and implementation of that open source. While systems move much more quickly these days, at Internet speed, there is still a robust need for de jure standards, those that are legally, internationally and commonly recognised, whether or not they were first implemented through open source. Most technology standards these days are implemented through open source first, as that's the best way to get them through standards organizations.

The PDF presented here is original, unedited, just converted to PDF from Lotus Word Pro.

Lotus Word Pro, and its predecessor, Ami Pro, are great examples of de facto standards, especially inside IBM. Following the rise of Microsoft Word and MS Office, Lotus products on the desktop effectively disappeared. Since the Lotus source code was never available, even inside IBM, not only were the products only a de facto standard, they were never open source. In the post-Lotus desktop software period, considerable effort has been put into reverse engineering the file formats, and while there are some free and chargeable converters, almost all of them can recover the text but most do a poor job of formatting.

For that reason, I bought a used IBM Thinkpad T42 with Windows XP and Lotus SmartSuite, and I still have a licensed copy of Adobe Acrobat to create PDFs. Words matter: open source, open, and open standards are all great. As always, understand the limitations of each.

There are a load of my newer white papers in the Wayback Machine; if you have any problems finding them, let me know and I'll jump-start the Thinkpad T42.

Annual IBM Shareholder Meeting

[Image: ibm-i-death-star. Picture: (C) Nick Litten]

Remembering the dawn of the open source movement

and this isn’t it.

[Image: Me re-booting an IBM System 360/40 in 1975]

When I first started in IT in 1974, or data processing as it was called back then, open source was the only thing. People were already depending on it, and defending their right to access source code.

I'm delighted with the number and breadth of formal organizations that have grown up around "open source". They are a great thing. Strength comes in numbers, as does recognition and bargaining power. Congratulations to the Open Source Initiative and everything they've achieved in their 20 years.

I understand the difference between closed source, (restrictive) licensed source code, free source, open source etc. The point here isn’t to argue one over the other, but to merely illustrate the lineage that has led to where we are today.

Perhaps one of the more significant steps in the modern open source movement was the creation in 2000 of the Open Source Development Labs (OSDL), which in 2007 merged with the Free Standards Group (FSG) to become the Linux Foundation. But of course source code didn't start there.

Some people feel that the source code fissure was opened when Linus Torvalds released his Linux operating system in 1991 as open source, while Linus and many others think the work by Richard Stallman on the GNU Toolset and GNU License, started in 1983, was the first step. Stallman's determined advocacy for source code rights and source access was certainly a big contributor to where open source is today.

But it started way before Stallman. Open source can not only trace its roots to two of the industry's behemoths, IBM and AT&T; the original advocacy came from them too. Back in the early 1960's, open source was the only thing. There wasn't a software industry per se until the US Government invoked its antitrust laws against IBM and AT&T, eventually forcing them, among other things, to unbundle their software and make it separately available, along with many other related conditions.

’69 is the beginning, not the end

The U.S. vs. I.B.M. antitrust case started in 1969, with the trial commencing in 1975(1). The case was specifically about IBM blocking competitive hardware makers from getting access, and customers from being able to run competitive systems, primarily S/360 architecture, using IBM software.

In the years leading up to 1969, customers had become increasingly frustrated and angry at IBM's policy of tying its software to its hardware. Since all the software at that time was available as source code, what that really meant was that a business HAD to have one IBM computer to get the source code; it could then purchase a plug-compatible manufacturer's (PCM) computer(2), compile the source code with the manufacturer's assembler and tools, then run the binaries on the PCM systems.

IBM made this increasingly harder as the PCM systems became more competitive. Often, large, previously IBM-only users, who would have 2, 4, sometimes even 6 IBM S/360 systems costing tens of millions of dollars, would buy a single PCM computer. The IBM on-site systems engineers (SEs) could see the struggles of the customer, and along with the customers themselves, started to push back against the policy. The SE job was made harder the more their hands were tied, and the more restrictions were put on the source code.

To SHARE or not to?

For the customers in the US, one of their major user groups, SHARE, had vast experience in source code distribution; its user-created content and tools tapes were legend. What most never knew is that back in 1959, with General Motors, SHARE had its own operating system for the IBM 709 mainframe, the SHARE Operating System (SOS).

At that time there were formal support offerings of on-site SEs who would work on problems and defects in SOS. But by 1962, IBM had introduced its own 7090 operating system, which was incompatible with SOS, and at that point IBM withdrew support by its SEs and Program Support Representatives (PSRs) for work on SOS.

1965 is, to the best of my knowledge, when the open source code movement, as we know it today, started

Stallman's experience with a printer driver mirrors exactly what had happened some 20 years before: the removal of source code, and the inability to build working modifications to support a business initiative, using hardware and software ostensibly already owned by the customer.

IBM made it increasingly harder to get the source code, until the antitrust case. By that time, many of IBM's customers had created, and depended on, small and large modifications to IBM source code.

Antitrust outcomes

[Image: Computerworld press cutting, IBM OCO]

By the mid-70's, one of the results of years of litigation and consent decrees in the United States was that IBM had been required to unbundle its software and make it available separately. Initially it was chargeable to customers who wanted to run it on PCM, non-IBM systems, but over time, as new releases and new function appeared, even customers with IBM systems saw a charge appear, especially as Field Developed Programs moved to full Program Products and so on. In a bid to stop competing products, and user group offerings, being developed from their products, IBM increasingly supplied its products object-code-only (OCO). This became a formal policy in 1983.

I've kept the press cutting from Computerworld (March 1985) shown above since my days at Chemical Bank in New York. It pretty much sums up what was going on at the time: OCO, and users and user groups fighting back against IBM.

What this also did was give life to the formal software market. Companies were now used to paying for their software, and we've never looked back. In the time since those days, software with source code available has continued to flourish. With each new twist and evolution of technology, open source thrives and finds its own place, sometimes in a dominant position, sometimes subservient, in the background.

The times in the late 1950’s and 60’s were the dawn of open source. If users, programmers, researchers and scientists had not fought for their rights then, it is hard to know where the software industry would be now.

Footnotes

(1) The 1969 antitrust case was eventually abandoned in 1982.

(2) The PCM industry had itself come about as a result of a 1956 antitrust case and the consent decree that followed.

APIs and Mainframes


I like to try to read as many American Banker tech articles as I can. Since I don't work anymore, I chose not to take out a subscription, so some I can read, while others are behind the subscription paywall.

This one caught my eye, as it's exactly what we did circa 1998/99 at National Westminster Bank (NatWest) in the UK. The project was part of the rollout of a browser-based intranet banking application, as a proof of concept, to be followed by a full-blown Internet banking application. Previously, both Microsoft and Sun had tackled the project and failed. Microsoft had scalability and reliability problems, and, from memory, Sun just pushed too hard to move key components of the system to its servers, which in effect killed their attempt.

The key to any system design and architecture is being clear about what you are trying to achieve, and what the business needs to do. Yes, you need a forward looking API definition, one that can accept new business opportunities, and one that can grow with the business and the market. This is where old mainframe applications often failed.

Back in the 1960's, applications were written to meet specific, and stringent, tasks; performance was key. Subsecond response times were almost always the norm, as there would be hundreds or thousands of staff dependent on them for their jobs. The fact that many of those applications have survived to this day, most still on the same mainframe platform, is a tribute to their original design.

When looking at exploiting them from the web, if you let "imagineers" run away with what they "might" want, you'll fail. You have to start by exposing the transactions and database as a set of core services, based on the first application that will use them. Define your API structure to allow for growth and further exploitation. That's what we successfully did for NatWest. The project rolled out on the internal IP network and, a year later, to the public via the Internet.

Of course we didn't just expose the existing transactions, and yes, firewall, dispatching and other "normal" services that are part of an Internet offering were provided off-platform. However, the core database and transaction monitor were behind a mainframe-based webserver, which was "logically" firewalled from the production systems via an MPI that defined the API and also routed requests.
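
To make that concrete, here is a minimal sketch in Python of the general pattern, with invented transaction and operation names throughout (BALINQ, accounts.balance, etc.); it is not the NatWest implementation, just an illustration of exposing legacy transactions as a small, versioned set of core services that can grow without breaking the first consumer.

```python
# A minimal sketch, assuming invented names throughout; illustrates a
# versioned API layer that routes each operation to a legacy transaction.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class LegacyResult:
    ok: bool
    payload: dict

def call_legacy_transaction(txn: str, params: dict) -> LegacyResult:
    """Stand-in for the routed call to the mainframe transaction monitor."""
    return LegacyResult(ok=True, payload={"txn": txn, **params})

# v1 exposes only what the first consuming application actually needs.
API_V1: Dict[str, Callable[[dict], LegacyResult]] = {
    "accounts.balance":   lambda p: call_legacy_transaction("BALINQ", p),
    "accounts.statement": lambda p: call_legacy_transaction("STMT01", p),
}

VERSIONS = {"v1": API_V1}   # later versions are added alongside, not edited

def dispatch(version: str, operation: str, params: dict) -> LegacyResult:
    """Route an API request to the matching legacy transaction."""
    return VERSIONS[version][operation](params)

if __name__ == "__main__":
    print(dispatch("v1", "accounts.balance", {"account": "12345678"}))
```

The point of the versioned registry is that later consumers get new operations or a "v2" without disturbing the contract the first application depends on.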

So I read through the article to try to understand what Shamir Karkal, the source for Barba's article, felt the issue was. Starting at the section "Will the legacy systems issue affect the industry's ability to adopt an open API structure?", which began with a history lesson, I just didn't find it.

The article wanders between a discussion of the apparent lack of a "service bus" style implementation, and the ability of Amazon to sell AWS and rapidly change the API to meet the needs of its users.

The only real technology discussion in the article that I found had any merit was where they talked about screen scraping. I guess I can't argue with that, but surely we must be beyond that now? Do banks really still have applications that are bound to their greenscreen/3270 UI? That seems so 1996.

A much more interesting report is this one on more general Open Bank APIs, especially since it takes the UK as a model and reflects on how poor US banking is by comparison. I'll be posting a summary of my ongoing frustrations with the ACH over on my personal blog sometime in the next few days. The key technology point here is that there is no way to have a realtime bank API, open, mainframe or otherwise, if the ACH system won't process it. That's America's real problem.

IoT App hell of the future

On the day after it was revealed that some models of the Google Home Mini speaker were recording voices 24/7 due to a defect, Danny Palmer has a thoughtful piece on ZDNet about the toxic legacy of IoT devices.

Danny is spot-on about the social and technological impact of connected devices past their support date. I've complained in the past about constantly updating apps, both adding function that slows the original device and removing function that changes, and often destroys, the original value proposition of the device. But it's perhaps when the devices stop getting updates that we have the most to fear.

I have a Netgear NAS that is out of support; in fact, since I have an identical NAS that wakes up Tuesdays at 2am and backs up the primary NAS, I have two of them. While they are out of support, Netgear has been good at fixing urgent vulnerabilities. Of course, since I can't see the source, I don't know what vulnerabilities they have not fixed.

Kate and I went to see Blade Runner 2049 on opening day at the local AMC cinema. It's a bit of a thing of mine to sit through ALL, and I mean all, of the end credits. As we left the theater, there it was, right at the very bottom of the screen, unseen from the seats: the Windows XP Start button. I have no idea what projector they were using, but yes, many projectors did, and obviously still do, run Windows XP.

Do you own the device you just bought?


Joshua Fairfield, Professor of Law at Washington and Lee University, has a great blog post that echoes exactly the sentiments I heard Richard Stallman express about his original drive for open source, way back in the 1980's.

Fairfield argues that we don't own the devices we buy; we are merely buying a one-time license to the software within them. He makes a great case. It's worth the read.

One key reason we don’t control our devices is that the companies that make them seem to think – and definitely act like – they still own them, even after we’ve bought them. A person may purchase a nice-looking box full of electronics that can function as a smartphone, the corporate argument goes, but they buy a license only to use the software inside. The companies say they still own the software, and because they own it, they can control it. It’s as if a car dealer sold a car, but claimed ownership of the motor.

My favorite counter-example of this is the Logitech Squeezebox network music player system I use. It was originally created by Slim Devices, as far back as 2000, with their first music player launched in 2001. Slim Devices was acquired by Logitech in 2006, and Logitech then abandoned the product line in 2012.

I started using Logitech Squeezebox in 2008, first by buying a Squeezebox Boom, then a Radio, another Boom and a Touch, and I have subsequently bought a used Duet and, for my main living room, the audiophile-quality Transporter.

While there are virtually no new client/players, there is a thriving community built around Raspberry Pi hardware, with both client software builds and add-on audio hardware, as well as server builds for the Pi. I've hacked some temporary preferences into the code to solve minor problems, but by far the most impressive enhancements to the long-abandoned, official server codebase are the extensions to keep up with changes in streaming services like BBC iPlayer radio, Spotify, DSD play and streaming, and many more. For any normal, closed source platform, any one of these changes would likely have been impossible, and for many users that would have made the hardware redundant.

The best place to start in the Squeezebox world is over on the forums, hosted, of course, at http://forums.slimdevices.com/

When my 1-month-old Ring video doorbell failed, it was all I could do to get Ring to respond. I spent nearly 4 hours on the phone with tech support. Not only did I have no control, the doorbell had stopped talking to their service, but they couldn't really help. After the second session with support, I just said "look, I'm done, can you send a replacement?" The tech support agent agreed they would, but 10 days later I was still waiting for even a shipping notice, much less a replacement. While the doorbell worked as a doorbell, none of the services (motion detection, doorbell rings) were any good, as their services were unavailable to my doorbell.

You don't have to give up control when you buy a new device. You do own the skeleton of the hardware, but you'll have to make informed choices, and probably give up some control, if you want to own the soul of the machine: its software.


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and member of the IBM Academy of Technology. I am a Fellow of the British Computer Society (bcs.org). I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
