Workhorse Windows Computers

As each year goes by, I get less and less technical. I do still hack together fixes for software when needed, and write some procedural code from time to time. Most of the tools I use are still Windows based. While I know all the “cool kids” do Linux or containers, and I should too, especially with my background, it just hasn’t happened.

We have a number of discrete, single-task computer systems that we use around the house. I have a dedicated music streaming server in the mechanical room in the basement; a system that runs Rouvy, Zwift, and streaming video in the garage for my bike trainer and treadmill sessions; and in the living room, I use another system attached to a TV to do vinyl recording, mixing, and editing.

Every one of those systems is a Dell OptiPlex desktop computer. Why?

Because you can pick up these systems on ebay for $25–$150, upgrade the memory, replace the HDD with an SSD, and come out with a top-notch, headless system for less than $200. The processors range from 2-core Pentiums to 4-core Intel i7s, and if you get really creative, you can switch almost all the parts from one system to another. The form factors run from Ultra Small Form Factor (USFF), to Small Form Factor (SFF), to mini-tower, to full-size desktop.

The reason there are so many of these for sale on ebay is exactly the same reason I want them: they are low cost, they are easily maintained, and Dell has an outstanding support website which, when searched with a “Service Tag”, returns the original configuration, including all the software.

That’s important: it tells you whether the original system shipped with a Windows “Certificate of Authenticity”. If it did, with a small hop-skip-and-jump you can install Windows 7 and upgrade to Windows 10. Some of the OptiPlex 90xx series even have sufficient hardware to run Windows 11.

Sadly, this week the system I use for my music editing died. It wouldn’t even power on. It had sat unloved, on a shelf, behind a closed door, in the media center under the TV for 4 years. I went through some basic diagnostics and it wasn’t obvious where the problem was, so I removed the SSD from the i5 OptiPlex SFF 3020 and put it in an i3 OptiPlex USFF 970. It booted and everything was great.

I scoured ebay and secured a Dell OptiPlex 7010 SFF with an i5-3570 3.40GHz, 8GB RAM, and a 2TB HDD for just $66. I added another 8GB of memory and the existing SSD; no software installs were needed. When switched on, Windows booted, switched drivers, rebooted, and I was back online.

Possibly the best thing about using these, though, is the Dell Support portal, which you can use to track inventory and add location and other information. You can organize by folders; I even keep one for defective equipment and another for out-of-use stuff. https://www.dell.com/support/mps/

Buying from ebay

If you are considering buying an OptiPlex from ebay, here are a few tips.

  1. “Bare Bones” – Beware buying systems listed as bare bones. These are likely to have been completely stripped, with no processor, HDD, or memory/RAM. They can be useful for parts like power supplies or replacement USB modules. If you don’t have experience putting systems together, it’s probably best to avoid them; by the time you buy parts, it’s likely more expensive than buying a pre-built system.
  2. Processors – As discussed, the OptiPlex come with a range of processors. Here is a simple guide: Pentium (slow); i3 (adequate); i5 (good); i7 (much better). An i3 will run a browser and local apps with 8GB of memory, perfectly OK for day-to-day Google Apps and web browsing. An i7 with 16GB will run pretty much anything; I record vinyl albums while watching streaming movies, checking email, etc.
  3. Memory – These systems can pretty much only take 16GB of RAM/memory. 8GB is fine; 4GB will work with Windows 10 but is slow, and on an i3 or Pentium processor it’s probably too slow. You can add memory easily, but most systems only have two slots until you get up to the 9xx models. You can mix and match sizes of memory, but not speeds. Don’t overpay for an identical system that has maybe 4/8GB of additional memory; you can pick up the RAM for as little as $9 per 4GB on ebay.
  4. HDD/SSD – Check the model on the Dell support system before buying, even if just by model number; better still, by Service Tag. You can check the documents or original configuration and see what size HDD they were shipped with. Probably the best performance upgrade you can make is to fit an SSD, even a 128GB one. I buy Kingston SSDs. Many of the SFF systems will need a plastic drive caddy to mount the SSD in. If you need more storage, an external HDD or SSD might be the simplest option; all the models have ample USB ports, and some even have USB 3 ports.
  5. Graphics – In the same way the OptiPlex models have evolved with processors/speed, the graphics have developed too. The early Pentium models have basic onboard graphics, typically only supporting 1920×1080 resolution. Even if you attach these to a large-screen 4K LCD TV, that won’t deliver anything other than a good Windows text-based display. Later models, especially SFF, mini-tower and desktop models with i5 and i7 processors, will have Intel 4000 series graphics chips. The newer models have Intel 4600 series graphics, which can do 4K display. The desktop and mini-tower have enough space to install a full graphics expansion card.
  6. Networking – All the OptiPlex models have Ethernet ports. While they all generally support a wifi card, almost none of them will be supplied with one. If you need wifi, it’s probably simplest and cheapest to add a wifi USB stick. The same is true for Bluetooth; almost none have it, which is frustrating if you want to use your phone or headphones with them. Again, you can find a 2-pack of Bluetooth 4 dongles for as little as $12 new.
  7. Power – The newer USFF models, those with the all-black faceplate, like the Dell OptiPlex 9020m i5, don’t include an internal power supply. They need an external brick and cable like a laptop. Don’t buy one without a power supply.
  8. Local Pickup vs Shipping – Watch the shipping price. This is a good rule for buying anything on ebay, but especially larger/heavier items. I’ve paid $50, $12.95, and nothing at all for shipping on OptiPlex systems; factor that into the total price. If you have time, like me, consider local pickup. I found one seller just 20 miles away that offers free pickup. Not only do you save on shipping, but you may be able to get it the same day, or the next.

Whatever you do, do check the Dell Owner’s Manual for the system you intend to buy. It will be on the Dell support website in PDF format. You can see how simple these are to work on, what components they can take, etc.

Finally, remember: even if you can buy a cheap new system for a similar price, it’s unlikely to be as flexible and repairable. Also, it may not have an embedded Windows license. Don’t forget, e-waste is a massive problem; by buying used from ebay you are keeping it away from a distant country where it might be disassembled by a child using a soldering iron. At best, you’ll be keeping it out of landfill.

If you want to buy, here is a starter link to ebay. Once there, add the key tech you want, like +i5 or +16GB, to the search. Feel free to leave questions.

Browser Passwords

If you get frustrated by having to search your browser’s password settings database, or better still, if your typing skills are about as accurate as my “hunt and peck” or “hammer and nail” techniques, you might find this JavaScript useful.

To use it, create a browser favorite, then just paste the script into the URL field; at least that’s all that’s required in Chrome, Edge, and other Chromium variants.

If a password field is already filled in, clicking the favorite will change it to clear text. If you want to check as you type, click the favorite first, then type, and it should display each character.

javascript:(function(){var IN = document.getElementsByTagName("input");for(var i=0; i<IN.length; ++i){F = IN[i];if (F.type.toLowerCase() == "password"){if(document.all){var n = document.createElement("input");for(var k in F.attributes) if(k.toLowerCase() != 'type'){try{n[k] = F[k]}catch(err){}};F.parentNode.replaceChild(n,F);}else{F.type="text"}}}})()
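For modern browsers, the `document.all` branch (an old Internet Explorer workaround) is no longer needed; the core logic just flips each password input’s `type` to `text`. Here is a minimal sketch of that logic, written as a function that takes a document-like object so it can be exercised outside a browser (the name `revealPasswords` is mine, not part of the original bookmarklet):

```javascript
// Core of the bookmarklet: flip every <input type="password"> to type="text"
// so the value displays as clear text. Returns how many fields were changed.
function revealPasswords(doc) {
  let changed = 0;
  for (const field of doc.getElementsByTagName("input")) {
    if (field.type.toLowerCase() === "password") {
      field.type = "text"; // modern browsers allow changing the type directly
      changed++;
    }
  }
  return changed;
}

// The same logic as a one-line bookmarklet:
// javascript:(function(){for(const f of document.getElementsByTagName("input"))if(f.type.toLowerCase()==="password")f.type="text"})()
```

The shorter bookmarklet should behave the same as the original in anything Chromium-based; the original’s extra branch only mattered for old IE.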

Mysterious Disappearing MAC Address

One of my systems applied a Windows 10 update on Friday. It runs attached to my TV, and so while not strictly headless (it does have an attached display), it often runs for days without the UI visible. So there it was: has anyone ever clicked “Let’s Go”?

The system wasn’t connected to the Internet? Puzzling, since it has a 1Gb wired connection into a switch that goes straight to the 1Gb fibre-optic cable modem, and everything else was working.

Choose Adapter settings > Disable > Enable > Wait > Identifying Network... > No Network Connection.

Next up was a CMD prompt and IPCONFIG /ALL

Strangely, it reported the IPv4 address as 169.x.x.x – no DNS, etc. Then I spotted it. Physical Address: 00-00-00-00-00-00
Huh?
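As an aside, a 169.254.x.x address is the giveaway: it’s an APIPA/link-local address that Windows assigns itself when it can’t reach a DHCP server. A quick sketch of the check (the helper name is mine, and it’s a prefix test, not a full address parser):

```javascript
// True if the address is in the 169.254.0.0/16 link-local block, i.e.
// Windows self-assigned it because DHCP never answered.
function isApipa(ip) {
  const octets = ip.split(".").map(Number);
  return octets.length === 4 && octets[0] === 169 && octets[1] === 254;
}

console.log(isApipa("169.254.17.42")); // self-assigned, DHCP failed
console.log(isApipa("192.168.1.23"));  // a normal private LAN address
```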

I tried all the usual things:

Disable Adapter > Delete Driver > Shutdown/Reboot

and variations of that. Then I went ahead and started searching the web, which was as helpful as it always is. The only thing I learned is that I was far from alone, especially among Realtek PCIe GBE Family Controller users. I downloaded their device diagnostics and everything ran clean; and right there in the diagnostics window was the supposedly zeroed-out MAC/physical address.

VPN Software?

I checked with the support team for NORDVPN, which runs on that system; they assured me they do NOT change the MAC address, or use any form of MAC spoofing in their software.

No Connectivity

The reason for the Internet connectivity issue is that the cable modem I use will not hand out DNS data or assign an IP address to a device that is not on the list of devices I maintain.

Among the various reports of issues relating to this, I found this one. So there is every possibility that it was a #Windows #WIN10 update that screwed up the MAC address, which is stored in the registry; who knows? Also, every one of the posts I found recommended an app to store and update a new MAC address. I’m not a big fan of using REGEDIT, or of downloading and installing random apps to update the registry.

Setting a MAC Address in Windows 10

It turns out you don’t need to. If you go into the properties for the adapter and scroll through them, you’ll come to “Network Address”.

The value field should contain the same MAC address that is on the label that came with the PC. It should also match the MAC address you can find in the BIOS, if you want to go rooting around in there. If you have the MAC address, for example from the PC hardware case, you can simply add it back in at (3) above and select (4) OK. Just make sure you get the correct MAC address: don’t duplicate one already on your network, and don’t use the MAC address from your wifi adapter for your Ethernet adapter.

A picture I found online. Don’t do this, especially with a label that includes your Dell Service Tag… trust me on that.

You can also look up your provider’s MAC address prefixes, here, and make a new one. Again, the MAC address does need to be unique. In my case I had the original, but while working through this issue I decided to use a MAC address starting with FCCF62, which is from a block assigned to “IBM Corp”, since I don’t have any IBM devices on my network and am unlikely to ever have any.
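If you do generate a replacement, the constraints are the ones above: keep it unique on your LAN, and pick a sensible prefix. A small illustrative sketch (the helper name is mine; the FCCF62 prefix is just the one I used). The adapter’s “Network Address” field typically wants the 12 hex digits with no separators:

```javascript
// Illustrative only: build a 12-hex-digit value for the adapter's
// "Network Address" field - a fixed 6-digit OUI prefix plus three random
// octets. Keeping it unique on your own LAN is still your job.
function makeMac(oui = "FCCF62") {
  let tail = "";
  for (let i = 0; i < 3; i++) {
    tail += Math.floor(Math.random() * 256)
      .toString(16)
      .padStart(2, "0")
      .toUpperCase();
  }
  return oui + tail; // e.g. "FCCF62" followed by 6 random hex digits
}
```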

My system has been fine since fixing this. The change survived through a couple of reboots, and I re-installed NORDVPN and it’s also working fine.

Why Post?

First, obviously, to document what I’d done; second, to share what had happened and how I resolved it; third, in the hope someone would post a logical discussion of how this happened, and also how I could have resolved it more simply and quickly.

I remain amazed that the Realtek diagnostics (a) loaded their own MAC address from the Windows registry, and (b) didn’t at least recognize that the MAC address wasn’t from a block they own.

IBM 3090 Training

Between 2001 and 2004, I had an office in the home of the mainframes, IBM Poughkeepsie, in Building 705. As a Brit, it wasn’t my natural home; and since I wasn’t a developer or a designer, but a software architect focusing on software and application architectures, it never felt like home.

IBM Library number ZZ25-6897.

One day, on my way to lunch at the in-house cafeteria, I walked by a room whose door was always closed. This time the door was open, and there was a buzz of people coming from it. A sign outside said “Library closing, take anything you can use!”

I have some great books, a few of which I plan to scan and donate the output to either the Computer History Museum, or to the Internet Archive.

One of the more fun things I grabbed was a handful of IBM training laserdiscs. I had no idea what I’d do with them; I had never owned a laserdisc player. I just thought they’d look good sitting on my bookshelf, especially since they are the same physical size as vinyl albums.

Now, 16 years on, I’ve spent the last 4 years digitising my entire vinyl collection, in total some 2,700 albums. One of my main focus areas has been the music of jazz producer Creed Taylor. One of the side effects is that I’ve created a new website, ctproduced.com. In record-collecting circles, I’m apparently a completionist: I try to buy everything.

And so it was that I started acquiring laserdiscs produced by Creed Taylor. It took a while, and I’m still missing Blues At Bradleys by Charles Fambrough. While I’ve not got around to writing about them in any detail, you can find them at the bottom of the entry here.

What I had left were the IBM laserdiscs. On Monday I popped the first one in; it was for the IBM 3090 Processor Complex. It was a fascinating throwback for me. I’d worked with IBM Kingston on a number of firmware and software availability issues, both as a customer and later as an IBM Senior Software Engineer.

I hope you find the video fascinating. The IBM 3090 Processor was, to the best of my knowledge, the last of the real “mainframes”. Sure, we still have IBM processor architecture machines that are compatible with the 3090 and earlier architectures. However, the new systems, more powerful and more efficient, are typically single-frame systems. And while a parallel sysplex can support multiple mainframes, it doesn’t require them. Enjoy!

Farewell Windows?

Not quite, and not for a long, long time. In my house we run 4x laptops with Windows 10; we have a small office computer running Windows 10; and then there is the music server in the basement and the media laptop buried in the TV cabinet, which also run Windows 10. So it will be a long time before we stop using it.

However, in an excellent summary of what’s been going on at Microsoft, Matthias Biehl also makes a number of organizational truisms. It’s well worth a read. Also, do yourself a favor and try the Microsoft To-Do app; I use it on Windows and Android, and it’s excellent.

culture flows from success

#HEARTBLEED was 5-years ago.

I was reading through my old handwritten tech notebooks this morning, searching for some details on a Windows problem I know I’ve had before. I noticed an entry for March 28th, 2014 on the latest bug tracker list from Red Hat. One of the items on the list from the week before was the #Heartbleed bug in OpenSSL.


Image from synopsys.com

Within a couple of weeks, Jim Zemlin from the Linux Foundation contacted John Hull in the open source team at Dell, who passed the call to me. I told Jim we’d be happy to sign up; I got verbal approval for the spending commitment and the job was done.

The Core Infrastructure Initiative (CII) was announced on April 24th, 2014. One of the first priorities was how to build a more solid base for funding and enabling open source developers. With remarkable speed, the first projects to receive funding were announced on April 26th, 2014.

Five years later, I’m delighted to see Dell is still a member, along with the major tech vendors, especially and unsurprisingly Google. Google employees have made substantial contributions both to CII and to open source projects in general. I remember with great appreciation many of the contributions made by the then steering committee members, especially, but not limited to, Ben Laurie and Bruce Schneier.

This blog post on synopsys.com has a good summary, entitled “Heartbleed: OpenSSL vulnerability lives on” (May 2, 2017).

My blog entries on Heartbleed and CII are here, here, and here.

There is still much to be concerned about. There are still many unpatched Apache HTTPD servers, especially versions 2.2.22 and 2.2.15 accessible on the Internet.

Remember, just because you don’t see software, it doesn’t mean it isn’t there.

Serverless computing

I’ve been watching and reading on developments around serverless computing. I’ve never used it myself so only have limited understanding. However, given my extensive knowledge of servers, firmware, OS, Middleware and business applications, I’ve had a bunch of questions.


Many of my questions are echoed in this excellent write-up by Jeremy Daly on the recent Serverless NYC event.

For traditional enterprise-type customers, it’s well worth reviewing the notes on the issues highlighted by Jason Katzer, Director of Software Engineering at Capital One. Some attendees talk about “upwards of a BILLION transactions per month” using serverless. That’s impressive, but it’s still short of many enterprise requirements; it translates to roughly 34.5 million transactions per day.
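The arithmetic behind that comparison is worth seeing: averaged out over a month, a billion transactions is only a few hundred per second, which plenty of enterprise transaction systems exceed at peak. A quick sketch, assuming a 30-day month:

```javascript
const perMonth = 1_000_000_000;     // "upwards of a BILLION transactions per month"
const perDay = perMonth / 30;       // assuming a 30-day month
const perSecond = perDay / 86_400;  // 24 * 60 * 60 seconds in a day

console.log(Math.round(perDay));    // on the order of 33 million per day
console.log(Math.round(perSecond)); // under 400 per second, on average
```

Averages hide peaks, of course; a real enterprise workload concentrates much of its volume into business hours.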

Katzer notes that there are always bottlenecks and often services that don’t scale the same way that your serverless apps do. Worth a read, thanks for posting Jeremy.

The Big Hack: How China Used a Tiny Chip to Infiltrate U.S. Companies – Bloomberg

This is a stunning discovery. I don’t have any insight into it except what’s been published here. However, it’s always been a concern. I remember at least one project that acquired a sample of hard disk controllers (HDC) from vendors with a view to rewriting a driver for OS cache optimization and synchronization.

I’d never actually seen inside a hard drive to that point, except in marketing promotional materials. We were using the HDCs with different drives and I was surprised how complex they were. We speculated how easy it would have been to ship a larger-capacity drive and insert a chip that would use the extra capacity to write shadow copies of files, unseen by the OS. We laughed it off as too complex and too expensive to actually do. Apparently not.

Source: The Big Hack: How China Used a Tiny Chip to Infiltrate U.S. Companies – Bloomberg

Open Source redux

While I don’t update here much anymore, that’s mostly because I’ve not been active in the general technology scene for the last 2.5 years, following my departure from Dell and the resultant non-compete. I’m taking a few easy steps back now: I’ve reactivated my British Computer Society (BCS) Fellow membership and am hoping to participate in their Open Source Specialist Group meeting and AGM on October 25th.

MS-DOS Open Source

Interestingly, Microsoft have announced they are re-open-sourcing the code for the MS-DOS 1.25 and 2.0 releases. Although never available outside of Microsoft or IBM in its entirety, there were certainly sections of the code floating around in the mid-1980s. I was given the code for some drivers in 1984 by an IBM Systems Engineer, which I proceeded to hack and use as a starter for the 3270 driver I used for file transfer.

I’ve got a copy of the code released by Microsoft, and over the next 6 months I am going to set about compiling it and getting it to work on a PC, as a way to re-introduce myself to working in PC assembler and the current state of compilers.

The Zowe Open Source Project

This was announced today at SHARE St Louis. A great new effort and opportunity to integrate open source technologies and applications into the IBM z/OS operating system. Zowe, as the article says, is

a framework of software services that offers industry standard REST APIs, API catalog, extensible command line interface and web-based UI framework

They’ve also put together the zowe.org community for architects, developers and designers to share best practices. It’s not clear what the legal relationship is between the Open Mainframe Project and Zowe, but Zowe is listed as a project, so that’s great news in terms of strategy and direction. As of writing, the Open Mainframe Project’s Zowe web page has the best detail on the project.

Zowe appears to be a collaboration between IBM and a number of companies, including Rocket Software. Rocket has a broad portfolio of software and systems that integrate with IBM systems; they also have my friend, former colleague and sparring partner at IBM, Jim Porell, on staff.

Digital Copiers, Faxes and MFP’s and their hard drives

I’m a subscriber to long-time UK tech journalist and blogger Charles Arthur’s (@charlesarthur) Overspill blog, where he curates links etc. Recently, he linked to an old report from 2010, but it’s always worth reminding people of the dangers of photocopiers, fax machines and multi-function printers, especially older ones.

Copiers that are lightly used often have a lifecycle of 10–15 years. If you buy rather than lease, it’s quite possible you still have one that doesn’t include encryption of the internal hard drive. Even with an encrypted drive, there is still potential to hack the device software and retrieve the key, although that’s pretty difficult.

The surprising thing is that many modern multi-function printers (MFPs) also have local storage. While in modern models it is not an actual hard drive, it is likely to be some form of onboard flash memory, akin to cell phone memory, either part of the system board or on an embedded SD card. It’s worth remembering that these machines are fax, copier, printer, and scanner all in one.

The US Federal Trade Commission has a web page that covers all the basics, in plain language.

Whatever the device, it is still incumbent on the owner to ensure it is wiped before returning it, selling it, or scrapping it. PASS IT ON!

For those interested in how you can get data from a copier/MFP type device, Marshall University Forensic Science team has a paper, here.

Open Distributed Challenges – Words Matter

I had an interesting exchange with Dez Blanchfield from Australia on Twitter recently. At the time, based on his tweets, I assumed Dez was an IBM employee. He isn’t, and although our paths crossed briefly at the company in 2007, as far as I’m aware we never met.

The subject was open vs open source. Any longtime readers will know that’s part of what drove me to join IBM in 1986, to push back on the closing of doors, and help knock down walls in IBM openness.

At the end of our Twitter exchange (the first 3 tweets are included above), I promised to track down one of my earlier papers. As far as I recall, and without going through piles of hard copy in storage, this one was formally published by IBM US under a similar name, with pretty much identical content, probably in the Spring of ’96.

It is still important to differentiate between de jure and de facto standards. Open source creates new de facto standards every day, through wide adoption and implementation of that open source. While systems move much more quickly these days, at Internet speed, there is still a robust need for de jure standards: those that are legally, internationally and commonly recognised, whether or not they were first implemented through open source. Most technology standards these days are first implemented through open source, as that’s the best way to get them through standards organizations.

The PDF presented here is original, unedited, just converted to PDF from Lotus Word Pro.

Lotus Word Pro, and its predecessor Ami Pro, are great examples of de facto standards, especially inside IBM. Following the rise of Microsoft Word and MS Office, Lotus products on the desktop effectively disappeared. Since the Lotus source code was never available, even inside IBM, not only were the products only a de facto standard, they were never open source. In the post-Lotus desktop software period, considerable effort has been put into reverse engineering the file formats; and while almost all of the free and chargeable convertors can recover the text, most do a poor job of formatting.

For that reason, I bought a used IBM Thinkpad T42 with Windows XP and Lotus Smartsuite, and I still have a licensed copy of Adobe Acrobat to create PDFs. Words matter: open source, open, and open standards are all great. As always, understand the limitations of each.

There are a load of my newer white papers in the ‘wayback’ machine; if you have any problems finding them, let me know and I’ll jump-start the Thinkpad T42.

Annual IBM Shareholder Meeting


Picture: (C) Nick Litten

Remembering the dawn of the open source movement

and this isn’t it.


Me re-booting an IBM System 360/40 in 1975

When I first started in IT in 1974, or data processing as it was called back then, open source was the only thing. People were already depending on it, and defending their right to access source code.

I’m delighted with the number and breadth of formal organizations that have grown up around “open source”. They are a great thing: strength comes in numbers, as does recognition and bargaining power. Congratulations to the Open Source Initiative and everything they’ve achieved in their 20 years.

I understand the difference between closed source, (restrictive) licensed source code, free source, open source etc. The point here isn’t to argue one over the other, but to merely illustrate the lineage that has led to where we are today.

Perhaps one of the more significant steps in the modern open source movement was the creation in 2000 of the Open Source Development Labs (OSDL), which in 2007 merged with the Free Standards Group (FSG) to become the Linux Foundation. But of course source code didn’t start there.

Some people feel that the source code fissure was opened when Linus Torvalds released his Linux operating system in 1991 as open source; Linus and many others think the work by Richard Stallman on the GNU toolset and GNU License, started in 1983, was the first step. Stallman’s determined advocacy for source code rights and source access certainly was a big contributor to where open source is today.

But it started way before Stallman. Open source can not only trace its roots to two of the industry’s behemoths, IBM and AT&T; the original advocacy came from them too. Back in the early 1960s, open source was the only thing. There wasn’t a software industry per se until the US Government invoked its antitrust laws against IBM and AT&T, eventually forcing them, among other things, to unbundle their software and make it separately available, along with many other related conditions.

’69 is the beginning, not the end

The U.S. vs. I.B.M. antitrust case started in 1969, with the trial commencing in 1975(1). The case was specifically about IBM blocking competitive hardware makers from getting access, and customers from being able to run competitive systems, primarily S/360 architecture, using IBM software.

In the years leading up to 1969, customers had become increasingly frustrated and angry at IBM’s policy of tying its software to its hardware. Since all the software at that time was available in source code form, what that really meant was that a business HAD to have one IBM computer to get the source code; it could then purchase a computer from an IBM plug-compatible manufacturer (PCM)(2) and compile the source code with the manufacturer’s assembler and tools, then run the binaries on the PCM systems.

IBM made this increasingly harder as the PCM systems became more competitive. Often, large, previously IBM-only users who would have 2, 4, sometimes even 6 IBM S/360 systems, costing tens of millions of dollars, would buy a single PCM computer. The IBM on-site systems engineers (SEs) could see the struggles of the customer, and along with the customers themselves, started to push back against the policy. The SE job was made harder the more their hands were tied, and the more restrictions were put on the source code.

To SHARE or not to?

For the customers in the US, one of their major user groups, SHARE, had vast experience in source code distribution; its user-created content and tools tapes were legend. What most never knew is that back in 1959, with General Motors, SHARE had its own IBM mainframe (709) operating system, the SHARE Operating System (SOS).

At that time there were formal support offerings of on-site SEs who would work on problems and defects in SOS. But by 1962, IBM had introduced its own 7090 operating system, which was incompatible with SOS, and at that point IBM withdrew support by its SEs and Program Support Representatives (PSRs) for work on SOS.

1965 is, to the best of my knowledge, when the open source code movement, as we know it today, started

Stallman’s experience with a printer driver mirrors exactly what had happened some 20 years before: the removal of source code, and the inability to build working modifications to support a business initiative, using hardware and software ostensibly already owned by the customer.

IBM made it increasingly harder to get the source code, right up until the antitrust case. By that time, many of IBM’s customers had created, and depended on, small and large modifications to IBM source code.

Antitrust outcomes

By the mid-70s, as one of the results of years of litigation and consent decrees in the United States, IBM had been required to unbundle its software and make it available separately. Initially it was chargeable only to customers who wanted to run it on PCM, non-IBM systems, but over time, as new releases and new function appeared, even customers with IBM systems saw a charge appear, especially as Field Developed Programs moved to full Program Products and so on. In a bid to stop competing products and user group offerings being developed from their products, IBM products were increasingly supplied object-code-only (OCO). This became a formal policy in 1983.

I’ve kept the press cutting from ComputerWorld (March 1985) shown above since my days at Chemical Bank in New York. It pretty much sums up what was going on at the time: OCO, and users and user groups fighting back against IBM.

What this also did was give life to the formal software market. Companies were now used to paying for their software, and we’ve never looked back. In the time since those days, software with source code available has continued to flourish. With each new twist and evolution of technology, open source thrives and finds its own place, sometimes in a dominant position, sometimes subservient, in the background.

The late 1950s and ’60s were the dawn of open source. If users, programmers, researchers and scientists had not fought for their rights then, it is hard to know where the software industry would be now.

Footnotes

(1) The 1969 antitrust case was eventually abandoned in 1982.

(2) The PCM industry had itself come about as a result of a 1956 antitrust case and the consent decree that followed.

API’s and Mainframes


I like to try to read as many American Banker tech articles as I can. Since I don’t work anymore, I chose not to take out a subscription, so some I can read, while others are behind the paywall.

This one caught my eye, as it’s exactly what we did circa 1998/99 at National Westminster Bank (NatWest) in the UK. The project was part of the rollout of a browser-based Intranet banking application as a proof of concept, to be followed by a full-blown Internet banking application. Previously, both Microsoft and Sun had tackled the project and failed: Microsoft had scalability and reliability problems, and from memory, Sun just pushed too hard to move key components of the system to its servers, which in effect killed their attempt.

The key to any system design and architecture is being clear about what you are trying to achieve and what the business needs to do. Yes, you need a forward-looking API definition, one that can accept new business opportunities and grow with the business and the market. This is where old mainframe applications often failed.

Back in the 1960s, applications were written to meet specific and stringent tasks; performance was key. Subsecond response times were almost always the norm, as there would be hundreds or thousands of staff dependent on them for their jobs. The fact that many of those applications have survived to this day, most still on the same mainframe platform, is a tribute to their original design.

When looking at exploiting them from the web, if you let “imagineers” run away with what they “might” want, you’ll fail. You have to start by exposing the transactions and database as a set of core services based on the first application that will use them. Define your API structure to allow for growth and further exploitation. That’s what we successfully did for NatWest. The project rolled out on the internal IP network, and a year later to the public via the Internet.

Of course we didn’t just expose the existing transactions, and yes, firewall, dispatching and other “normal” services for an Internet offering were provided off-platform. However, the core database and transaction monitor were behind a mainframe-based webserver, which was “logically” firewalled from the production systems via an MPI that defined the API and also routed requests.
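As a sketch of that routing pattern (all names here are mine for illustration, not the actual NatWest design): the core side exposes a fixed set of named transactions, and a thin layer maps the versioned public API onto them, refusing anything not explicitly defined. That refusal is the “logical firewall”, and versioning is what lets the API grow without touching the mainframe side:

```javascript
// Stand-in for a core mainframe transaction: fixed name, fixed parameters.
const CORE = {
  BALINQ: (params) => ({ account: params.account, balance: 12550 }),
};

// Versioned public operation -> core transaction + parameter mapping.
const ROUTES = {
  "v1/get_balance": { txn: "BALINQ", map: (req) => ({ account: req.acct }) },
  // v2 renamed the request field but reuses the same core transaction.
  "v2/balance": { txn: "BALINQ", map: (req) => ({ account: req.account_id }) },
};

// Route a public API call to its core transaction, rejecting anything
// not explicitly listed - the "logical firewall" part of the design.
function dispatch(version, operation, request) {
  const route = ROUTES[`${version}/${operation}`];
  if (!route) throw new Error(`no such API operation: ${version}/${operation}`);
  return CORE[route.txn](route.map(request));
}
```

The point of the indirection is that only listed (version, operation) pairs ever reach the core; everything else is refused at the boundary, and new versions are just new routes.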

So I read through the article to try to understand what Shamir Karkal, the source for Barbas’ article, felt the issue was. Starting at the section “Will the legacy systems issue affect the industry’s ability to adopt an open API structure?”, which began with a history lesson, I just didn’t find it.

The article wanders between a discussion of the apparent lack of a “service bus” style implementation, and the ability of Amazon to sell AWS and rapidly change the API to meet the needs of its users.

The only real technology discussion in the article that I found had any merit was where they talked about screen scraping. I guess I can’t argue with that, but surely we must be beyond that now? Do banks really still have applications that are bound by their greenscreen/3270 UI? That seems so 1996.

A much more interesting report is this one, on more general open bank APIs, especially since it takes the UK as a model and reflects on how poor US banking is by comparison. I’ll be posting a summary of my ongoing frustrations with the ACH over on my personal blog in the next few days. The key technology point here is that there is no way to have a realtime bank API, open, mainframe or otherwise, if the ACH system won’t process it. That’s America’s real problem.


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and member of the IBM Academy of Technology. I am a Fellow of the British Computer Society (bcs.org). I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
