Archive for the 'evangelism' Category

Remembering the dawn of the open source movement

and this isn’t it.


Me re-booting an IBM System 360/40 in 1975

When I first started in IT in 1974, or data processing as it was called back then, open source was the only thing. People were already depending on it, and defending their right to access source code.

I’m delighted with the number and breadth of formal organizations that have grown up around “open source”. They are a great thing. Strength comes in numbers, as does recognition and bargaining power. Congratulations to the Open Source Initiative and everything it has achieved in its 20 years.

I understand the difference between closed source, (restrictively) licensed source code, free source, open source, etc. The point here isn’t to argue one over the other, but merely to illustrate the lineage that has led to where we are today.

Perhaps one of the more significant steps in the modern open source movement was the creation in 2000 of the Open Source Development Labs (OSDL), which in 2007 merged with the Free Standards Group (FSG) to become the Linux Foundation. But of course source code didn’t start there.

Some people feel that the source code fissure was opened when Linus Torvalds released his Linux operating system in 1991 as open source, while Linus and many others think the work by Richard Stallman on the GNU toolset and GNU License, started in 1983, was the first step. Stallman’s determined advocacy for source code rights and source access was certainly a big contributor to where open source is today.

But it started way before Stallman. Open source can not only trace its roots to two of the industry’s behemoths, IBM and AT&T, but the original advocacy came from them too. Back in the early 1960s, open source was the only thing. There wasn’t a software industry per se until the US Government invoked its antitrust laws against IBM and AT&T, eventually forcing them, among other things, to unbundle their software and make it separately available.

’69 is the beginning, not the end

The U.S. vs. I.B.M. antitrust case started in 1969, with the trial commencing in 1975(1). The case was specifically about IBM blocking competitive hardware makers from getting access, and blocking customers from running competitive systems, primarily of S/360 architecture, with IBM software.

In the years leading up to 1969, customers had become increasingly frustrated and angry at IBM’s policy of tying its software to its hardware. Since all the software at that time was available as source code, what that really meant was that a business HAD to have at least one IBM computer to get the source code; it could then purchase a plug-compatible manufacturer’s (PCM) computer(2), compile the source code with the manufacturer’s assembler and tools, and run the binaries on the PCM systems.

IBM made this increasingly harder as the PCM systems became more competitive. Often large, previously IBM-only customers, who would have 2, 4, sometimes even 6 IBM S/360 systems costing tens of millions of dollars, would buy a single PCM computer. The IBM on-site systems engineers (SEs) could see the struggles of the customer and, along with the customers themselves, started to push back against the policy. The SEs’ job was made harder the more their hands were tied, and the more restrictions were put on the source code.

To SHARE or not to?

For customers in the US, one of their major user groups, SHARE, had vast experience in source code distribution; its user-created content and tools tapes were legend. What most never knew is that back in 1959, with General Motors, SHARE had its own IBM mainframe (709) operating system, the SHARE Operating System (SOS).

At that time there were formal support offerings, with on-site SEs who would work on problems and defects in SOS. But by 1962, IBM had introduced its own 7090 operating system, which was incompatible with SOS, and at that time IBM also withdrew support by its SEs and Program Support Representatives (PSRs) for working on SOS.

1965 is, to the best of my knowledge, when the open source code movement, as we know it today, started

To my knowledge, that’s where the open source code movement as we know it today started. Stallman’s experience with a printer driver mirrors exactly what had happened some 20 years before: the removal of source code, and the inability to build working modifications to support a business initiative, using hardware and software ostensibly already owned by the customer.

IBM made it increasingly harder to get the source code, until the antitrust case. By that time, many of IBM’s customers had created, and depended on, small and large modifications to IBM source code.

Antitrust outcomes

Computerworld: IBM OCO

By the mid-70s, as one of the results of years of litigation and consent decrees in the United States, IBM had been required to unbundle its software and make it available separately. Initially it was chargeable to customers who wanted to run it on PCM, non-IBM systems, but over time, as new releases and new function appeared, even customers with IBM systems saw a charge appear, especially as Field Developed Programs moved to full Program Products and so on. In a bid to stop competing products, and user group offerings, being developed from its products, IBM products were increasingly supplied object-code-only (OCO). This became a formal policy in 1983.

I’ve kept the press cutting from Computerworld (March 1985) shown above since my days at Chemical Bank in New York. It pretty much sums up what was going on at the time: OCO, and users and user groups fighting back against IBM.

What this also did was give life to the formal software market; companies were now used to paying for their software, and we’ve never looked back. In the time since those days, software with source code available has continued to flourish. With each new twist and evolution of technology, open source thrives and finds its own place, sometimes in a dominant position, sometimes subservient, in the background.

The late 1950s and ’60s were the dawn of open source. If users, programmers, researchers and scientists had not fought for their rights then, it is hard to know where the software industry would be now.

Footnotes

(1) The 1969 antitrust case was eventually abandoned in 1982.

(2) The PCM industry had itself come about as a result of a 1956 antitrust case and the consent decree that followed.

Open letter: CD Recycling

Dear IT Industry Colleague,

I’ve just moved house. In the process I realised that I had hundreds of old data CDs: some of them with old backups, many of them used to make copies of other CDs, some DVDs with dumps of system folders, and so on and so forth.

I figured I’d just dump them in the recycling, which gets collected bi-weekly. On checking though, not only are these not recyclable, but they are actually pretty hard to destroy completely. They also contain a large amount of toxic chemicals, and unless they are sent to a specialty recycling center, most end up in incinerators or landfill, neither of which is a good thing.

There is a good article here, from 2013, on the general problems with the creation and disposal of CDs and DVDs. It says, among other things:

The discs are made of layers of different mixed materials, including a combination of various mined metals and petroleum–derived plastics, lacquers and dyes, which, when disposed of, can pollute groundwater and bring on a myriad of health problems. Most jewel cases are made of polyvinyl chloride (PVC), which has been thought to produce a higher-than-normal cancer rate within workers and those who live in the area where it is manufactured. They also release harmful chemicals when incinerated.

Having realized the problems, what did I do? First, when disposing of old data CDs and DVDs you must understand there is an obvious potential security exposure. In principle, any data can be read from the CD. In practice, it may not be that simple if the data is formatted using specific backup programs, encrypted, etc. But you do have to consider this before discarding them.

I came up with a couple of easy ways to make recovering data hard. One involved scratching the recording sides (remember, some are dual sided). The scratches can be removed, but it’s a time-consuming process and not something done by someone who casually comes across your CD.

The second process used a nail held in a set of grips; I heated the nail and simply pushed a couple of holes through each CD/DVD. Again, some data could still be read by the determined, but it’s very unlikely.

Once I was done marking all the media, I threw them in an old Amazon box, took them to the US Post Office, and mailed them as “media mail” to the CD Recycling Center of America. The CD Recycling Center provides “certified destruction” of your CDs.

Our industry uses vast amounts of natural resources; it consumes rare minerals at an alarming rate, often mined in difficult, dangerous, and sometimes illegal conditions. Individually this is hard for us to do anything about. Please though, don’t throw old data CDs, DVDs, or any other discs in the garbage/trash/refuse, and especially not in the recycling.

Yes, it takes a few minutes of your time; yes, it will cost you to box, tape, address, and actually post the package back for destruction. Over the years IT has made me a lot of money; it is the least I could do. Please join me. Thank you.

 

(My) Influential Women in Tech

Taking some time out from work in the technical, software, and computer industry has been really helpful in giving my brain time to sift through the required, the necessary, the nice, and the pointless things that I’ve been involved in over 41 years in technology.

Given that today is International Women’s Day 2016, and that numerous tweets have flown by celebrating women (given the people I follow, many of them women in technology), I thought I’d take a minute to note some of the great women in tech I have had the opportunity to work with.

I was fortunate in that I spent much of my career at IBM. There is no doubt that IBM was a progressive employer on all fronts: women, minorities, the physically challenged, and that continues today with their unrelenting endorsement of the LGBT community. I never personally met or worked with current IBM CEO Ginni Rometty; she, like many that I did have the opportunity to work with, started out in systems engineering and moved into management. Those that I worked with included Barbara McDuffie, Leslie Wilkes, Linda Sanford, and many others.

Among those in management at IBM who were most influential was Anona Amis at IBM UK. Anona was my manager in 1989-1990, at a time when I was frustrated and lacking direction after joining IBM two years earlier with high hopes of doing important things. Anona, in the period of a year, taught me both how to value my contributions and how to make more valuable contributions. She was one of what I grew to learn was the backbone of IBM: professional managers.

My four women of tech may, at some time or other, have been managers. That though wasn’t why I was inspired by them.

Susan Malika: I met Sue initially through the CICS product group, when we were first looking at ways to interface a web server to the CICS transaction monitor. Sue and the team already had a prototype connector implemented as a CGI. Over the coming years, I was influenced by Sue in a number of fields, especially in data interchange and her work on XML. Sue is still active in tech.

Peggy Zagelow: I’d always been pretty dismissive of databases, apart from a brief period with SQL/DS; I’d always managed fine without one. Early on in the days of evangelizing Java, I was routed to the IBM Santa Teresa lab on an ad hoc query from Peggy about using Java as a procedures language for DB2. Her enthusiasm and dogma about the structured, relational database, as well as her ability to code eloquently in assembler, were an inspiration. We later wrote a paper together, still available online [here]. Peggy is also still active in the tech sector at IBM.

Donna Dillenberger: Sometime in 1999, Donna and the then President of the IBM Academy of Technology, Ian Brackenbury, came to the IBM Bedfont office to discuss some ideas I had on making the Java Virtual Machine viable on large-scale mainframe servers. Donna translated a group of unconnected ideas and concepts I sketched out on a whiteboard into the “Scalable JVM”. The evolution of the JVM was a key stepping stone in the IBM evolution of Java. I’m pleased to see Donna was appointed an IBM Fellow in 2015. The paper on the JVM is here(1).

Gerry Hackett: Finally, but most importantly, Geraldine, aka Gerry, Hackett. Gerry and I met when she was a first-line development manager in the IBM Virtual Machine development laboratory in Endicott, New York, sometime around 1985. While Gerry would normally fall into the category of management, she is most steadfastly still an amazing technologist. Some years later I had the [dubious] pleasure of “flipping slides” for her as Gerry presented IBM strategy. Aside: “Today’s generation will never understand the tension between a speaker and a slide turner.” Today, Gerry is a Vice President at Dell. She recruited me to work at Dell in 2009, and under her leadership the firmware and embedded management team have made steady progress and implemented some great ideas. Gerry has been a longtime advocate for women in technology, a career mentor, and a fantastic role model.

Importantly, what all these women demonstrated, by the “bucketload”, was quiet technological confidence: the ability to see, deliver, and celebrate great ideas and great people. They were quiet, unlike their male peers, not in achievement but in approach. This is why we need more women in technology; not because they are women, but because technical companies, and their products, will not be as good without them.

(1) Edited to link to the correct Dillenberger et al. paper.

Net neutrality and the FCC

If you have not seen John Oliver lay bare the FCC Net Neutrality proposal, you must. You can find it here.

I’ve shared it widely among my Facebook friends and implored them to email or contact the FCC. In the last few days I’ve been asked what I wrote. Here is my letter in full. I’ve taken the liberty of marking up a few minor corrections I wish I’d made before I sent it.

To: openinternet@fcc.gov

Sir, Madam,
I am a legal foreign resident living and working in Austin, Texas; I have lived here 8 years, and for 2 years before that in New York State. I am an Executive Director and Senior Distinguished Engineer at Dell Computer, though I write as a private individual.
The current service I receive, on a residential street just 1 mile from Austin City Hall, is expensive, slow, and the only option available to me. Yes, since the Google Fiber announcement, both Grande Communications and AT&T have announced new offerings, but neither is available [on my street].
Net net, I have no choice. Now you are proposing that the relatively expensive cable service I and my neighbors receive will be subject to a further direct or indirect charge.
It’s not at all clear that I can give away any more of my privacy to the massive cable conglomerates, so the only way your proposal would work is if either I pay the cable company more money for premium traffic, or I pay the content companies more for premium content. The end result for me, though, is that I will be paying more for the same service I had last year.
This proposed dual-speed internet traffic regulation must NOT be implemented. It is anti-competitive, allowing larger companies to outspend smaller, newer companies; it is monopolistic, allowing the incumbent cable companies to shore up their already entrenched positions by effectively charging more for the same. Ultimately, this proposal further allows monopolistic providers to shake down individuals and content companies for no real benefit.
US cable broadband is shockingly expensive. I regularly travel the world on business, and I’m sure you have the data that shows that the USA, in general, is both more expensive per megabit per second and slower in transport speeds to home users in cities like Austin than cities in what, a few years ago, we would have considered the third world.
The price of an Internet connection to homes, at a speed that makes modern home working, web conferencing, etc. effective, is now becoming an inhibitor. Soon it will have an effect on hiring patterns and wages for entry-level positions. This cannot be what you want. Are video conferencing companies and web cam/whiteboard services premium content?
Please ensure that the Net Neutrality proposal is NOT implemented and revisit cable Internet as a common carrier.
—————————-

 

LinkedIn Pulse – More walls and barriers

Dell has been encouraging team members to post to the new LinkedIn Pulse, which came to LinkedIn as part of a $90m acquisition last year. I’ve been looking at it from my phone and tablet while travelling, and wrote this summary today; I thought it worth sharing here with my tech-savvy friends, colleagues, and readers.

“I won’t be participating. I assume you never got answers to my questions, mostly because, as expected, LinkedIn are trying to recreate the best of the Web inside a walled garden.

That is, they appear to have introduced a new comment and groups system, and are encouraging posts which are really better suited to either public blogs, news websites, or company websites.

I can’t work out the link semantics, making it difficult to share anything outside LinkedIn. There seems to be little or no moderation on comments and discussions. The article on the CISO and Target is a good example; follow on to the comments.

I suspect they won’t make full articles available to search engines, and you will have to be logged in to LinkedIn to read them.

Since I’m not invested in LinkedIn, and not in classical marketing, I can’t see any value in helping LinkedIn recreate the Internet inside LinkedIn.”

Did I get it wrong? LinkedIn is trying to be the Facebook-like place where recruiters and marketing people hang out.

 

BUILD Madison

Last week I had the opportunity to go to the Dell Software Group lab in Madison, Wisconsin. I’d been there briefly once before and was impressed with the energy and community involvement; this trip was for their 3rd annual build-a-thon, managed and excellently compèred by location manager Tom Willis. I was joined by Doug Wright, who manages our common engineering team, which includes some team members in Madison.

The event was a 24-hour hack-a-thon, where the R&D staff submitted ideas in advance, formed teams, and went at the problem in a fun, relaxed, team environment. Projects didn’t have to be specifically work-based, and one team took the opportunity to build a telepresence robot using Lego(r) and an Android tablet.

It was really refreshing to see programmers going at challenges in only 24 hours, even in an environment where we do code releases for some products every 10 days. It reminded me of the CIP (continuous improvement programme) projects we used to run back in the mid-1980s, when we couldn’t get code out fast enough for the explosive PC application demand. It was idea, code, evaluate, clean up, ship, and then re-evaluate, code, “lather, rinse, repeat”.

The BUILD projects were evaluated and awarded points based on their success criteria, except for the Director’s award, which was chosen. Extra points were given if the solution used the Common User Interface UX framework and automated testing, as well as for being grouped around key Dell initiatives.

There were two main winners, with lots of honorable mentions. The main winner was “Project Timber”, a distributed log aggregator. Although I’d already declared my hand over on the @dsgbuild Twitter account, looking at the other projects, Doug and I decided on the CUI Builder for the Director’s award. Overall it was great to be around; congrats to Tom for organizing, and especially to Jenna for keeping the food and snacks flowing, and for the midnight root beer shakes. Finally, a special mention for the video confession booth: great idea, and the edited video was either really funny, or I was really tired…

For Dell Software Group employees, you can find more details and pictures here, on commons.


REPLY-TO-ALL storms

One thing that I’ve come to loathe is a “reply-to-all” email storm. They happen on all sorts of email systems, and are often made worse by making reply-to-all the default, rather than an option. They are also compounded by distribution lists, especially in big organizations. We had the perfect storm Friday afternoon: an errant email addressed to an all-org distribution list.

People’s inability to use the advanced features of the tools they spend so much of their time in, and their willingness to compound the problem with banal, stupid, inconsiderate, and just thoughtless responses, also sent reply-to-all, not only astounds me, it frustrates me.

Yes, I’m sure you want to be removed from this email chain; yes, I know you want people to stop using reply-to-all, but sending your response reply-to-all just shows you’ve not thought it through and don’t know how to use your tools. I understand the problem is compounded by mobile use, where you don’t have the same technology, and having your BlackBerry vibrate in your pocket 200 times can be a little overkill.

If you have Microsoft Outlook, and especially Outlook 2010, then help is at hand. I keep a couple of Quicksteps for these occasions; the first is called REPLYTOALL and the second is called EDUCATE. REPLYTOALL just replies to the sender, pointing out how useless their reply-to-all was. The Quickstep changes the subject, adds the text, signs the email, and sends it; finally, it deletes the original message.

If they reply, and this is known since the subject is changed, then I use EDUCATE to reply. It sends the following reply, again changing the subject, adding the text, signing the email, sending it, and finally deleting their response.

You replied to an email chain by using reply to all… asking to be removed or similar… a completely pointless effort that just added to the problem.

Interestingly you simply don’t have to do that. You have three choices that don’t need a reply at all, let alone a reply to all.

  1. After you have read the first message where you decide you don’t need to read any more, from the inbox view right-click on that message and choose Ignore.
  2. Right-click the message and select ALWAYS MOVE MESSAGES IN THIS CONVERSATION and then select Deleted Items. Or right-click and select Create Rule (this is more complex than the two above but can achieve the same thing).
  3. What I set up some time ago is a Quickstep. It’s easy to do; it’s called REPLYTOALL. I select the messages and click the QUICKSTEP, and it is set up to send the reply you got, but it also deletes the original email, and I just got the chance to update it to change the subject, as per this reply, to READ ME! And yes, just in case you wondered, yes, this reply came from another QUICKSTEP.
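For anyone not on Outlook, the same “ignore this conversation” idea can be approximated with a small script against any IMAP mailbox. Below is a minimal, hypothetical Python sketch, not my Quickstep and not an Outlook feature; the server, account, password, and storm subject are placeholders you would substitute with your own. It simply flags every message in the offending thread as deleted instead of replying to it.

# A minimal sketch, assuming a generic IMAP server; the host, account,
# credential, and subject below are placeholders, not real values.
import imaplib

IMAP_HOST = "imap.example.com"        # placeholder mail server
USER = "me@example.com"               # placeholder account
PASSWORD = "app-password"             # placeholder credential
STORM_SUBJECT = "Please remove me"    # subject of the runaway thread

def delete_storm(subject: str) -> int:
    """Flag every message in the storm thread as deleted, then expunge."""
    with imaplib.IMAP4_SSL(IMAP_HOST) as imap:
        imap.login(USER, PASSWORD)
        imap.select("INBOX")
        # IMAP SUBJECT search is a substring match, so "RE:" replies match too.
        status, data = imap.search(None, 'SUBJECT', f'"{subject}"')
        if status != "OK" or not data[0]:
            return 0
        ids = data[0].split()
        for msg_id in ids:
            # Mark as deleted rather than replying-to-all about it.
            imap.store(msg_id, "+FLAGS", "\\Deleted")
        imap.expunge()
        return len(ids)

if __name__ == "__main__":
    print(f"Deleted {delete_storm(STORM_SUBJECT)} storm messages")

The point isn’t the script itself; it’s that every mail system already gives you a way to drop the whole conversation without adding to it.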

Some of us actually think email is still a vehicle for communicating ideas, not just the quickest way to abdicate responsibility for doing anything.

 


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and member of the IBM Academy of Technology. I am a Fellow of the British Computer Society (bcs.org). I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering committee. Read more about it here.
