Archive for the 'Linux' Category

IBM 3090 Training

Between 2001 and 2004, I had an office in the home of the mainframes, IBM Poughkeepsie, in Building 705. As a Brit, it wasn’t my natural home; nor was I a developer or a designer. As a software architect focused on software and application architectures, it never quite felt like home.

IBM Library number ZZ25-6897.

One day, on my way to lunch at the in-house cafeteria, I walked by a room whose door was usually closed. That day the door was open and there was a buzz of people coming from it. A sign outside said “Library closing, take anything you can use!”

I came away with some great books, a few of which I plan to scan and donate to either the Computer History Museum or the Internet Archive.

One of the more fun things I grabbed was a set of IBM training laserdiscs. I had no idea what I’d do with them; I had never owned a laserdisc player. I just thought they’d look good sitting on my bookshelf, especially since they are the same physical size as vinyl albums.

Now, 16 years on, I’ve spent the last 4 years digitising my entire vinyl collection, some 2,700 albums in total. One of my main focus areas has been the music of jazz producer Creed Taylor. One of the side effects of that is I’ve created a new website, ctproduced.com. In record collecting circles, I’m apparently a completionist; I try to buy everything.

And so it was that I started acquiring laserdiscs produced by Creed Taylor. It took a while, and I’m still missing Blues At Bradleys by Charles Fambrough. While I’ve not got around to writing about them in any detail, you can find them at the bottom of the entry here.

What I had left were the IBM laserdiscs. On Monday I popped the first one in; it was for the IBM 3090 Processor Complex. It was a fascinating throwback for me. I’d worked with IBM Kingston on a number of firmware and software availability issues, both as a customer and later as an IBM Senior Software Engineer.

I hope you find the video fascinating. The IBM 3090 Processor was, to the best of my knowledge, the last of the real “mainframes”. Sure, we still have IBM processor architecture machines that are compatible with the 3090 and earlier architectures, but the new systems, more powerful and more efficient, are typically single-frame systems. A parallel sysplex can support multiple mainframes, but it doesn’t require them. Enjoy!

#HEARTBLEED was 5 years ago.

I was reading through my old handwritten tech notebooks this morning, searching for some details on a Windows problem I knew I’d had before. I noticed an entry for March 28th, 2014 on the latest bug tracker list from Red Hat. One of the items on the list from the week before was the #Heartbleed bug in OpenSSL.


Image from synopsys.com
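For anyone who doesn’t remember the details, the essence of Heartbleed was a missing bounds check: the TLS heartbeat reply echoed back as many bytes as the request claimed to contain, not as many as it actually contained, leaking adjacent server memory. Here is a toy sketch of that class of bug, an illustration of the idea only, not the actual OpenSSL code, and with invented “server memory” contents.

```python
# Toy illustration of the Heartbleed class of bug; not the OpenSSL code itself.
# The buffer contents below are invented for the example.
SERVER_MEMORY = b"HEARTBEAT-PAYLOAD" + b"...private-keys-and-other-secrets..."
REAL_PAYLOAD_LEN = len(b"HEARTBEAT-PAYLOAD")

def heartbeat_vulnerable(claimed_length: int) -> bytes:
    # Trusts the attacker-supplied length field: reads past the real payload
    # and leaks whatever happens to sit next to it in memory.
    return SERVER_MEMORY[:claimed_length]

def heartbeat_fixed(claimed_length: int) -> bytes:
    # Patched behaviour: a request claiming more than was actually sent
    # is silently discarded (per RFC 6520), so there is no over-read.
    if claimed_length > REAL_PAYLOAD_LEN:
        return b""
    return SERVER_MEMORY[:claimed_length]

print(heartbeat_vulnerable(64))  # echoes the payload plus the "secrets"
print(heartbeat_fixed(64))       # returns nothing
```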

Within a couple of weeks, Jim Zemlin from the Linux Foundation contacted John Hull in the open source team at Dell, who passed the call to me. I was happy to tell Jim that Dell would sign up; I got verbal approval for the spending commitment and the job was done.

The Core Infrastructure Initiative (CII) was announced on April 24th, 2014. One of the first priorities was how to build a more solid base for funding and enabling open source developers. The first projects to receive funding were announced, with remarkable speed, on April 26th, 2014.

Five years later I’m delighted to see Dell are still members, along with the major tech vendors, especially and unsurprisingly, Google. Google employees have made substantial commitments both to CII and to open source projects in general. I remember with great appreciation many of the contributions made by the then steering committee members, especially, but not limited to, Ben Laurie and Bruce Schneier.

This blog post on synopsys.com, from May 2, 2017, has a summary entitled “Heartbleed: OpenSSL vulnerability lives on”.

My blog entries on Heartbleed and CII are here, here, and here.

There is still much to be concerned about. There are still many unpatched Apache HTTPD servers, especially versions 2.2.22 and 2.2.15, accessible on the Internet.
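If you are curious what a given host advertises, one quick place to look is the HTTP Server response header. Here is a minimal Python sketch; the host name is a placeholder, and bear in mind many servers mask or omit the header, so the absence of a version string proves nothing either way.

```python
# Minimal sketch: print the Server header a host advertises.
# "www.example.com" is a placeholder; many servers hide or rewrite this
# header, so treat the output as a hint, not an audit.
import http.client

def advertised_server(host: str, port: int = 80) -> str:
    conn = http.client.HTTPConnection(host, port, timeout=10)
    try:
        conn.request("HEAD", "/")
        return conn.getresponse().getheader("Server", "(not advertised)")
    finally:
        conn.close()

if __name__ == "__main__":
    print(advertised_server("www.example.com"))  # e.g. "Apache/2.2.22 (Unix)"
```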

Remember, just because you don’t see software, it doesn’t mean it isn’t there.

Open Distributed Challenges – Words Matter

I had an interesting exchange with Dez Blanchfield from Australia on Twitter recently. At the time, based on his tweets, I assumed Dez was an IBM employee. He isn’t, and although our paths crossed briefly at the company in 2007, as far as I’m aware we never met.

The subject was open vs. open source. Any longtime readers will know that’s part of what drove me to join IBM in 1986: to push back on the closing of doors, and to help knock down walls, in the cause of IBM openness.

At the end of our Twitter exchange (the first three tweets are included above), I promised to track down one of my earlier papers. As far as I recall, and without going through piles of hard copy in storage, this one was formally published by IBM US under a similar name, and with pretty much identical content, probably in the spring of ’96.

It is still important to differentiate between de jure and de facto standards. Open source creates new de facto standards every day, through wide adoption and implementation of that open source. While systems move much more quickly these days, at Internet speed, there is still a robust need for de jure standards: those that are legally, internationally and commonly recognised, whether or not they were first implemented through open source. Most technology standards these days are first implemented through open source, as that’s the best way to get them through standards organizations.

The PDF presented here is original, unedited, just converted to PDF from Lotus Word Pro.

Lotus Word Pro, and its predecessor, Ami Pro, are great examples of de facto standards, especially inside IBM. Following the rise of Microsoft Word and MS Office, Lotus products on the desktop effectively disappeared. Since the Lotus source code was never available, even inside IBM, the products were not only merely a de facto standard, they were never open source. In the post-Lotus desktop period, considerable effort has been put into reverse engineering the file formats, and while there are some free and chargeable convertors, and almost all of them can recover the text, most do a poor job of the formatting.

For that reason, I bought a used IBM ThinkPad T42 with Windows XP and Lotus SmartSuite, and I still have a licensed copy of Adobe Acrobat to create PDFs. Words matter: open source, open, and open standards are all great. As always, understand the limitations of each.

There are a load of my newer white papers in the ‘Wayback Machine’; if you have any problems finding them, let me know and I’ll jump-start the ThinkPad T42.

Remembering the dawn of the open source movement

and this isn’t it.


Me re-booting an IBM System 360/40 in 1975

When I first started in IT in 1974, or data processing as it was called back then, open source was the only thing. People were already depending on it, and defending their right to access the source code.

I’m delighted with the number and breadth of formal organizations that have grown up around “open source”. They are a great thing. Strength comes in numbers, as does recognition and bargaining power. Congratulations to the Open Source Initiative on everything they’ve achieved in their 20 years.

I understand the difference between closed source, (restrictively) licensed source code, free source, open source, etc. The point here isn’t to argue one over the other, but merely to illustrate the lineage that has led to where we are today.

Perhaps one of the more significant steps in the modern open source movement was the creation in 2000 of the Open Source Development Labs (OSDL), which in 2007 merged with the Free Standards Group (FSG) to become the Linux Foundation. But of course source code didn’t start there.

Some people feel that the source code fissure was opened when Linus Torvalds released his Linux operating system in 1991 as open source, while Linus and many others think the work by Richard Stallman on the GNU toolset and GNU License, started in 1983, was the first step. Stallman’s determined advocacy for source code rights and source access certainly was a big contributor to where open source is today.

But it started way before Stallman. Open source can not only trace its roots to two of the industry’s behemoths, IBM and AT&T, but the original advocacy came from them too. Back in the early 1960s, open source was the only thing. There wasn’t a software industry per se until the US Government invoked its antitrust law against IBM and AT&T, eventually forcing them, among other things, to unbundle their software and make it separately available, along with many other related conditions.

’69 is the beginning, not the end

The U.S. vs. I.B.M. antitrust case started in 1969, with the trial commencing in 1975(1). The case was specifically about IBM blocking competitive hardware makers from getting access, and customers from being able to run competitive systems, primarily S/360 architecture, using IBM software.

In the years leading up to 1969, customers had become increasingly frustrated with, and angry at, IBM’s policy of tying its software to its hardware. Since all the software at that time was available as source code, what that really meant was that a business HAD to have one IBM computer to get the source code; it could then purchase a plug-compatible manufacturer’s (PCM) computer(2), compile the source code with the manufacturer’s assembler and tools, and run the binaries on the PCM systems.

IBM made this increasingly harder as the PCM systems became more competitive. Often, large, previously IBM-only users, who would have 2, 4, sometimes even 6 IBM S/360 systems costing tens of millions of dollars, would buy a single PCM computer. The IBM on-site systems engineers (SEs) could see the struggles of the customers and, along with the customers themselves, started to push back against the policy. The SEs’ job was made harder the more their hands were tied, and the more restrictions were put on the source code.

To SHARE or not to?

For the customers in the US, one of their major user groups, SHARE, had vast experience in source code distribution; its user-created content and tools tapes were legend. What most never knew is that back in 1959, with General Motors, SHARE had its own operating system for the IBM 709 mainframe, the SHARE Operating System (SOS).

At that time there were formal support offerings, with on-site SEs who would work on problems and defects in SOS. But by 1962, IBM had introduced its own operating system for the 7090, which was incompatible with SOS, and at the same time IBM withdrew the support of its SEs and Program Support Representatives (PSRs) for working on SOS.

1965 is, to the best of my knowledge, when the open source code movement, as we know it today, started

To my knowledge, that’s where the open source code movement, as we know it today, started. Stallman’s experience with a printer driver mirrors exactly what had happened some 20 years before: the removal of source code, and the inability to build working modifications to support a business initiative, using hardware and software ostensibly already owned by the customer.

IBM made it increasingly harder to get the source code, right up until the antitrust case. By that time, many of IBM’s customers had created, and depended on, small and large modifications to IBM source code.

Antitrust outcomes

Computerworld – IBM OCO

By the mid-70s, as one of the results of years of litigation and consent decrees in the United States, IBM had been required to unbundle its software and make it available separately. Initially it was chargeable to customers who wanted to run it on PCM, non-IBM systems, but over time, as new releases and new function appeared, even customers with IBM systems saw a charge appear, especially as Field Developed Programs moved to full Program Products and so on. In a bid to stop competing products and user group offerings being developed from its products, IBM increasingly supplied its products object-code-only (OCO). This became a formal policy in 1983.

I’ve kept the press cutting from Computerworld (March 1985), shown above, since my days at Chemical Bank in New York. It pretty much sums up what was going on at the time: OCO, and users and user groups fighting back against IBM.

What this also did was give life to the formal software market. Companies were now used to paying for their software, and we’ve never looked back. In the time since those days, software with source code available has continued to flourish. With each new twist and evolution of technology, open source thrives and finds its own place, sometimes in a dominant position, sometimes subservient, in the background.

The times in the late 1950s and ’60s were the dawn of open source. If users, programmers, researchers and scientists had not fought for their rights then, it is hard to know where the software industry would be now.

Footnotes

(1) The 1969 antitrust case was eventually abandoned in 1982.

(2) The PCM industry had itself come about as a result of a 1956 antitrust case and the consent decree that followed.

Do you own the device you just bought?


Joshua Fairfield, Professor of Law at Washington and Lee University, has a great blog post that echoes exactly the sentiments I heard when Richard Stallman explained his original drive for open source, way back in the 1980s.

Fairfield argues that we don’t own the devices we buy; we are merely buying a one-time license to the software within them. He makes a great case. It’s worth the read.

One key reason we don’t control our devices is that the companies that make them seem to think – and definitely act like – they still own them, even after we’ve bought them. A person may purchase a nice-looking box full of electronics that can function as a smartphone, the corporate argument goes, but they buy a license only to use the software inside. The companies say they still own the software, and because they own it, they can control it. It’s as if a car dealer sold a car, but claimed ownership of the motor.

My favorite counter-example of this is the Logitech Squeezebox network music player system I use. It was originally created by Slim Devices as far back as 2000, with their first music player launched in 2001. Slim Devices was acquired by Logitech in 2006, and Logitech then abandoned the product line in 2012.

I started using Logitech Squeezebox in 2008, first buying a Squeezebox Boom, then a Radio, another Boom and a Touch, and I have subsequently bought a used Duet and, for my main living room, the audiophile-quality Transporter.

While there are virtually no new client/players, there is a thriving client base built around the Raspberry Pi hardware, with both client software builds and add-on audio hardware, as well as server builds for the Pi. I’ve hacked some temporary preferences into the code to solve minor problems, but by far the most impressive enhancements to the long abandoned, official server codebase are the extensions that keep up with changes in streaming services like BBC iPlayer radio and Spotify, DSD playback and streaming, and many more. On any normal, closed source platform, any one of these changes would likely have been impossible, and for many users would have made the hardware redundant.

The best place to start in the Squeezebox world is over on the forums, hosted, of course, at http://forums.slimdevices.com/

When my 1-month-old Ring video doorbell failed, it was all I could do to get Ring to respond. I spent nearly 4 hours on the phone with tech support. Not only did I have no control, since the doorbell had stopped talking to their service, but they couldn’t really help either. After the second session with support, I just said, “Look, I’m done, can you send a replacement?” The tech support agent agreed they would, but 10 days later I was still waiting for even a shipping notice, much less a replacement. While the doorbell worked as a doorbell, none of the services, motion detection, doorbell rings and so on, were any good, since Ring’s services were unavailable to my doorbell.

You don’t have to give up control when you buy a new device. You do own the skeleton of the hardware, but you’ll have to make informed choices, and probably will give up control, if you want to own the soul of the machine: its software.

Linux Foundation Certification program

I was delighted to be able to endorse the Linux Foundation’s new certification program at its recent launch, along with industry luminaries including Mark Shuttleworth.

 “Linux certification that is based on performance and is easily accessible will be key to increasing the number of qualified Linux professionals,” said Mark Cathcart, Senior Distinguished Engineer, Dell. “The Linux Foundation’s approach to this market need is smart and thoughtful and they have the proven ability to deliver.”

Although I’ve contributed little to nothing to Linux in the way of technology, I’m totally impressed by how pervasive Linux has become, from embedded to enterprise, since I wrote the chapters in the year 2000 IBM Redbook on why IBM was getting involved with Linux.

So the new Linux Foundation certification program is a perfectly logical step in furthering the skills and workforce that are driving Linux today. Congratulations to Jim Zemlin and the Linux Foundation for achieving this significant milestone.

Linux Foundation Training and Certification

Jim Zemlin’s blog entry on the certification program

Linux Foundation Press Release covering the program announcement

16 years? Wow, time to send in a donation to the Wayback Machine; I’d forgotten they have many of my old pages here and here.

OpenSSL and the Linux Foundation

Former colleague and noted open source advocate Simon Phipps recently reblogged to his webmink blog a piece that was originally written for meshedinsights.com.

I committed Dell to supporting the Linux Foundation Core Infrastructure Initiative (CII) and attended a recent day-long board meeting with other members to discuss next steps. I’m sure you understand, Simon, but for the benefit of readers, here are just two important clarifications.

By joining the Linux Foundation CII initiative, your company can contribute to helping fund developers of OpenSSL and similar technologies directly through Linux Foundation Fellowships. This is in effect the same as you (Simon) are suggesting, having companies hire experts. The big difference is that the Linux Foundation helps the developers stay independent and removes them from the current need to fund their work through the (for-profit) OpenSSL Software Foundation (OSF). They also remain independent of any large company’s controlling interest.

Any expansion of the OpenSSL team depends on the team itself being willing and able to grow. We need to be mindful of Brooks’ Mythical Man-Month. Having experts outside the team produce fixes and updates faster than they can be consumed (reviewed, tested, verified, packaged and shipped) just creates a fork if they are not adopted by the core.

I’m hopeful that this approach will pay off. The team needs to produce at least an abstract roadmap for bug fix adoption, code cleanup and features, and I look forward to seeing this. The Linux Foundation CII initiative is not limited to OpenSSL, but that is clearly the first item on the list.

More on OpenSSL, Heartbeat

I don’t propose to become an expert on OpenSSL, much less the greater security field, but I know people who are. My role in the Linux Foundation Core Infrastructure Initiative was to help Dell recognize how we can support a key industry technology, and at least give Dell the ability to have input on what comes next.

Our SonicWall team have many experts. They’ve published a great blog both on their product positioning and on its use in relation to Heartbleed and vulnerabilities, and Network Security product manager Dmitriy Ayrapetov raises the question: in a world of mostly TCP traffic, are TLS heartbeats even necessary?

The Dell SecureWorks Counter Threat Unit™ (CTU) have a blog on malware arising out of, and exploiting, the Heartbleed vulnerability. It is another great Dell resource, well worth following for those with an interest in security.

Core Infrastructure Initiative (OpenSSL)

I’m pleased to announce that Dell will be joining the Linux Foundation and a number of key industry partners in establishing the Core Infrastructure Initiative (CII). This is another open source initiative, and I’m glad to have played my part in pushing through the approval. As I mentioned in my February blog, we continue to work on three other, I think significant, initiatives.

CII is a new project to fund and support critical elements of the global information infrastructure. The Core Infrastructure Initiative enables technology companies to collaboratively identify and fund open source projects that are in need of assistance, while allowing the developers to continue their work under the community norms that have made open source so successful.

The first project under consideration to receive funds from the Initiative will be OpenSSL, which could receive fellowship funding for key developers as well as other resources to assist the project in improving its security, enabling outside reviews, and improving responsiveness to patch requests.

You can read the full Linux Foundation news release here and the New York Times already has a blog here.

Growing software influence and Dell

A few things have happened in the last couple of months that show the growing influence and maturity of the software team at Dell, and it’s been on my backlog to write them up as a blog post.

DMTF VP of Regional Chapters

Yinghua Qin, the Senior Software Manager in our Zhuhai, China laboratory, has been accepted as the new VP of Regional Chapters at the DMTF. This is an outstanding opportunity for Yinghua, who leads Foglight and a number of other software engineering projects, and also serves as the local liaison to the Sun Yat-sen University (SYSU) School of Mobile Information Engineering (SMIE). Yinghua reports to the Foglight lead architect, Geoff Vona.

Dell has actually been very proactive with the DMTF at various stages in the past. The current board chair, Winston Bumpus, was formerly a Dell employee, and my ESG colleague Jon Haas has been a major contributor to a number of standards. I for one am looking forward to the increased cooperation that working in international standards can bring.

Open Source Project

The Dell Cloud Manager product development team have open sourced their blockade test tool. Blockade is a utility for testing network failures and partitions in distributed applications. Blockade uses Docker containers to run application processes and manages the network from the host system to create various failure scenarios.
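By way of illustration, here is a hypothetical sketch of driving the blockade CLI from a Python test script. The container names n1 and n2 are invented, and it assumes a blockade configuration file already describes the containers under test and that the up, partition, join and destroy sub-commands behave as documented at the time.

```python
# Hypothetical sketch: drive the blockade CLI from a test script.
# Assumes a blockade config file already defines containers named n1 and n2.
import subprocess

def blockade(*args: str) -> None:
    subprocess.run(["blockade", *args], check=True)

def run_partition_test() -> None:
    blockade("up")                         # start the configured containers
    try:
        blockade("partition", "n1", "n2")  # split n1 and n2 into separate partitions
        # ... exercise the application here and assert on its behaviour ...
        blockade("join")                   # heal the partition
    finally:
        blockade("destroy")                # tear the containers back down

if __name__ == "__main__":
    run_partition_test()
```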

It’s a small step, but congratulations to Tim Freeman and the team for navigating through the process to produce the first new open source development project from the Dell Software Group team.

Angular giveback

A number of our development teams are using Angular.js. Once again, after an original approach in November, Sara Cowles from the Dell Cloud Manager team stepped forward and asked the right questions; after checking with other teams, I was happy to sign the Google CLA and fax it back to Google.

Yocto – Embedded Linux and Beyond

Congratulations also go to Mikey Brown from Dell’s Enterprise Systems Group (ESG). Mikey has picked up the mantle of a project I was a big supporter of when I was in ESG: Yocto. After doing a great job getting a couple of our embedded Linux offerings back on track using Yocto and the build infrastructure around it, Mikey has re-connected with the Yocto team.

Each of these on its own is a small step, but these, plus a number of other things going on, give me a good feeling that things are heading in the right direction. I’ll get to have another fascinating time hearing from students about how things look from their side of the technology field when I head over to Texas A&M University (insert “GO AGGIES” here!) to address class 481 on 2/25.

Dell joins Yocto project

One of the key activities here, outside of the VIS orchestration and automation engine, has been the work around our embedded software stack and where we are heading next. Today we committed to joining the Yocto Project, which will be aligned with the OpenEmbedded build system.

The Linux Foundation announced today, via press release, that Dell, Cavium Networks, Freescale Semiconductor, Intel, LSI, Mentor Graphics, Mindspeed, MontaVista Software, NetLogic Microsystems, RidgeRun, Texas Instruments, Tilera, Timesys, and Wind River, among others, would collaborate on a cross-compile environment enabling the development of “a complete Linux Distribution for embedded systems, with the initial target systems being ARM, MIPS, PowerPC and x86 (32 and 64 Bit)”.

I’m hopeful that this will allow our guys to continue their SDK work, allowing us to move core product technologies between chip architectures, while at the same time contributing back as we innovate around the Linux platform, building out the software build recipes and core Linux components, and preventing fragmentation.

IBM Big Box quandary

In another follow-up from EMC World, the last session I went to was “EMC System z, z/OS, z/Linux and z/VM”. I thought it might be useful to hear what people were doing in the mainframe space, although it is largely unrelated to my current job. It was almost 10 years to the day since I was at IBM writing the z/Linux strategy and hearing about early successes; strangely, current EMC CTO Jeff Nick and I had engaged in vigorous debate about the implementation details of z/Linux the night before we went and told SAP about IBM’s plans.

The EMC World session demonstrated that as much as things change, they stay the same. It also reminded me how borked the IT industry is, in that we mostly force customers to choose by pricing rather than function. 10-12 years ago, z/Linux on the mainframe was all about giving customers new function, a new way to exploit the technology they’d already invested in. It was, of course, also to further establish the mainframe’s role as a server consolidation platform through virtualization and high levels of utilization.(1)

What I heard were two conflicting and confusing stories, or at least they should be for IBM. The first was a customer who was moving all his Oracle workloads from a large IBM Power Systems server to z/Linux on the mainframe. Why? Because the licensing on the IBM Power server was too expensive. Using z/Linux and the Integrated Facility for Linux (IFL) allows organizations to do a cost avoidance exercise. Processor capacity on the IFL doesn’t count towards the total installed, general processor capacity and hence doesn’t bump up the overall software licensing costs for all the other users. It’s a complex discussion and that wasn’t the purpose of this post, so I’ll leave it at that.

This might be considered a win for IBM, but actually it was a loss. It’s also a loss for the customer. IBM lost because the processing was being moved from its growth platform, IBM Power Systems, to the legacy System z. It’s good for z since it consolidates its hold in that organization, or probably does. Once the customer has done the migration and conversion, it will be interesting to see how they feel the performance compares. IBM often refers to the IFL and its close relatives, the zIIP and zAAP, as specialty engines, giving the impression that they perform faster than the normal System z processors. It’s largely an urban myth though; these “specialty” engines really only deliver the same performance, they are just measured, monitored and priced differently.

The customer lost because they’ve spent time and effort to move from one architecture to another, really only to avoid software and server pricing issues. While the System z folks will argue the benefits of their platform, and I’m not about to “dis” them, the IBM Power server can actually deliver a good enough implementation as to make the difference largely irrelevant.

The second conflict I heard about was from EMC themselves. The other main topic of the session was a discussion about moving some of the EMC Symmetrix products off the mainframe, as customers have reported that they use too much mainframe capacity to run. The guys from EMC were thinking of moving the function of the products to commodity x86 processors and then linking those via high-speed networking into the mainframe. This would move the function out of band and save mainframe processor cycles, which in turn would avoid an upgrade, which in turn would avoid bumping up the software costs for all users.

I was surprised how quickly I interjected and started talking about WLM SRM enclaves, moving the EMC apps to run on z/Linux, and so on. That surely makes much more sense.

I was left, though, with a definite impression that there are still hard times ahead for IBM in large non-x86 virtualized servers. Not that they are not great pieces of engineering; they are. But getting to grips with software pricing once and for all should really be their prime focus, not a secondary or tertiary one. We were working towards pay-per-use once before; time to revisit, methinks.

(1) Spot the irony of this statement given the preceding “Nano, Nano” post!

The Windows Legacy

My good friend and fellow Brit Nigel Dessau posted his thoughts on, and to some degree frustrations with, Windows Vista and potentially Windows 7 today on his personal blog, here.

The problem, of course, is that they are stuck in their own legacy. If I were Microsoft, I’d declare that Windows 8 would only support Windows 7 and earlier apps and drivers in a virtual machine.

They’d declare a bunch of their lower-level interfaces deprecated with Windows 7, and those interfaces wouldn’t be accessible in Windows 8 except in a Windows 7 VM.

Then they’d make their Windows virtual machine technology abstract all physical devices, so that Windows could handle them however it thought best, and wouldn’t let applications talk to devices directly, only via the abstraction. They would have generic storage, generic network, and generic graphics interfaces that applications could write to, and Microsoft would deal with everything else.

This would initially limit the number of devices that would be supported, but that’s really the status quo anyway. They would declare how devices that want to play in the Windows space should behave and publish the specs, and Microsoft would own the testing and, to a degree, the validation of almost all drivers; or they could farm this out to a separate organization that would independently certify the device, not write the code. Once they stabilised the generic interfaces though, the whole Windows system itself would become more stable.

This would be a big step for Microsoft. When you look at the Windows ecosystem, there are hundreds of thousands of Windows applications and utilities. Way too many of them, though, exist to deal with the inadequacies of Windows itself, or with missing function. Cut out the ability to write these sorts of applications and there will be, at the least, an infrastructure developer backlash. It might even provoke more antitrust claims. While I know nothing about the iPhone, this would likely put Windows 8 in the same position with respect to developers.

For all I know, this could be what they have in mind. It’s an area I need to get up to speed on with them, along with the processor roadmaps for AMD and Intel, as well as understanding where Linux is headed.

IBM Announces Plans to Acquire Transitive

As is the way with these things, public comment is full of legal trip-wires, none of which I propose to activate. Suffice it to say that today IBM announced plans to acquire Transitive, who provide the core technology for PowerVM Lx86.

We’ve also done due diligence on the patents and copyrights for the Intel SSE instruction set and will be looking at how we can upgrade the level of Intel support provided in Lx86.

Lx86 on Power update

I had an interesting discussion with an IBM Client IT Architect earlier today; his customer wants to run Windows on his IBM Power Systems server. It wasn’t a new discussion; I’d had it numerous times over the past 10 years or so, only in the old days the target platform was System z, aka the mainframe. Let the record show we even had formal meetings with Microsoft back in the late ’90s about porting their then HAL and WIN32. There were lots of reasons why it didn’t work out.

Only these days we think it’s a much more interesting proposition. Given that the drive to virtualize x86 servers, to consolidate from a management and energy efficiency perspective, is now all the rage with many clients, the story doesn’t have to be sold; you just have to explain how much better at it IBM Power servers are. Now of course we don’t run Windows, and that’s where this conversation got interesting.

His client wanted to virtualize. They’d got caught up in some of the early gold rush to Linux and had replaced a bunch of Windows print and low-access file servers with Linux running on the same hardware; it worked well, job done. Roll forward 3 years and now the hardware is creaking at best. The client hadn’t moved any other apps to Linux and was centralizing around larger, virtualized x86 servers to save license costs for Windows.

I’ve no idea what they’ll do next, but my point was: it’s not Windows you need, it’s Linux. And, if you want to centralise around a large virtualized server, it’s not x86 but Power. You can either port the apps to Linux on Power, or, if as you say they don’t want to or can’t port, it’s more than likely they can run the apps with Lx86.

The latest release of PowerVM Lx86 is V1.3, and it is available now. We’ve added support for some new instructions and improved the performance of processing other instructions. We provide support for additional Linux operating systems:

  • SUSE Linux Enterprise Server 10 Service Pack 2 for Power
  • Red Hat Enterprise Linux 4 update 7 for Power

and have simplified a number of installation-related activities, for example by embedding the PowerVM Lx86 installation in the IBM Installation Toolkit for Linux v3.1. Also:

  • Archiving of a previously installed environment for backup or migration to other systems.
  • Automated, non-interactive installation, including installation from an archive.
  • SELinux support when PowerVM Lx86 is running on RHEL.

PowerVM Lx86 is provided with the PowerVM Express, Standard, and Enterprise Editions.

And so back to the question at hand: why not Windows? Technically there is no real reason. Yes, there are some minor architecture differences, but these can be handled via traps and then fixed up in software or firmware (a toy sketch of the idea follows below). The real issue from my perspective is support. If your vendor/ISV won’t support their software running on Windows on the server, or at a minimum requires you to recreate the problem in a supported environment (and we all know how hard that can be), why would you do it?
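To illustrate the general trap-and-fix idea mentioned above, and only the general idea, not how Lx86 actually does it, here is a toy Python sketch: the pretend “hardware” faults on an instruction it doesn’t implement, and a software handler emulates the instruction and carries on.

```python
# Toy illustration of trap-and-emulate; the "instructions" are just tuples
# over a tiny register file. This is not the Lx86 implementation.
class IllegalInstruction(Exception):
    pass

REGS = {"r0": 2, "r1": 3}

def hardware_execute(op, *args):
    """Pretend hardware: implements ADD natively, faults on anything else."""
    if op == "ADD":
        REGS[args[0]] = REGS[args[1]] + REGS[args[2]]
    else:
        raise IllegalInstruction(op)

def emulate(op, *args):
    """Software handler: fixes up instructions the 'hardware' lacks."""
    if op == "MUL":
        REGS[args[0]] = REGS[args[1]] * REGS[args[2]]
    else:
        raise NotImplementedError(op)

program = [("ADD", "r0", "r0", "r1"),   # runs natively
           ("MUL", "r1", "r0", "r0")]   # traps, then is emulated

for insn in program:
    try:
        hardware_execute(*insn)   # fast path
    except IllegalInstruction:
        emulate(*insn)            # trap: fix it up in software and continue

print(REGS)  # {'r0': 5, 'r1': 25}
```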

Support has always been the biggest problem when introducing any new emulated or virtualized environment, and it’s not at all clear that it is resolved yet even on x86 virtualized environments. Then there are those pesky license agreements you either sign or agree to by “clicking”. These normally restrict the environments you can run the software on. Legally, we are also restricted in what we can emulate; patent and copyright laws apply across hardware too. “Just Do It” might be a slogan that went a long way for Nike marketing, but it’s not something I’ve heard a lawyer advise.


About & Contact

I'm Mark Cathcart, formerly a Senior Distinguished Engineer in Dell's Software Group; before that, Director of Systems Engineering in the Enterprise Solutions Group at Dell. Prior to that, I was an IBM Distinguished Engineer and member of the IBM Academy of Technology. I am a Fellow of the British Computer Society (bcs.org). I'm an information technology optimist.


I was a member of the Linux Foundation Core Infrastructure Initiative Steering Committee. Read more about it here.
