Archive for the 'Linux' Category

Linux Foundation Certification program

I was delighted to be able to endorse the Linux Foundation’s new certification program at its recent launch, along with industry luminaries including Mark Shuttleworth.

 “Linux certification that is based on performance and is easily accessible will be key to increasing the number of qualified Linux professionals,” said Mark Cathcart, Senior Distinguished Engineer, Dell. “The Linux Foundation’s approach to this market need is smart and thoughtful and they have the proven ability to deliver.”

Although I’ve contributed little to nothing to Linux in the way of technology, I’m impressed by how pervasive Linux has become, from embedded to enterprise, since I wrote the chapters in the year 2000 IBM Redbook on why IBM was getting involved with Linux.

So the new Linux Foundation certification program is a perfectly logical step in furthering the skills and workforce that are driving Linux today. Congratulations to Jim Zemlin and the Linux Foundation for achieving this significant milestone.

Linux Foundation Training and Certification

Jim Zemlin’s blog entry on the certification program

Linux Foundation Press Release covering the program announcement

16 years? Wow, time to send in a donation to the “Wayback Machine”; I’d forgotten they have many of my old pages here and here.

OpenSSL and the Linux Foundation

Former colleague and noted open source advocate Simon Phipps recently reblogged to his webmink blog a piece that was originally written for meshedinsights.com.

I committed Dell to support the Linux Foundation Core Infrastructure Initiative (CII) and attended a recent day-long board meeting with other members to discuss next steps. I’m sure you understand this, Simon, but for the benefit of readers here are just two important clarifications.

By joining the Linux Foundation CII initiative, your company can contribute to helping fund developers of OpenSSL and similar technologies directly through Linux Foundation Fellowships. This is in effect the same as you (Simon) are suggesting: having companies hire experts. The big difference is that the Linux Foundation helps the developers stay independent and removes them from the current need to fund their work through the (for-profit) OpenSSL Software Foundation (OSF). They also remain independent of any large company’s controlling interest.

Any expansion of the OpenSSL team depends on the team itself being willing and able to grow. We need to be mindful of Brooks’s Mythical Man-Month: having experts outside the team producing fixes and updates faster than they can be consumed (reviewed, tested, verified, packaged and shipped) just creates a fork if the changes are not adopted by the core.

I’m hopeful that this approach will pay off. The team needs to produce at least an abstract roadmap for bug-fix adoption, code cleanup and features, and I look forward to seeing it. The Linux Foundation CII initiative is not limited to OpenSSL, but that is clearly the first item on the list.

More on OpenSSL, Heartbeat

I don’t propose to become an expert on OpenSSL, much less the greater security field, but I know people who are. My role in the Linux Foundation Core Infrastructure Initiative was to help Dell recognize how we can support a key industry technology, and at least give Dell the ability to have input on what comes next.

Our SonicWall team have many experts. They’ve published a great blog post on their product positioning and use in relation to Heartbleed and related vulnerabilities, and Network Security product manager Dmitriy Ayrapetov raises the question: in a world of mostly TCP traffic, are TLS heartbeats even necessary?

The Dell SecureWorks Counter Threat Unit™ (CTU) have a blog post on malware arising out of, and exploiting, the Heartbleed vulnerability. Another great Dell resource, well worth following for those with an interest in security.

Core Infrastructure Initiative (OpenSSL)

I’m pleased to announce that Dell will be joining the Linux Foundation and a number of key industry partners in establishing the Core Infrastructure Initiative (CII). This is another open source initiative, and I’m glad to have played my part in pushing through the approval. I mentioned this in my February blog post, and we continue to work on three other, I think significant, initiatives.

CII is a new project to fund and support critical elements of the global information infrastructure. The Core Infrastructure Initiative enables technology companies to collaboratively identify and fund open source projects that are in need of assistance, while allowing the developers to continue their work under the community norms that have made open source so successful.

The first project under consideration to receive funds from the Initiative will be OpenSSL, which could receive fellowship funding for key developers as well as other resources to assist the project in improving its security, enabling outside reviews, and improving responsiveness to patch requests.

You can read the full Linux Foundation news release here, and the New York Times already has a blog post here.

Growing software influence and Dell

A few things have happened in the last couple of months that show the growing influence and maturity of the software team at Dell, and it’s been on my backlog to write them up as a blog post.

DMTF VP of Regional Chapters

Yinghua Qin, the Senior Software Manager in our Zhuhai, China laboratory, has been accepted as the new VP of Regional Chapters at the DMTF. This is an outstanding opportunity for Yinghua, who leads Foglight and a number of other software engineering projects, and serves as the local liaison to the Sun Yat-sen University (SYSU) School of Mobile Information Engineering (SMIE). Yinghua reports to the Foglight lead architect, Geoff Vona.

Dell has actually been very proactive with the DMTF at various stages in the past. The current board chair, Winston Bumpus, was formerly a Dell employee, and my ESG colleague Jon Haas has been a major contributor to a number of standards. I for one am looking forward to the increased cooperation that working on international standards can bring.

Open Source Project

The Dell Cloud Manager product development team have open sourced their blockade test tool. Blockade is a utility for testing network failures and partitions in distributed applications. Blockade uses Docker containers to run application processes and manages the network from the host system to create various failure scenarios.
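
To make that concrete, here is a minimal sketch (mine, not from the Dell team) of driving the blockade command line from a Python test to simulate a network partition. It assumes blockade and Docker are installed, and that a blockade.yml in the working directory defines containers named c1, c2 and c3; those names are purely illustrative.

```python
# Minimal sketch: drive the blockade CLI from Python to simulate a partition.
# Assumes blockade and Docker are installed, and that blockade.yml in the
# current directory defines containers c1, c2 and c3 (illustrative names).
import subprocess

def blockade(*args):
    """Run a blockade subcommand and return its standard output."""
    result = subprocess.run(["blockade", *args],
                            check=True, capture_output=True, text=True)
    return result.stdout

blockade("up")                # start the containers defined in blockade.yml
print(blockade("status"))     # show container and network state
blockade("partition", "c1")   # isolate c1 from the other containers
# ... exercise the application here and check it tolerates the partition ...
blockade("join")              # heal the partition
blockade("destroy")           # tear the containers down
```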

It’s a small step, but congratulations to Tim Freeman and the team for navigating through the process to produce the first new open source development project from the Dell Software Group team.

Angular giveback

A number of our development teams are using Angular.js. Once again, after an original approach in November, Sara Cowles from the Dell Cloud Manager team stepped forward and asked the right questions; after checking with other teams, I was happy to sign the Google CLA and fax it back to Google.

Yocto – Embedded Linux and Beyond

Congratulations also go to Mikey Brown from Dell’s Enterprise Systems Group (ESG). Mikey has picked up the mantle of a project I was a big supporter of when I was in ESG: Yocto. After doing a great job getting a couple of our embedded Linux offerings back on track using Yocto, and the build infrastructure around it, Mikey has re-connected with the Yocto team.

Each of these on its own is a small step, but together with a number of other things going on, they give me a good feeling that things are heading in the right direction. I’ll have another fascinating time hearing from students about how things look from their side of the technology field when I head over to Texas A&M University (insert "GO AGGIES" here!) to address class 481 on 2/25.

Dell joins Yocto project

One of the key activities here, outside of the VIS orchestration and automation engine, has been the work around our embedded software stack and where we are heading next. Today we committed to joining the Yocto project, which will be aligned with the OpenEmbedded build system.

The Linux Foundation announced today, via press release, that Dell, Cavium Networks, Freescale Semiconductor, Intel, LSI, Mentor Graphics, Mindspeed, MontaVista Software, NetLogic Microsystems, RidgeRun, Texas Instruments, Tilera, Timesys, and Wind River, among others, would collaborate on a cross-compile environment enabling the development of “a complete Linux Distribution for embedded systems, with the initial target systems being ARM, MIPS, PowerPC and x86 (32 and 64 Bit)”.

I’m hopeful that this will allow our guys to continue their SDK work, letting us move core product technologies between chip architectures while at the same time contributing back as we innovate around the Linux platform, building out the software build recipes and core Linux components, and preventing fragmentation.

IBM Big Box quandary

In another follow-up from EMC World, the last session I went to was “EMC System z, z/OS, z/Linux and z/VM”. I thought it might be useful to hear what people were doing in the mainframe space, although it is largely unrelated to my current job. It was almost 10 years to the day since I was at IBM writing the z/Linux strategy, hearing about early successes, etc., and, strangely, current EMC CTO Jeff Nick and I were engaged in vigorous debate about implementation details of z/Linux the night before we went and told SAP about IBM’s plans.

The EMC World session demonstrated that as much as things change, they stay the same. It also reminded me how borked the IT industry is, in that we mostly force customers to choose by pricing rather than function. 10-12 years ago, z/Linux on the mainframe was all about giving customers new function, a new way to exploit the technology they’d already invested in. It was of course also to further establish the mainframe’s role as a server consolidation platform through virtualization and high levels of utilization.(1)

What I heard were two conflicting and confusing stories; at least, they should be confusing for IBM. The first was a customer who was moving all his Oracle workloads from a large IBM Power Systems server to z/Linux on the mainframe. Why? Because the licensing on the IBM Power server was too expensive. Using z/Linux and the Integrated Facility for Linux (IFL) allows organizations to do a cost-avoidance exercise. Processor capacity on the IFL doesn’t count towards the total installed general processor capacity, and hence doesn’t bump up the overall software licensing costs for all the other users. It’s a complex discussion and that wasn’t the purpose of this post, so I’ll leave it at that.
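
That said, here is a purely hypothetical illustration of the cost-avoidance arithmetic; the numbers are invented for the example and not taken from any customer. Software licensed on general-purpose processor capacity does not get more expensive when the extra capacity arrives as IFLs.

```python
# Hypothetical illustration only: software charged on general-purpose processor
# capacity is unaffected by workload that runs on IFL engines instead.
general_purpose_capacity = 1000   # installed general-purpose capacity (arbitrary units)
ifl_capacity = 400                # capacity added as IFLs running z/Linux

# If the same workload ran on general-purpose processors, the capacity the
# software charges are based on would grow; on IFLs it does not.
capacity_if_run_on_general = general_purpose_capacity + ifl_capacity  # 1400
capacity_with_ifl = general_purpose_capacity                          # 1000

print(capacity_if_run_on_general, capacity_with_ifl)
```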

This might be considered a win for IBM, but actually it was a loss. It’s also a loss for the customer. IBM lost because the processing was being moved from its growth platform, IBM Power Systems, to the legacy System z. It’s good for z since it consolidates its hold in that organization, or probably does. Once the customer has done the migration and conversion, it will be interesting to see how they feel the performance compares. IBM often refers to the IFL and its close relatives, the zIIP and zAAP, as specialty engines, giving the impression that they perform faster than the normal System z processors. That’s largely an urban myth, though: these “specialty” engines really only deliver the same performance; they are just measured, monitored and priced differently.

The customer lost because they’ve spent time and effort to move from one architecture to another, really only to avoid software and server pricing issues. While the System z folks will argue the benefits of their platform, and I’m not about to “dis” them, the IBM Power server can pretty much deliver a good enough implementation to make the difference largely irrelevant.

The second conflict I heard about was from EMC themselves. The other main topic of the session was a discussion about moving some of the EMC Symmetrix products off the mainframe, as customers have reported that they use too much mainframe capacity to run. The guys from EMC were thinking of moving the function of the products to commodity x86 processors and then linking those via high-speed networking into the mainframe. This would move the function out of band and save mainframe processor cycles, which in turn would avoid an upgrade, which in turn would avoid bumping the software costs up for all users.

I was surprised how quickly I interjected and started talking about WLM SRM enclaves and moving the EMC applications to run under z/Linux, etc. Surely this makes much more sense.

I was left, though, with a definite impression that there are still hard times ahead for IBM in large non-x86 virtualized servers. Not that they aren’t great pieces of engineering; they are. But getting to grips with software pricing once and for all should really be their prime focus, not a secondary or tertiary one. We were working towards pay-per-use once before; time to revisit, methinks.

(1) Spot the irony of this statement given the preceding “Nano, Nano” post!


About & Contact

I'm Mark Cathcart, Senior Distinguished Engineer in Dell's Software Group. I was formerly Director of Systems Engineering in the Enterprise Solutions Group at Dell, and an IBM Distinguished Engineer and member of the IBM Academy of Technology. I'm an information technology optimist.
