
What Are Free Operating Systems? Are They “Linux”?

Originally published on LiveJournal, 4.4.08

I hope to provide you with a clear and precise answer to this, while taking you by the hand on a small journey through the history of operating systems. You’ll find that there’s some controversy over the names, and it’s quite heated. This is not merely a geek fight: the stakes, I believe, are quite high.

Some names from the world of free operating systems might have entered your consciousness. There’s Red Hat, Fedora, Ubuntu, SUSE, Gentoo, Slackware. And there are hundreds more that you’ve never heard about. In fact, there are products that let you create your own custom free operating system.

What’s going on here? If you have a personal computer, most likely it comes with some version of Microsoft Windows installed on it, unless you bought it from Apple, in which case it comes with Apple’s operating system, Mac OS X. Why are there hundreds of other options? Are those all “Linux”?

The straightforward answer is that most of these free operating systems have Linux in them, but quite a few important ones don’t. But I’m getting ahead of myself.

The term “operating system” is already charged. If you have a personal computer with Windows, you’ve probably learned that “operating system” means whatever is on your computer before you install “applications” on it. Out of the box, Windows usually comes with a paltry few accessories and utilities: there’s WordPad, Paint, and Hearts. To compete, Apple boasts a much bigger suite of applications, but these are also not considered part of the OS. My point is that Microsoft has made us used to thinking about the operating system and the applications as two separate products. Products you, in fact, buy separately.

Free operating systems don’t need that distinction, and in fact it’s quite useless for making sense of them. What they really are is software distributions (called “distros,” for short) of many, many pieces of free software. In fact, just waiting for your desktop to load, with that reassuring “tra la la” opening music, before opening any application to do your work, you’re already seeing software coming from thousands of different projects, the product of the work of tens of thousands of people. Some of these projects are decades old, some of them brand new. Most of these projects have not been planned in coordination with each other. That they can all work together seems like something of an engineering miracle.

There’s software that draws the windows and lets you drag them around the screen. There’s software that responds to your mouse movements. There’s software that connects to your sound card. Software to access your hard drive and organize the files. Software to load fonts and render them correctly on your screen. Software to connect to your wireless network card. Software to work with internet standards. Each piece of software was organized by a small team of individuals, sometimes even a sole engineer, dedicated to a very particular goal, and to nothing else. By adhering to certain technical standards, they can make sure that the pieces of software work together, even supporting software that has yet to be written.

This community of highly specialized projects, adhering by their own free choice to certain standards, is a direct inheritance from academia. And the “free” in “free operating systems” referred, at least originally, to that freely made choice to adhere to those standards.

See, in the 1960s, computers were getting cheaper. It used to be that only big corporations and research institutes could afford the big, hulking machines, filling basements and requiring a staff of operators. Suddenly, there were what we now call “mid-range” computers. These took up much less space, and though they weren’t as powerful, they could still handle a decent amount of users, and allowed smaller businesses, and smaller university departments, access to a computer. The most successful of these was the VAX, by Digital Equipment Corporation. But IBM, purveyor of the Mainframe, also had a few successful models.

When I say cheaper, I still don’t mean cheap. The research investments that went into making these things were enormous. The cost was due not so much to the hardware as to the software. There were very few programmers back then, and code was guarded carefully. While universities used VAX computers, they couldn’t modify the code, called VMS, that ran them. And they had to pay through the nose for it.

The obvious choice was to make their own software to run on VAXes and other mid-range computers. Manpower was lacking, however, and universities did not have IBM’s budget. They could, though, share resources, together amounting to something equivalent to an IBM. And so they started developing a set of standards by which they could use software from another campus together with their own. They didn’t start from zero: as a blueprint, they built on the small UNIX operating system, developed at AT&T’s Bell Labs. UNIX was somewhat unique in that it was written in a high-level language called C, whereas most operating systems at the time were written directly for the hardware. The C language was hardware-agnostic, thus allowing the UNIX code to be easily ported to different hardware. It was a good starting point for the universities’ new program, which came to be called “open computing.”
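
To give a rough, modern-day sense of what “hardware-agnostic” means, here is a tiny C sketch of my own (not taken from UNIX itself): the program’s logic is plain C and compiles anywhere, and the only machine-specific detail is isolated in one small place at the top, which is all a porter would need to touch. The __vax__ and __x86_64__ markers are names that compilers define for those machines; everything else is ordinary, portable C.

    #include <stdio.h>

    /* The one machine-dependent detail, kept in a single, small place. */
    #if defined(__vax__)
    #define PLATFORM "a VAX"
    #elif defined(__x86_64__)
    #define PLATFORM "a modern PC"
    #else
    #define PLATFORM "some other machine"
    #endif

    int main(void)
    {
        int total = 0;
        int i;
        for (i = 1; i <= 10; i++)   /* portable logic: identical on every machine */
            total += i;
        printf("1 + 2 + ... + 10 = %d, computed on %s\n", total, PLATFORM);
        return 0;
    }

A real operating system isolates far more than a label, of course, but the principle is the same: keep the machine-dependent parts small, and rewrite only those when moving to new hardware.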

VAX is pretty much dead, but software from these UNIX-like operating systems is still fully usable, and is being actively developed. The version that survived best is called BSD (Berkeley Software Distribution). Much of it is part of the core of the Darwin operating system, which powers Mac OS X. Because the pieces are small and can be assembled together, it’s trivial to take what’s useful and relevant without having to take the whole thing. Imagine, for example, taking Macintosh’s user interface and adding to it Windows’ hardware support. It’s the kind of stuff that happens all the time in open computing.

The story doesn’t quite end there, though. The notion of “freedom” wasn’t very developed or important early on. It was more important that the software was “open,” to allow development across a wide, diverse community. But as computers got cheaper and cheaper, and the personal computer was introduced, licensing started to become an issue. Could I run a free operating system at home? Well, yes and no. It depended on which parts you assembled from where. The universities still owned copyrights for all the code they developed, and in some cases they did not hesitate to step in and assert their legal rights. Just like Digital and IBM before them, they were not keen on relinquishing their decades of investment. They also protected some code with stricter licenses. It was thus “open,” but not really “free.” Using the code was risky, because you never knew when your freedom to use it might be revoked.

And so, in 1983, a group was formed to do what the universities had done all over again, except that this time the license would guarantee that the code would be open for eternity. The license ingeniously used copyright law to protect this right, and called it, in a progressive wink, “copylefting.” You’re allowed to use the code as you please (with some important limitations that were recently introduced in version 3 of the license), but any changes you make must be distributed under the same license. Richard Stallman, the hero of this project, called it GNU, which stands for “GNU's Not UNIX.” See what he did there? Funny. The community around GNU was similar to the old academic group, except that in this case many volunteers came in from industry, not strictly from academia. So, GNU was both more free and more diverse.

Now, here is where things get tricky. Many free operating systems use pieces from GNU. There are, however, many projects not within GNU, but which still use GNU’s ingenious license. The GNU project likes to see these projects as, somehow, being part of GNU. Additionally, there’s a lot of code running around from BSD. It’s thus sometimes hard to know to which group to attribute the complete operating system.

Enter Linux, to make things really heated. In 1991, Linus Torvalds, its creator, wanted his own operating system. He was frustrated with the pace of development of GNU, and annoyed with the old academic community model. He believed, and was proved correct, that a very small team, even just one person, could create an operating system much more quickly and more efficiently than GNU. Torvalds is easily frustrated by a lot of things. But he’s also a self-described pragmatist. So, while he didn’t want to work with GNU, he liked the GNU license, and used it for his operating system. Not only that, he pulled in a lot of pieces from the GNU project, which he was of course free to do.

Now, Linux was never released as an official operating system. Torvalds never really cared for such formalities, anyway. Instead, there are many free operating systems out there, which, as I mentioned earlier, are “distributions” of all kinds of pieces. Some don’t use Linux, instead relying on old (but good) BSD code and parts of GNU, while others do include Linux. So, are these “Linux”?

The answer would depend on how you measure the contribution of various projects to your final operating system. Many people would say GNU is the most important project, not only for providing so many essential pieces of software, but also for the license, which Linux uses. The licensing argument, I think, is quite unfair. If there were no GNU license, Torvalds would have simply chosen another license. The GNU license did not enable his project. Likewise, had GNU software not been available, he could have gone with BSD components instead.

Other people prefer to evaluate the relative importance of various parts of a distribution, by measuring how much they contribute to the quality of the final operating system. And that’s where it becomes very vague.

Torvalds did not write a complete operating system, a task quite impossible for one lone engineer. Linux is “only” the kernel, which is the lowest part of the operating system. It’s in charge of interfacing with the hardware, as well as managing the many programs running at once, user security, and so on. The kernel was the piece of the puzzle that bogged down the GNU project the most, and which benefited the most from Torvalds' one-man crack operation. It’s actually a terrific kernel! Even though it’s meant to be general purpose, running on machines as diverse as supercomputers and cellphones, it competes pretty well with more specialized kernels. There’s no doubt that the Linux kernel is an excellent piece of software, and that it keeps getting better.
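
To make “interfacing with the hardware” a little more concrete, here is a minimal C sketch of my own (not taken from any particular distribution). The program never touches the disk itself; it asks the kernel, through standard system calls, to do the work on its behalf.

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Ask the kernel to open a file; the kernel deals with the disk
         * hardware, permissions, caching, and any other programs that
         * happen to be using the disk at the same time. */
        int fd = open("/etc/hosts", O_RDONLY);
        if (fd < 0)
            return 1;

        char buffer[512];
        ssize_t n = read(fd, buffer, sizeof buffer);   /* the kernel copies the data to us */
        if (n > 0)
            write(STDOUT_FILENO, buffer, (size_t)n);   /* the kernel sends it to the screen */

        close(fd);
        return 0;
    }

Because these calls are part of the common UNIX-like standards, the same code compiles and runs unchanged whether the kernel underneath is Linux, a BSD kernel, or something else entirely, which is part of why the choice of kernel matters less to applications than one might expect.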

But is it really that important to have an excellent kernel for a personal computer operating system? History has shown that it isn’t. Home users have little to gain from good kernels. They mostly care about applications. Thus, the shift from Windows 98, which had a horrible kernel, to Windows XP, which featured the excellent NT kernel, was mostly evaluated in terms of desktop features and applications. In fact, for a while the “better” kernel caused compatibility troubles, and was very unreliable. Many people still think that Windows 98 Second Edition was Microsoft’s most stable operating system! A more noticeable shift happened when Apple moved its OS to version X. Previous versions were so terrible that your Mac infamously froze while it was printing. It was capable of doing only one task at a time, a rather crippling kernel limitation, which the new kernel resolved. Otherwise, many of the benefits of the new OS X had more to do with excellent quality assurance and sensible design, and ultimately had nothing to do with the kernel, or even with code, at all.

And thus, despite the fact that Linux is a truly terrific kernel, I think free operating systems would have been just as good, or nearly as good, for home uses if they used other kernels. Mac OS X’s Darwin, in fact, uses Carnegie Mellon University’s Mach kernel. There are other good choices, too, some of which have certain advantages over Linux. A kernel is necessary for these free operating systems to run, but Linux specifically is not an essential component in evaluating their quality.

I think that calling free operating systems “Linux” can be seen to diminish the work of countless others who have contributed to making free operating systems a success. Remember that many of the pieces of software predate Linux, and even many contemporary projects are not designed specifically with the Linux kernel in mind. In my opinion, it would be most respectful to mark such operating systems as “containing Linux.”

The timing of Linux’s introduction is also important. In the 1990s, lots of pieces of free software were reaching a level of quality and maturity rivaling and exceeding their parallels in commercial operating systems. The arrival of an easy-to-package kernel was just the stimulus needed to start releasing excellent free operating systems for home computers. It also enabled the first commercial success based on free operating systems, Red Hat. Red Hat had no qualms that I know of about calling their operating system “Linux.” Thus, in the same way that comets get named after their discoverers, not after the countless people who watched the skies and didn’t find them, free operating systems containing Linux came to be called simply “Linux,” despite other important differences between them.

And the story isn’t entirely over, yet. The GNU project is still promising to release its own kernel, called HURD, which is progressing at GNU’s characteristically sluggish pace. It’s not quite ready yet, but the Debian operating system, highly GNU-centered, has an experimental version which uses HURD rather than Linux. It’s quite possible that free operating systems which we now call “Linux” might one day contain no Linux at all. One wonders if they’ll have to change their name.

And there’s even more. In addition to the BSD-oriented and GNU-oriented free operating systems, which are all UNIX-like, there’s also ReactOS, which is Windows-like. Don’t laugh! For companies with huge investments in Windows, this can be a free plug-and-play replacement, and with the GNU license, it’s free forever. And as a poetic coda, know that Sun, a company that made its fortune on proprietary mid-range software, has recently released its flagship UNIX-like operating system, Solaris, under a free software license. Twenty years ago, it would have been revolutionary. These days, with so many excellent BSD-oriented and GNU-oriented free operating systems, all of which are UNIX-like, it’s kinda ho-hum news. What this news really says, though, is that the free software movement won fair and square, on capitalism’s own terms.

In the end, it doesn’t quite matter whether or not you acknowledge every single engineer, tester and documentation translator who participated in making your free operating system experience a reality. The real revolution is in the “copylefting” license, which guarantees that all these people will not be able to revoke your right to use the software. And just as important is their choice in exercising their freedom to give you that right. Whatever free operating system you choose, whether it contains Linux or not, taps into this revolution, the likes of which have not been seen in any other field of human production thus far. But freedom is catchy, and we are beginning to see GNU-like licenses appear on books, music and even food recipes. It’s a revolution in ownership of the fruits of human civilization. It doesn’t dismantle capitalism — major companies are making good money from free operating systems — but it can help in dismantling capitalism’s stranglehold on intellectual property.

I run the Ubuntu operating system at home. It’s easy to use, and features great software that helps me communicate with friends and family, research and write academic papers, organize my heaps of data, and produce music. It even entertains me at times. Oh, and it also currently contains the excellent Linux kernel!

The Other Story: IT and Offices

The article as a whole is intended for home users. People who work in IT (Information Technology) wouldn’t really need such an introduction. But, just in case you are curious...

IT is a different universe. Microsoft Windows is only an average player in IT, and has nothing close to the monopoly it has on PCs. In IT, it still very much looks like the 1960s: the major players there are still IBM, Sun, Hewlett-Packard (inheritor of Digital), etc., and the hardware is still mid-range, though updated to contemporary technology. Even IBM Mainframes still exist and are making a comeback for certain uses. And the operating systems are still very much like Digital’s old VMS for VAX, except that they have since moved to the “open computing” standards of UNIX-like operating systems. This gives companies who use their products a bit more flexibility, and the ability to run some free software packages. Now, IT isn’t interested in word processing or instant messaging; the software packages I’m talking about here are all server-side and integration software, things like web servers and “middleware.” In that world, the free software versions compete well with commercial packages.

Because they’re already in the world of open computing, it makes a lot of sense for companies to replace their commercial UNIX-like operating systems with free ones. In fact, it’s this world of IT that enabled the initial commercial success of free operating systems. Red Hat got on board by offering a “contains Linux” operating system to compete with IBM’s, Sun’s and HP’s. The price was attractive, and the performance was competitive.

If you use the internet, you’re already a user of free operating systems! Free operating systems and their software packages are especially good at delivering internet content safely and cheaply. The vast majority of web pages served today come from free operating systems.

A commercial world related to IT is that of small businesses, office-based businesses, and office-based government administration. Microsoft Windows is a much stronger competitor here than in the rest of the IT world, especially when it comes to workstations. The killer app is the Microsoft Office suite, especially Word and Outlook, which are de-facto standards for documentation and communication. Companies and governments spend a lot of money training their employees, and even certifying them, to use Microsoft products. Free operating systems have had a much more difficult time competing here than in any other field. The key to success would be making good replacements for Word and Outlook, such that employees currently trained on Microsoft products can be transitioned at little cost. There are a few packages, and they’re getting better, but it’s still an uphill battle, and Microsoft is putting up a fight. Sun, especially, has invested a lot in this competition, eventually turning its proprietary office suite into a free software one, just to hit Microsoft. Note, again, that the quality of the Linux kernel has little advantage to offer here. The battleground is “office” applications.

Well, not only there. The quality of the desktop is also important, and it’s an area where free operating systems were known to lag. But... not any more. It was Red Hat’s attempt to break into the small business market that was especially beneficial for advancement in free desktops, which eventually benefited home users. In fact, free software desktops, such as GNOME and KDE, are just as good as, if not better than, Windows’. Desktop quality is no longer an issue.

The office-based world has a few major players. Red Hat was the first, but Novell, which used to dominate the office server business, has turned itself entirely to free operating systems. Novell’s SUSE free operating system offers some of the best, cleanest desktop experiences, and it comes with Novell’s decades of experience in service and support. It’s the #1 alternative to Microsoft in many European governments. Ubuntu (from a company named Canonical) has emerged as a new player in the office-based world, too, though it’s aiming more at replacing Microsoft in its specific niche as a provider of a unified experience both at the office and at home. It seems like a smart move. Part of the argument for Windows’ success was that employees bought Windows for their home computers because they already knew how to use it from their office training. Of course, nowadays PCs are so ubiquitous that the opposite is happening: people learn Windows at home, and that familiarity gives them an advantage at the workplace. Canonical is aiming at this kind of mutual home/office feedback. So far, it’s been generating excitement, but the competition is stiff.

Meanwhile, Microsoft is making efforts at standardizing its office formats in Europe, to hold off the free office suites for a few more years. The battle is on.