True, but there is reason for optimism.
First off, whether one agrees with Rob Pike or not, do not dismiss him.
His work on Unix/Plan9/Inferno/... speaks for itself. The Bell Labs
research group consistently produces powerful, compact solutions,
and Rob Pike's contributions are legion. Compare Limbo/Dis/Inferno
to Java/JVM/JavaOS. I have an enormous amount of respect for Rob Pike.
I mostly agree with the points made. But the gloom and doom is
overstated. Research is no longer the exclusive province of MIT,
CMU, Stanford, Bell Labs, Xerox PARC, and IBM. The world has changed.
"There was a claim in the late 1970s and early 1980s that Unix had killed
operating systems research because no one would try anything else. At
the time, I didn't believe it. Today, I grudgingly accept that the
claim may be true (Microsoft notwithstanding)."
This, I say without reservation as a Unix user since 1978, is
entirely the fault of AT&T and the subsequent Unix license holders.
(The lawyers, not Pike!) It did not have to be this way. The
commercialization and fracture of Unix in the 1980's killed operating
systems research. Mainstream Unix has not improved since the days of
BSD Unix. Sure, Ritchie created Streams, and it was good, but SysV
inherited bastardized, all-things-to-all-people STREAMS. Then the SysV
people ignored the whole Unix philosophy and created SysV IPC. Yuck!
Other than that, there have been few significant changes in Unix.
Until GNU/Linux mindshare exploded, the trend was for *any* academic
research in computer science to be routed through the university
intellectual property lawyers so that they could try to make a buck off
of it. And a lot of it was being done on NT, under non-disclosure.
One can't have it both ways: either one treats OS research as an
academic discipline, sharing results and building on the work of others,
or one patents and commercializes everything. The same is true of
most other scientific and artistic inquiry. Lawyers are a friction
that retards innovation.
"With so many external constraints, and so many things already done,
much of the interesting work requires effort on a large scale. Many
person-years are required to write a modern, realistic system. That
is beyond the scope of most university departments."
This is true. But today, in addition to operating systems, we have
hosted operating environments: Java, Inferno, Mozilla, ActiveX, ...
These *can* be handled by one person. (Hell, look at the Java VM --
one could do something better as a class project. :-) One thing that
would enormously aid kernel work is the completion of the Plex86
project, and its extension to all common architectures (PowerPC,
SPARC, Alpha, MIPS, StrongARM). Want to lend your considerable
talents to a worthwhile project, Mr. Pike? Jump in and help bring the
Plex86 (www.plex86.org) team up to speed building a free VM system.
People will be much more inclined to work on Plan 9, or GNU Hurd, EROS,
TUNES, etc., if they can run them in a window on Linux, crash them at
will, move files back and forth, etc. Linux developers have another
solution at the moment -- a user-mode port of Linux to its own API.
The port will probably get merged into the source tree some time.
"Linux's success may indeed be the single strongest argument for
my thesis: The excitement generated by a clone of a decades-old
operating system demonstrates the void that the systems software
research community has failed to fill."
"Besides, Linux's cleverness is not in the software, but in the
development model, hardly a triumph of academic CS (especially
software engineering) by any measure."
I remember in college how I was eager to do *real* physics research.
I took Quantum Field Theory early in my undergraduate career, then
marched into the office of one of the bright lights of that decade
and asked for something to work on. What did he say? "Well kid,
can you do QCD calculations yet? No? Well come back when you can
and we can talk."
What's the point of this example? Well, one has to crawl before one
can walk. Students re-implemented Unix because it was well-understood.
Don't denigrate them for doing the exercises at the back of the book.
Linux has proven to be maintainable and extensible, which is more
than can be said for IRIX, say.
Collaborative, evolutionary development is still in its infancy.
Sustaining it requires that stakeholders use what they develop.
(Yes, eat their own dog food. :-) That means the system must be
usable for everyday, real-world tasks. People contribute what they
can, when they can -- few individuals (unlike you) can design and
write an operating system from scratch. For most, that means reusing
existing freely available POSIX software.
Again, the free software movement had to build its own infrastructure.
GNU Hurd had an ambitious design, and ran into significant problems.
Linux took a very conservative design, and produced something usable.
The Linux kernel is evolving in interesting directions (towards Plan
9!): pseudo filesystems are proliferating, like procfs, devfs, shmfs,
usbfs, ...; Al Viro has implemented multi-mount, union mount, and has
laid the groundwork for mount traps. If he gets his way in 2.5.x,
Linux will allow per-process namespaces. Linux already has a threading
model built on a Plan 9-like clone(). POSIX-thread brain damage lives
in userland, as a library. Increasingly, POSIX compatibility is
done in glibc (as it was supposed to be done in HURD). Perhaps some
day we can get rid of those unsafe system calls.
This is being done in an *evolutionary* way -- e.g., the Unix
security model (esp. suid) and per-process namespaces interact in
difficult ways. To implement secure applets, one wants to turn off
the system calls that manipulate other namespaces, such as socket().
All of this is being worked on -- Linux is adopting the best practices
of other systems.
Even so, Linux is not the be-all-and-end-all of OS's, and it is not
intended to be.
"But technically, they're not that hot. And Microsoft has been working
hard, and I claim that on many (not all) dimensions, their corresponding
products are superior technically. And they continue to improve."
"There has been much talk about component architectures but only one
true success: Unix pipes. It should be possible to build interactive
and distributed applications from piece parts."
One thing that Microsoft can be credited with is an attempt to build
components. Unfortunately, they tied it to Microsoft-compiler virtual
tables and forgot security. It is also butt-ugly, as are many of
their APIs. These are difficult things to fix later.
What makes the pipe model work is the common currency: files and streams
of bytes, newline-terminated text. We have yet to provide a "currency"
at a higher level that provides the facilities of pipes and command
lines (namely pluggability and configurability). This is partly a
language problem -- as long as programs are linear streams of text
in multiple languages that are difficult to parse and compose, it is
difficult to move above this low-level currency. My best guess is that
the answer lies in more reflective, introspective systems, a la TUNES.
Today we have scripting (in Scheme, Python, and other languages).
Microsoft is making an effort in this area with Intentional
Programming. An explicit goal is to be able to import existing (C,
Fortran, ...) code and make good use of it. I have no idea whether
their work is worthwhile. But the need to capture intention, not
merely syntactic artifact, is clearly one of the things holding back
the development of better tools. Direct-manipulation GUIs tend to
convey even less intention than CLIs: at least CLIs can be
parameterized with variables and scripted.
"To be a viable computer system, one must honor a huge list of large,
and often changing, standards: TCP/IP, HTTP, HTML, XML, CORBA, Unicode,
POSIX, NFS, SMB, MIME, POP, IMAP, X, ..."
As you and your colleagues so elegantly wrote in "The Hideous Name" and
"The Use of Name Spaces in Plan 9", the proliferation of namespaces in
Unix and other operating systems and tools has been one of their greatest
failings. And yet committees everywhere are proliferating new namespaces
and rotten protocols and APIs. There is only one solution to this, and
it is to reuse the work of others. As you know, this is not as great
an issue with Plan 9, say, as it can use the resources of another system
(say Linux or NT) as if they were local. Again, if something like Plex86
were available, it could reuse them *in the same box*.
I hope that the team at Bell Labs will become more engaged with open projects.
Lead, and others will follow. Linus has shown how -- make people stakeholders.
Read linux-kernel and you see that despite strong disagreement and nasty
flamewars the same people continue to participate. Why? Because they are
stakeholders in Linux.
You folks are expert architects and engineers -- create a framework in
which others can develop an expanding economy of tools. Allow people to
become stakeholders. Help people see your vision. You can design much
more than you can build alone. Bell Labs did this in the seventies.
It can do it again. Without the lawyers this time. What happened
to Inferno is a disaster. Don't repeat that mistake.