Aug 22

An Open Letter To Darl McBride

Mr. McBride:

Late yesterday, I learned that you have charged
that your company is the victim of an insidious conspiracy
masterminded by IBM. You have urged the press and public to believe
that the Open Source Initiative and the Free Software Foundation and
Red Hat and Novell and various Linux enthusiasts are up in arms not
because of beliefs or interests of their own, but because little gray
men from Armonk have put them up to it. Bwahahaha! Fire up the
orbital mind-control lasers!

Very few things could possibly illustrate the brain-boggling
disconnect between SCO and reality with more clarity than hearing you
complain about how persecuted your company is. You opened this fight
on 6 March by accusing the open-source community of
criminality and incompetence as a way to set up a lawsuit against IBM.
You have since tried to seize control of our volunteer work for your
company’s exclusive gain, and your lawyers have announced
the intention
to destroy not just the GPL but all the open-source
licenses on which our community is built. It’s beyond me how you can have
the gall to talk as though we need funding or marching orders from IBM
to mobilize against you. IBM couldn’t stop us from coming after you
if it tried.

I’m not sure which possibility is more pathetic — that the
CEO of SCO is lying through his teeth for tactical reasons, or that
you genuinely aren’t capable of recognizing honest outrage when you
see it. To a manipulator, all behaviors are manipulation. To a
conspirator, all opposition is conspiracy. Is that you? Have you
truly forgotten that people might make common cause out of integrity,
ethical considerations, or simple self-defense? Has the reality you
inhabit truly become so cramped and ugly?

I’m in at least semi-regular communication with most of the people
and organizations who are causing you problems right now. The only
conspiracy among us is the common interest in preventing the
open-source community from being destroyed by SCO’s greed and
desperation. (And we think it’s a perfect sign of that desperation
that at SCOforum you ‘proved’ your relevance by
bragging about the amount of press coverage SCO generates. Last I checked,
companies demonstrated relevance by showing products, not
press clippings.)

Yes, one of the parties I talk with is, in fact, IBM. And you know
what? They’re smarter than you. One of the many things they
understand that you do not is that in the kind of confrontation SCO
and IBM are having, independent but willing allies are far better
value than lackeys and sock puppets. Allies, you see, have initiative
and flexibility. The time it takes a lackey to check with HQ for
orders is time an ally can spend thinking up ways to make your life
complicated that HQ would be too nervous to use. Go on, try to
imagine an IBM lawyer approving this letter.

The very best kind of ally is one who comes to one’s side for
powerful reasons of his or her own. For principle. For his or her
friends and people. For the future. IBM has a lot of allies of that
kind now. It’s an alliance you drove together with your
arrogance, your overreaching, your insults, and your threats.

And now you cap it all with this paranoid ranting. It’s classic,
truly classic. Was this what you wanted out of life, to end up
imitating the doomed villain in a cheesy B movie? Tell me, does that
dark helmet fit comfortably? Are all the minions cringing in proper form?
“No, Mr. Torvalds, I expect you to die!” I’d ask if you’d
found the right sort of isolated wasteland for your citadel of dread yet, but
that would be a silly question; you’re in Utah, after all.

It doesn’t have to be this way. Sanity can still prevail. Here’s
the message that Jeff Gerhardt read at SCOforum again:

In recent months, the company formerly known as Caldera and now
trading as SCO has alleged that the 2.4 Linux kernel contains code
misappropriated from it. We in the open-source community are
respectful of intellectual-property rights, and take pride in our
ability to solve our own problems with our own code. If there is
infringing code in the Linux kernel, our community wants no part of it
and will remove it.

We challenge SCO to specify exactly which code it believes to be
infringing, by file and line number, and on what grounds it is
infringing. Only with disclosure can we begin the process of
remedying any breach that may exist. If SCO is truly concerned about
protecting its property, rather than simply using the mere accusations
as a pretext to pump its stock price and collect payoffs from
Microsoft for making trouble, then it will welcome the opportunity to
have its concerns resolved as quickly and with as little disruption as
possible. We are willing to cooperate with that.

The open-source community is not, however, willing to sit idly by
while SCO asserts proprietary control, and the right to collect
license fees, over the entirety of Linux. That is an unacceptable
attempt to hijack the work thousands of volunteer programmers
contributed in good faith, and must end.

If SCO is willing to take the honest, cooperative path forward, so are
we. If it is not, let the record show that we tried before resorting
to more confrontational means of defending our community against predation.

Linus Torvalds is backing me on this, and our other chieftains and
philosopher-princes will as well. Show us the overlaps. If your code
has been inserted in our work, we’ll remove it — not because
you’ve threatened us but because that’s the right thing to do, whether
the patches came from IBM or anywhere else. Then you can call off
your lawyers and everyone will get to go home happy.

Take that offer while you still can, Mr. McBride. So far your
so-called ‘evidence’ is crap;
you’d better climb down off your high horse before we shoot that
sucker entirely out from under you. How you finish the contract fight
you picked with IBM is your problem. As the president of OSI,
defending the community of open-source hackers against predators and
carpetbaggers is mine — and if you don’t stop trying to destroy
Linux and everything else we’ve worked for I guarantee you
won’t like what our alliance is cooking up next.

And in case it’s not pellucidly clear by now, not one single
solitary damn thing I have said or published since 6 March (or at any
time previously for that matter) has been at IBM’s behest. I’m very
much afraid it’s all been me, acting to serve my people the best way I
know how. IBM doesn’t have what it would take to buy me away from
that job and neither do you. I’m not saying I don’t have a price
— but it ain’t counted in money, so I won’t even bother being
insulted by your suggestion.

You have a choice. Peel off that dark helmet and deal with us like
a reasonable human being, or continue down a path that could be bad
trouble for us but will be utter ruin — quite possibly
including jail time on fraud, intellectual-property theft, barratry,
and stock-manipulation charges — for you and the rest of SCO’s
top management. You have my email, you can have my phone if you want
it, and you have my word of honor that you’ll get a fair hearing for
any truths you have to offer.

Eric S. Raymond

President, Open Source Initiative

Friday, 22 August 2003

Blogspot comments

Jun 14

Hacking and Refactoring

In 2001, there was a history-making conference of software-engineering
thinkers in Snowbird, Utah. The product of that meeting was a remarkable
document called the Agile Manifesto,
a call to overturn many of the assumptions of traditional software development.
I was invited to be at Snowbird, but couldn’t make it.

Ever since, though, I’ve been sensing a growing convergence between
agile programming and the open-source movement. I’ve seen agile
concepts and terminology being adopted rapidly and enthusiastically by
my colleagues in open-source-land—especially ideas like
refactoring, unit testing, and design from stories and personas. From
the other side, key agile-movement figures like Kent Beck and Martin
Fowler have expressed strong interest in open source both in published
works and to me personally. Fowler has gone so far as to include
open source on his list of agile-movement schools.

I agree that we belong on that list. But I also agree with
Fowler’s description of open source as a style, rather than a
process. I think his reservations as to whether open source can be
described as just another agile school are well-founded. There is
something more complicated and interesting going on here, and I
realized when I read Fowler’s description of open source that at some
point I was going to have to do some hard thinking and writing in an
effort to sort it all out.

While doing research for my forthcoming book, The Art of Unix
Programming, I read one particular passage in Fowler’s
Refactoring that finally brought it all home. He writes:

One argument is that refactoring can be an alternative to up-front
design. In this scenario, you don’t do any design at all. You just
code the first approach that comes into your head, get it working, and
then refactor it into shape. Actually, this approach can work. I’ve
seen people do this and come out with a very well-defined piece of
software. Those who support Extreme Programming often are portrayed
as advocating this approach.

I read this, and had one of those moments where everything comes
together in your head with a great ringing crash and the world assumes
a new shape—a moment not unlike the one I had in late 1996
when I got the central insight that turned into The Cathedral
and the Bazaar. In the remainder of this essay I’m going to
try to articulate what I now think I understand about open source,
agile programming, how they are related, and why the connection should
be interesting even to programmers with no stake in either movement.

Now I need to set a little background here, because I’m going
to have to talk about several different categories which are
contingently but not necessarily related.

First, there is the Unix programmer. Unix is the operating
system with the longest living tradition of programming and design.
It has an unusually strong and mature technical culture around it, a
culture which originated or popularized many of the core ideas and
tools of modern software design. The Art of Unix
Programming is a concerted attempt to capture the craft wisdom
of this culture, an effort in which I have successfully enlisted quite a few
of its founding elders.

Second, there is the hacker. This is a very complex term, but
more than anything else, it describes an attitude—an
intentional stance that relates hackers to programming and other
disciplines in a particular way. I have described the hacker stance
and its cultural correlates in detail in How To Become A Hacker.

Third, there is the open-source programmer. Open source is a
programming style with strong roots in the Unix tradition and the
hacker culture. I wrote the modern manifesto for it in 1997, The
Cathedral and the Bazaar, building on earlier thinking by
Richard Stallman and others.

These three categories are historically closely related. It is
significant that a single person (accidentally, me) wrote touchstone
documents for the second and third and is attempting a summa
of the first. That personal coincidence reflects a larger
social reality that in 2003 these categories are becoming increasingly
merged — essentially, the hacker community has become the core
of the open-source community, which is rapidly re-assimilating the
parts of the Unix culture that got away from the hackers during
the ten bad years after the AT&T divestiture in 1984.

But the relationship is not logically entailed; we can imagine
a hacker culture speaking a common tongue other than Unix and C (in
the far past its common tongue was Lisp), and we can imagine an
explicit ideology of open source developing within a cultural and
technical context other than Unix (as indeed nearly happened several
different times).

With this scene-setting done, I can explain that my first take on
Fowler’s statement was to think “Dude, you’ve just described hacking!”

I mean something specific and powerful by this. Throwing together
a prototype and refactoring it into shape is a rather precise
description of the normal working practice of hackers since that
culture began to self-define in the 1960s. Not a complete one, but it
captures the most salient feature of how hackers relate to code. The
open-source community has inherited and elaborated this practice,
building on similar tendencies within the Unix tradition.
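
Here is a minimal sketch of that pattern, in the form hackers trust most.
The toy function is my own invention, not anyone’s production code; the
point is the shape of the change, not the function itself. You write the
first thing that works, then refactor a design out of it:

    # First cut: the prototype you throw together to get something working.
    def word_freq(path):
        freq = {}
        for line in open(path):
            for word in line.lower().split():
                freq[word] = freq.get(word, 0) + 1
        return freq

    # After refactoring: the same job, but a design has emerged --
    # tokenizing and counting are now separate, nameable,
    # independently testable steps.
    import collections
    import re

    def tokenize(text):
        return re.findall(r"[a-z']+", text.lower())

    def word_freq(path):
        with open(path) as f:
            return collections.Counter(tokenize(f.read()))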

The way Fowler writes about design-by-refactoring has two huge
implications for the relationship between open source and agile programming:

First, Fowler writes as though he didn’t know he was describing
hacking. In the passage, he appears unaware that design by
repeated refactoring is not just a recent practice semi-accidentally
stumbled on by a handful of agile programmers, but one which hundreds
of thousands of hackers have accumulated experience with for over three
decades and have in their bones. There is a substantial folklore, an
entire craft practice, around this!

Second, in that passage Fowler described the practice of hacking
better than hackers themselves have done. Now, admittedly,
the hacker culture has simply not had that many theoreticians, and if
you list the ones that are strongly focused on development methodology
you lose Richard Stallman and are left with, basically, myself and
maybe Larry Wall (author of Perl and occasional funny and illuminating
ruminations on the art of hacking). But the fact that we don’t have a
lot of theoreticians is itself an important datum; we have always
tended to develop our most important wisdoms as unconscious and
unarticulated craft practice.

These two observations imply an enormous mutual potential, a gap
across which an arc of enlightenment may be beginning to blaze. They
imply two things:

First, people who are excited by agile-programming ideas can
look to open source and the Unix tradition and the hackers for the
lessons of experience
. We’ve been doing a lot of the stuff the
agile movement is talking about for a long time. Doing it in a
clumsy, unconscious, learned-by-osmosis way, but doing it
nevertheless. I believe that we have learned things that you agile
guys need to know to give your methodologies groundedness. Things
like (as Fowler himself observes) how to manage communication and
hierarchy issues in distributed teams.

Second, open-source hackers can learn from agile programmers
how to wake up
. The terminology and conceptual framework of
agile programming sharpens and articulates our instincts. Learning to
speak the language of open source, peer review, many eyeballs, and
rapid iterations gave us a tremendous unifying boost in the late
1990s; I think becoming similarly conscious about agile-movement ideas
like refactoring, unit testing, and story-centered design could be
just as important for us in the new century.
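
For instance, “unit testing” is just the conscious, tool-supported version
of a reflex we already have. Here is a minimal sketch using Python’s stock
unittest module (the toy tokenizer is mine); once a behavior is pinned down
like this, you can refactor the implementation as ruthlessly as you please:

    import re
    import unittest

    def tokenize(text):
        # The unit under test: a toy tokenizer.
        return re.findall(r"[a-z']+", text.lower())

    class TokenizeTest(unittest.TestCase):
        def test_case_folding(self):
            self.assertEqual(tokenize("Foo FOO foo"), ["foo", "foo", "foo"])

        def test_punctuation_stripped(self):
            self.assertEqual(tokenize("don't stop, believing!"),
                             ["don't", "stop", "believing"])

    if __name__ == "__main__":
        unittest.main()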

I’ve already given an example of what the agile movement has to
teach the hackers, in pointing out that repeated redesign by
refactoring is a precise description of hacking. Another thing we can
stand to learn from agile-movement folks is how to behave so that we
can actually develop requirements and deliver on them when the
customer isn’t, ultimately, ourselves.

For the flip side, consider Fowler’s anecdote on pages 68-69, which
ends “Even if you know exactly what is going on in your system,
measure performance, don’t speculate. You’ll learn something, and
nine times out of ten it won’t be that you were right.” The Unix guy
in me wants to respond “Well, duh!”. In my tribe, profiling
before you speculate is DNA; we have a strong tradition of
this that goes back to the 1970s. From the point of view of any old
Unix hand, the fact that Fowler thought he had to write this down is a
sign of severe naivete in either Fowler or his readership or both.
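
For anyone who wants it spelled out, “measure, don’t speculate” is nearly a
one-liner in my world. A sketch using Python’s stock profiler, where
workload() stands in for whatever you were about to optimize on a hunch:

    import cProfile
    import pstats

    def workload():
        # Stand-in for the code you suspect is slow.
        return sum(i * i for i in range(1000000))

    # Measure first; print the ten most expensive calls by cumulative time.
    cProfile.run("workload()", "profile.out")
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)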

In reading Refactoring, I several times had the
experience of thinking “What!?! That’s obvious!” closely followed
by “But Fowler explains it better than Unix traditions do…” This may
be because he relies less on the very rich shared explanatory context
that Unix provides.

How deep do the similarities run? Let’s take a look at what the
Agile Manifesto says:

Individuals and interactions over processes and tools. Yeah,
that sounds like us, all right. Open-source developers will toss out
a process that isn’t working in a nanosecond, and frequently do, and take
gleeful delight in doing so. In fact, the reaction against heavyweight
process has been a key part of our self-identification as hackers for
at least the last quarter century, if not longer.

Working software over comprehensive documentation. That’s
us, too. In fact, the radical hacker position is that source code of
a working system is its documentation. We, more than any
other culture of software engineering, emphasize program source code as
human-to-human communication that is expected to bind together
communities of cooperation and understanding distributed through time
and space. In this, too, we build on and amplify Unix tradition.

Customer collaboration over contract negotiation. In the
open-source world, the line between “developer” and “customer” blurs
and often disappears. Non-technical end users are represented by
developers who are proxies for their interests—as when, for
example, companies that run large websites second developers to
work on Apache Software Foundation projects.

Responding to change over following a plan. Absolutely.
Our whole development style encourages this. It’s fairly unusual for
any of our projects to have any plan more elaborate than “fix
the current bugs and chase the next shiny thing we see”.

With these as main points, it’s hardly surprising that so many of
the Principles
behind the Agile Manifesto
read like Unix-tradition and hacker
gospel. “Deliver working software frequently, from a couple of weeks
to a couple of months, with a preference to the shorter timescale.”
Well, yeah—we pioneered this. Or “Simplicity—the art of
maximizing the amount of work not done—is essential.” That’s
Unix-tradition holy writ, there. Or “The best architectures,
requirements, and designs emerge from self-organizing teams.”

This is stone-obvious stuff to any hacker, and exactly the sort of
subversive thinking that most panics managers attached to big plans,
big budgets, big up-front design, and big rigid command-and-control
structures. Which may, in fact, be a key part of its appeal to
hackers and agile developers—because at least one thing that points
agile-movement and open-source people in the same direction is a drive
to take control of our art back from the suits and get out from under
big dumb management.

The most important difference I see between the hackers and the
agile-movement crowd is this: the hackers are the people who never
surrendered to big dumb management — they either bailed out of the
system or forted up in academia or industrial R&D labs or
technical-specialty areas where pointy-haired bosses weren’t permitted
to do as much damage. The agile crowd, on the other hand, seems to be
composed largely of people who were swallowed into the belly of the
beast (waterfall-model projects, Windows, the entire conventional
corporate-development hell so vividly described in Edward Yourdon’s
books) and have been smart enough not just to claw their way out but
to formulate an ideology to justify not getting sucked back in.

Both groups are in revolt against the same set of organizational
assumptions. And both are winning because those assumptions are
obsolete, yesterday’s adaptations to a world of expensive machines and
expensive communications. But software development doesn’t need big
concentrations of capital and resources anymore, and doesn’t need the
control structures and hierarchies and secrecy and elaborate rituals
that go with managing big capital concentrations either. In fact, in
a world of rapid change, these things are nothing but a drag. Thus
agile techniques. Thus, open source. Converging paths to the same
destination, which is not just software that doesn’t suck but a
software-development process that doesn’t suck.

When I think about how the tribal wisdom of the hackers and the
sharp cut-the-bullshit insights of the agile movement seem to be
coming together, my mind keeps circling back to Phil Greenspun’s brief
but trenchant essay Redefining
Professionalism for Software Engineers. Greenspun proposes,
provocatively but I think correctly, that the shift towards
open-source development is a key part of the transformation of
software engineering into a mature profession, with the dedication to
excellence and ethos of service that accompanies professionalism. I
have elsewhere suggested that we are seeing a close historical analog
of the transition from alchemy to chemistry. Secrets leak out, but
skill sustains; the necessity to stop relying on craft secrecy is one
of the crises that occupational groups normally face as they attain
professional standing.

I’m beginning to think that from the wreckage of the software
industry big dumb management made, I can see the outline of a mature,
humane discipline of software engineering emerging — and
that it will be in large part a fusion of the responsiveness and
customer focus of the agile movement with the wisdom and groundedness
of the Unix tradition, expressed in open source.


May 13

A Taxonomy of Cognitive Stress

I have been thinking about UI design lately. With some help from my
friend Rob Landley, I’ve come up with a classification schema for the
levels at which users are willing to invest effort to build competence
with an interface.

The base assumption is that for any given user there is a maximum
cognitive load that user is willing to accept to use an
interface. I think that there are levels, analogous to Piagetian
developmental thresholds and possibly related to them, in the
trajectory of learning to use software interfaces.

Level 0: I’ll only push one button.

Level 1: I’ll push a sequence of buttons, as long as they’re all visible
and I don’t have to remember anything between presses. These people
can do checklists.

Level 2: I’m willing to push a sequence of buttons in which later ones may
not be visible until earlier ones have been pressed. These people
will follow pull-down menus; it’s OK for the display to change as long
as they can memorize the steps.

Level 3: I’m willing to use folders if they never change while I’m not looking.
There can be hidden unchanging state, but nothing must ever
happen out of sight. These people can handle an incremental replace
with confirmation. They can use macros, but have no capability to
cope with surprises other than by yelling for help.

Level 4: I’m willing to use metaphors to describe magic actions. A folder
can be described by “These are all my local machines” or “these
are all my print jobs” and is allowed to change out of sight in an
unsurprising way. These people can handle global replace, but must
examine the result to maintain confidence. These people will begin
customizing their environment.

Level 5: I’m willing to use categories (generalize about nouns). I’m willing
to recognize that all .doc files are alike, or all .jpg files are
alike, and I have confidence there are sets of actions I can apply
to a file I have never seen that will work because I know its type.
(Late in this level knowledge begins to become articulate; these
people are willing to give simple instructions over the phone or
by email.)

Level 6: I’m willing to unpack metaphors into procedural steps. People at
this level begin to be able to cope with surprises when the
metaphor breaks, because they have a representation of process.
People at this level are ready to cope with the fact that HTML
documents are made up of tags, and more generally with
simple document markup.

Level 7: I’m willing to move between different representations of
a document or piece of data. People at this level know that
any one view of the data is not the same as the data, and lossless
transformations no longer scare them. Multiple representations
become more useful than confusing. At this level the idea of
structural rather than presentation markup begins to make sense.

Level 8: I’m willing to package simple procedures I already understand.
These people are willing to record a sequence of actions which
they understand into a macro, as long as no decisions (conditionals)
are involved. They begin to get comfortable with report generators.
At advanced level 8 they may start to be willing to deal with
simple SQL.

Level 9: I am willing to package procedures that make decisions, as long
as I already understand them. At this level, people begin to cope
with conditionals and loops, and also to deal with the idea of
programming languages.

Level 10: I am willing to problem-solve at the procedural level, writing
programs for tasks I don’t completely understand before
developing them.

I’m thinking this scale might be useful in classifying interfaces and
developing guidelines for not exceeding the pain threshold of an
audience if we have some model of what their notion of acceptable
cognitive load is.
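
The schema is simple enough to mechanize. Here is a throwaway sketch of
what such an audit might look like; the feature names and level
assignments are mine, strictly for illustration:

    # Map interface features to the minimum level needed to use them,
    # then flag anything above the target audience's pain threshold.
    features = {
        "print one copy": 0,       # one button
        "pull-down menus": 2,      # sequenced, partly hidden buttons
        "global replace": 4,       # metaphor for a magic action
        "record a macro": 8,       # packaging a known procedure
    }

    def audit(features, audience_level):
        """Return the features that exceed the audience's cognitive budget."""
        return [name for name, level in features.items()
                if level > audience_level]

    print(audit(features, audience_level=3))
    # -> ['global replace', 'record a macro']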

(This is a spinoff from my book-in-progress, “The Art of Unix
Programming”, but I don’t plan to put it in the book.)

Comments, reactions, and refinements welcome.

Blogspot comments

Sep 17

Living With Microsoft

In today’s episode of the Microsoft follies, we learned that
Media Player 9
is un-uninstallable
. Deliberately.

A Microsoft spokesthing confirmed that Media Player 9 is so deeply
integrated into the operating system that it cannot be removed without
doing a `system restore’. Which, incidentally, will wipe out your
Office installation.

It’s at times like this that, contemplating Microsoft users, one
feels as though one is wandering among people lashing themselves with
stinging nettles until blood runs off them in rivulets. One wants to
know why they don’t just stop. One is told “But it’s the standard!”

One shakes one’s head bemusedly.

They pay heavily for the privilege of lashing themselves, too.
Except for those blessed, blissful occasions on which they pay still
more, grease themselves, bend over, and prepare to be buggered by a
chainsaw. That’s called a “System Upgrade”.

One contemplates the uptime figures on one’s Linux box and
feels — admit it! — a bit smug.


Jul 29

Right back at ya, Captain

Last Saturday morning in San Diego I had breakfast with Steven den
Beste, the redoubtable captain of U.S.S. Clueless. One of the
side-effects of that meeting was a long critique by Steve
of open-source development. Herewith my response.

Steve and I agree on the scaling problem that has pushed software
development efforts to the ragged edge of what is sustainable even by
corporations with lots of money. Software project sizes are roughly
doubling every eighteen months, and for reasons Steve alluded to the
expected bug count per thousand lines is actually rising.

My assertion is that software development has reached a scale at
which (a) even large corporations can often no longer afford to field
enough developers to be effective at today’s project scales, and (b)
traditional methods of software quality assurance (ranging from formal
methods to internal walkthroughs) are no longer effective. The only
development organizations that seem to thrive on today’s complexity
regime are open-source teams.

Note that I am not claiming that open source is a silver bullet for
the software-complexity problem. There are no silver bullets, no
permanent solutions. What I am claiming is that at the
leading edge of large-scale software, closed-source development
doesn’t work any more. The future belongs to open source plus
whatever other practices and technologies we learn to use with
it to develop at ever-higher scales of complexity.

Steve’s analysis of the open-source phenomenon is very intelligent,
but doesn’t quite understand either the mode of organization, the
associated technology, or the factional politics within the movement.
Diagnostic of the slight disconnect is when he writes “For [the
zealots], the only true “Open Source” is governed by the strong form
of the GPL, and all other forms and licenses are harmful dilution of
the concept.” In fact, the people he’s talking about reject the term
“open source” entirely and insist on the ideologically-loaded term
“free software”.

A more serious error is when he writes “It is plausible that an OSS
project would require each participant to sign an NDA before being
given access to the source.” It is not plausible. The licenses
and community values of the open-source community would not permit this.
His two bullet points characterizing open source are missing its most
important characteristic: the entire practice is designed to facilitate
scrutiny by people with no institutional or contract relationship to the core
development team. The astringent effect of peer review by people who
have nothing to lose by reporting bugs is precisely the
point of the whole game.

Steve doesn’t understand the importance or the power of this effect. This
slightly skews his whole essay; much of it is talking past what open-source
people do, rather than addressing us. He’s also unaware of a lot of the
real-world evidence for the success of the method. Some of the things he
thinks are technologically or economically impossible are actually being
done, routinely.

He’s correct when he says that most contributors are self-selected and
self-motivated. He overestimates the cost of training newbies, though. They
self-train; the first time a core developer hears from a newbie
is typically when the newbie sends a patch — self-evident proof that the newbie
has already acquired a critical level of knowledge about the
software. The “sink or swim” method turns out to work, and work well.

It’s incorrect to imply, as he does, that open-source development
is unsustainable because the people doing it are flaky amateurs.
Steve hasn’t absorbed the implications of the Boston Consulting
Group study that shows that about 40% of contributors to core projects
are professionals getting paid for working on open source by patrons
who need to use the results. In fact, what the open-source community
is evolving into is something very like a huge machine for bringing
newbies into apprenticeship contact with experienced developers and
professionalizing both groups.

He also writes “OSS by its nature tends to be reactive rather than
predictive. It doesn’t look into the future, try to predict a problem
which doesn’t exist now but will exist then, and be ready with a
solution. Rather, it tends to see problems that exist now and work on
solutions for them.” This is false — or, at any rate, no more true
than it is for closed-source development.

The open-source community built the Web and the Internet before it
had acquired a name for itself and full consciousness of its own
practices. Today, the cutting-edge work in operating systems,
languages, desktop user interfaces, relational databases and many
other areas is being done either within the open-source community or
in cooperation with it by academics. These prodigious efforts of
imagination dwarf any “prediction” produced by closed-source software
development in the last two decades.

Steve’s “open source is reactive” claim strikes me as ironically
funny, because I can remember when the standard knock on my crowd was
that we’re great at innovation but can’t actually field product. How
quickly they forget…

He’s right enough about the difficulty of planning and high cost
of face-to-face meetings, though. These are real problems. It’s
a testimony to the power of our practices that we manage to ship large
volumes of high-quality software despite these obstacles.

What Steve called “player-killer” tactics have been tried — there
was a famous incident a few years back in which a TCP-wrappers
distribution was Trojaned. The crack was detected and the community
warned within hours. The black hats don’t seem to bother trying this
any more; our reaction time is too fast for that game to be very
rewarding. The technical design of Linux helps in ways that
I won’t go into here — suffice it to say that it’s intrinsically
much harder to get a Trojan to do anything interesting than it
is under Windows or other single-user operating systems.

So far, the supply of open-source developers seems to be pretty
elastic — we’re not limited much by lacking bodies. Other factors
loom much larger; patents, the DMCA, intrinsically hard technical
problems. I don’t understand why this is as well as I’d like to, but
the facts are undeniable; the community is ten times the size my
wildest high-end scenarios predicted a decade ago and seems to be
growing faster as it gets larger.

Steve’s whole argument that open source can’t win in embedded
systems is very curious, since it predicts exactly the opposite of
what is actually happening out there. Linux is taking over in
embedded systems — in fact, many observers would say it has already
won that space. If Steve had worked in the field within the last
three years he would probably know this.

Here are some data about the demand; the only non-general-purpose
open-source software magazine in existence is the Linux Embedded
Systems Journal. Open-source embedded developers like Monta Vista
Software are bucking the recession by growing like crazy. The first
cell-phone prototype running entirely open-source software just
entered beta testing.

I was in California to meet Steve partly because Real Networks
wanted me to be on stage when they announced the open-sourcing of
their RTSP engine. Their CEO, Rob Glaser, was quite frank about the
immediate business reasons: they needed to get ports to forty
different Nokia cellphones and just couldn’t figure out how to muster
the resources for that short of inviting every stakeholder on the
planet to hack the problem. Scaling bites. Hard.

In fact, some of the very characteristics that Steve thinks make
embedded systems like cellphones safe for closed development seem to
be the factors that are driving increased open-sourcing. The close
tie to hardware actually decreases the value of secrecy,
because it means the software is typically not easily re-usable by
hardware competitors. Thus open sourcing is often a great way to
recruit help from customer engineers without a real downside risk of
plagiarism by competitors.

In fact, it’s an open secret in the industry that the most
important reason most closed-source embedded and driver software
remains closed is not nerves about plagiarism but fear of patent
audits on the source code. Graphics-card manufacturers, in
particular, routinely swipe patented techniques from their competitors
and bury them in binaries. (This is generally believed to be the
reason nVidia’s drivers aren’t open.)

Another trend that’s driving Linux and open-sourcing in embedded
stuff is the shift from specialty embedded 8-bit processors to 32-bit
chips with general-purpose architectures. Turns out the development
costs for getting stuff to run on the 8-bit chips are sickeningly high
and rising — partly because the few wizards who can do good work on
that hardware are expensive. The incremental cost for
smarter hardware has dropped a lot; it’s now cheaper to embed
general-purpose chips running Linux because it means you have a
larger, less expensive talent pool that can program them. Also,
when your developers aren’t fighting hardware limits as hard,
you get better time to market (which, as Steve observes, is
critical).

Steve is right about the comparative difficulty of applying
open-source methods to vertical applications. But the difficulty is
only comparative; it’s happening anyway. The metalab archive carries
a point-of-sale system for pizza parlors. I know of another case in
which a Canadian auto dealership built specialized accounting software
for their business and open-sourced it. The reasons? Same as usual:
they wanted to lay off as much as possible of the development and
maintenance cost on their competitors.

This is the same co-opetition logic that makes the Apache Software
Foundation work — it’s just as powerful for vertical apps, though
less obviously so. Each sponsoring company sees a higher payoff from
having the software at a small fraction of the manpower cost for a
complete in-house development. The method spreads risk in a way
beneficial to all parties, too, because the ability of separate
companies to sustain development tends to be uncorrelated — unless
they all sink, the project endures.

The way to solve the problem of not exposing your business logic to
competitors is to separate your app into an open-source engine and a
bunch of declarative business-rule schemas that you keep secret.
Databases work this way, and websites (the web pages and CGIs are the
schema). Many vertical apps can be partitioned this way too — in
fact, for things like tax-preparation software they almost have to be,
because the complexity overhead of hacking executable code every time
the rules change is too high.
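
Here is a sketch of that partitioning; the file name, format, and rules
are all invented for illustration. The engine could be published tomorrow
without revealing anything, because the business knowledge lives entirely
in a data file you never ship:

    # Open-source engine: completely generic, no business logic baked in.
    import csv

    def apply_rules(amount, rules_path):
        """Apply the first matching rate rule to a transaction amount."""
        with open(rules_path) as f:
            for rule in csv.DictReader(f):  # columns: min, max, rate
                if float(rule["min"]) <= amount < float(rule["max"]):
                    return amount * float(rule["rate"])
        raise ValueError("no rule matches")

    # rules.csv is the proprietary part -- kept private, and trivially
    # editable when the rules (say, tax brackets) change:
    #
    #   min,max,rate
    #   0,10000,0.10
    #   10000,99999999,0.22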

Steve thinks the differences between Apache and Mozilla are bigger
than they are. In fact, the core groups of both projects are
full-time pros being funded by large users of the software.

So, let’s address Steve’s objections point by point:

For embedded software, OSS has the following problems:

  • It can’t be scheduled; timely delivery can’t be relied upon.

    Timely delivery can’t be relied on for any software; see
    DeMarco and Lister’s excellent book Peopleware: Productive
    Projects and Teams
    on the delusion of deadlines, especially
    the empirical evidence that the “wake me up when it’s done” strategy
    of not setting them actually gets your project done faster (also the
    implication of a recent Harvard Business School study of software
    project outcomes).

    Open source is at least not noticeably worse than closed-source on this
    axis. Arguably it’s better, because the rapid release cycles allow users
    to pick up on project results as soon as they’re good enough.

  • Debugging requires access to custom hardware which usually
    can’t easily be accessed across the net.

    There aren’t good solutions to this problem yet, but the increasing
    use of “overpowered” 32-bit processors using standard busses is
    tending to reduce it in scope. The development tools and interface
    hardware used in embedded stuff are rapidly getting more generic and closer
    to what’s used in general-purpose computers.

  • Active participation even for junior people requires substantial
    amounts of project-specific knowledge which isn’t easily acquired,
    especially remotely.

    This one puzzles me, because I think Steve ought to be right about
    it — but I’m not hearing the kinds of noises that I’d hear if it were
    slowing down the move to Linux and open source significantly.

    At least part of the answer is that embedded-systems work is
    getting de-skilled in a particular sense — more of it’s being done by
    application specialists who are training up to the required level of
    programming, rather than programmers who have acquired expensive
    application-specific knowledge.

  • A great deal of proprietary information is usually involved in
    the process, and if that’s released the company can be seriously
    hurt.

    It’s a question of tradeoffs. As RealNetworks found out when
    costing its Nokia contract, the choice is increasingly between giving
    up control of some of your proprietary IP and being too resource-bound
    to ship at all.

    There is no market for secrecy. There’s a market for product. If
    you can’t ship product, or your customers aren’t confident that you
    can maintain it after shipping, all that proprietary IP amounts to is
    a millstone around your neck.

    There will be more stories like RTSP in the future. Count on it.
    In fact, the day will come when most of your contract partners simply
    won’t accept the business risks of having someone else hold
    proprietary rights on the embedded software they use.

  • It’s nearly impossible to do embedded software without
    common impromptu face-to-face meetings with co-workers, either to ask
    questions or to brainstorm. Doing this electronically is sufficiently
    different as to not be practical.

    Yeah. They used to think that about operating systems, too. Obviously
    the Linux kernel is impossible, and therefore doesn’t exist.

    (At which point Oolon Colluphid disappeared in a puff of logic.)

For vertical apps, the objections are:

  • Security, security, security. You want me to trust my
    billing system to code written by anyone who happens to come along and
    volunteer to work on it, without any kind of check of credentials or
    checks on trustworthiness?

    One of the lessons the business world has been absorbing is that
    open-source projects are dramatically more secure than their
    closed-source competition — anybody who compares the Bugtraq records
    on Apache vs. IIS defacements, or Linux vs. Windows remote exploits,
    will notice that real fast.

    It’s not hard to understand why this is — I’ve found that even
    corporate executives grok the theory pretty quickly. I won’t do the whole
    argument here, but this article on Kerckhoff’s Law
    holds the crucial clue. When you rely on the obscurity of source
    code for security, it means that the bad guys find the bugs faster than
    you can plug them — there are more of them, and they have entropy on
    their side. Open source evens the odds for the good guys.

  • Recruitment: for most of the kind of people involved in
    OSS, vertical apps are boring. (Unless they want to figure out how to
    steal from it.)

    This remains a problem. On the other hand, open source makes it
    easier to train domain specialists to be good enough programmers to
    get the job done. It’s easier for physicists to learn to hack than
    it is for hackers to learn physics.

  • It takes a lot of knowledge of the specific aspects of the
    problem to make a significant contribution, which means things like
    observing the actual process of guests checking in at the front desk
    of the hotel.

    This just reinforces the tendency for vertical-app developers to be
    obsessives about something else who learn to program, rather than obsessives
    about programming who learn something else.

    Professional programmers tend to bridle at this thought. Well, better
    learn to live with it. As software becomes more pervasive, the amount
    of it done by application-specialist “amateurs” is going to increase.

  • The industry is full of horror stories of vertical apps
    which ran badly over budget and over schedule; the idea scares the
    hell out of business people. They’re unlikely to be very enthused by
    the use of a process which by its nature *cannot* be reliably
    scheduled. (Remember that Mozilla ran two years long.)

    Schedules — and the belief that deadlines make software happen
    faster — are a delusion in the mind of management, one not supported
    by the actual evidence about project outcomes. This delusion is
    so entrenched that managers fail to interpret the 70% rate of
    project failures correctly. It’s as if people were so determined
    to believe the Earth is flat that they ignore what their eyes tell
    them when ships sink over the horizon.

    No software larger than toy programs can be scheduled.
    Tactics aimed at doing so normally have the actual effect of
    increasing the time to market. `Aggressive’ schedules
    effectively guarantee failure. The sooner we learn these objective
    truths, and that the illusion of control that schedules give is not
    worth the real costs, the sooner rates of outright project failure
    will dip below 70%.

    Go read Peopleware. Now.

For short life apps:

  • Schedule is everything. If you’re six months late, you’re dead.

    See above. There are reasons open sourcing is less applicable to short-life
    applications, but this turns out not to be one of them.

  • Secrecy is everything else. If you’re on time but your
    competitor knows what you’re doing a year ahead, he’ll wipe you
    out.

    This argument has more force for short-life apps than for Steve’s other
    categories, but remember that increasingly the alternative to open source
    is not being able to ship at all. Your competitor is in the same boat
    you are.

  • How do you make money selling what anyone can get for free
    from any developer? If your product was developed out in the open, who
    exactly buys it afterwards?

    Steve has a stronger point here. It’s one that people used to
    think applied to almost all software, but which turns out to be mainly
    a problem for short-life apps. Actually the distinguishing
    characteristic isn’t expected lifetime per se, but something
    correlated with it — whether the product needs continued downstream
    work (maintenance and upgrades) or not.

    Long-life, high-maintenance apps create niches for service businesses.
    That’s the main way you make money in an open-source world. It’s
    harder to make that work with a short-life app. Sometimes it’s
    impossible. Life is hard.

For long life apps:

  • Will the participants be willing to work on what our
    marketing analysis says we need, or will they insist that they know
    what is required and try to add that instead? We don’t need feature
    creep, or people trying to change the direction we’re moving.

    In open-source projects, the function of “marketing analysis” tends to
    be handled by direct interaction with the user community. We find we
    do better work without a bunch of marketroids getting between us and
    our customers.

  • There is a major learning curve involved in making a
    reasonable contribution to these kinds of programs; you don’t learn
    how a circuit board router works in a few days of study. In most cases
    you have to be conversant with the way that the package’s customers do
    what they do, and most programmers don’t know these things and can’t
    easily learn them.

    See my previous remarks about application specialists and the
    democratization of programming. And every time you’re tempted to
    say “But they couldn’t possibly get away with that in application
    area X” remember that they once said that about all the areas where
    open source now dominates.

    It’s just not smart to bet against the hackers. Not smart at all.
    We generally end up having the last laugh on the naysayers. As recently
    as 1990, “serious analysts” laughed at the idea of ubiquitous Internet.
    As late as 1996, they said Unix was dead. We showed them — and there
    are more of us now, with better tools, than ever.

Steve is right that one of the most effective ways to head off bugs
is to have a core group of professional engineers do a clean design.
Where he’s mistaken is in believing this truth has anything to tell
us about open vs. closed development. Us open-source guys, it turns
out, are really good at clean design.

This has something to do with the fact that, as individuals, we tend to
be exceptionally capable and self-motivated — an elite selected by
dedication to the art of programming. It has more to do with not
having managers and marketroids pissing in the soup constantly,
telling us what tools to use, imposing insane deadlines, demanding
endless checklist features that don’t actually benefit anyone.

But mostly it has to do with the ruthless, invaluable pressure of
peer review — the knowledge that every design decision we make will
be examined by thousands of people who may well be smarter than we
are, and if we fail the test our effort will be pitilessly
discarded. In that kind of environment, you get good or you get gone.

Blogspot comments

Jul 21

Run Silent, Go Feep

Warning: The following blog entry provides way more than the
recommended daily allowance of geeking. If you don’t have a serious
propeller-head streak, surf outta here now before it’s too late.

I’m mainly a software guy, but occasionally I build PCs for fun.
Design them, rather; the further away I stay from actual hardware the
happier it usually is for everybody. Last year, I designed an Ultimate
Linux Box; the good folks at Los Alamos Computers built it and
will cheerfully sell you one. It was a successful design in most
respects, but unpleasantly noisy. This year, as we do the 2002
refresh, I’m going to be working hard at getting the most noise
reduction I can without sacrificing performance. I’m experimenting
now with ways and means.

So I spent a couple of hours today disassembling the case of my
wife Cathy’s machine (minx) and lining three sides of it
with Dynamat, a kind of stick-on
rubber acoustic insulation often used in car-stereo installations.
The malevolent god that normally attends me when I futz with hardware
must have been off tormenting some other hapless ex-mathematician; no
hardware was destroyed, no blood was shed, and I’m typing this on the
selfsame reassembled machine.

Minx is a pretty generic mid-tower system made with cheap Taiwanese
parts in mid-2002 by my local hole-in-the-wall computer shop: I
spent only $150 to have it built, recycling a few parts from an only
slightly older machine. It has a 300W power supply, Athlon 950 mobo
with stock CPU cooler fan, one 80mm case fan, 7200RPM ATA drive. I
succeeded in lining both 14″-square side panels and the case top; this
used up the four-square-foot piece I bought so efficiently that there was
only about ten square inches in two small pieces left over. I used those
to cover the only exposed solid section of the back panel.

If you want to try this yourself, the tools I found useful were a
utility knife and a metal footrule, the latter useful both for
measuring to fit and as a cutting guide.

I took before-and-after measurements with a dB meter: dBA scale,
measurements made with the probe one inch above the center-rear edge
of the case.

Machine off: 44dBA
Machine on, before: 63dBA
Machine on, after: 61dBA

In other words, only a 2dBA drop — marginal when you consider
that the meter is only rated 1.5dB accurate! But it’s worth bearing in
mind that the scale is logarithmic; 2dBA is more than it looks like.
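
For the skeptical, here is the arithmetic, as a quick sketch using the
readings above:

    # Sound power scales as 10^(dB/10), so a 2dBA drop is a real cut.
    before, after = 63.0, 61.0
    ratio = 10 ** ((after - before) / 10.0)           # ~0.63
    print("power reduction: %.0f%%" % ((1 - ratio) * 100))   # ~37%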

I have studio-engineer ears and sensitive musician fingers. I took
before-and-after measurements with those, too, listening to the sound
timbre and feeling for case resonance.

My ears tell me that the box is only slightly quieter, but the noise
spectrum has changed. The proportion of high-frequency noise has
dropped; more of what I’m hearing is white noise due to turbulent
airflow, less is bearing noise. This is a good change even if total
emission hasn’t dropped much.

My fingers tell me that the amount of case resonance has dropped quite
dramatically, especially on the side panels.

Was it worth doing? I am not sure. There would probably be more
benefit on a system emitting more bearing noise from 10K or 15Krpm
drives. On this one, I think the power supply is emitting most of
the noise, and acoustic lining can’t do much against that.

In fact, my clearest take-away from this is that the big gains in
noise reduction on conventional PCs are likely to come from
obsessing about power-supply engineering — including details like
whether the fan blows through a slotted grille or a cutout with a
wire-basket finger guard (the latter will generate less turbulence

I’d like to retrofit minx with a Papst 12dBA muffin fan and see if
that makes a measurable difference. But the best change would
probably be one of the Enhance 300W PSUs that are supposed to emit
only 26dBA. I’ll bet that would
win big.

Blogspot comments

Jun 19

Beating software version fatigue

In his latest
Tech Central Station column, Glenn Reynolds complains
of `version fatigue’, his accumulating angst over the fact that since the
mid-1980s he’s had to migrate through three word processors and several
different versions of Windows.

I can’t fix the sad fact that every new VCR and remote control you get
has a different control layout. But if we’re talking software, baby, I have
got your solution.

I have been using the same text editor since 1982. I have been using the
same command-line shell since 1985, and the same operating system since 1993.
But that last date is actually misleading, because I still get use out of
programs I wrote for the previous dialect of my OS as far back as 1982,
without ever having had to alter a line.

The last time I had to learn a new feature set for any of the tools
I regularly used was when I decided to change window systems in 1997,
and that was not a vendor-forced upgrade. Yes, that’s right; it means
I’ve been getting mileage out of essentially the same user interface
for five straight years. Half a decade.

Does this mean I’m using software tools that were feature-frozen when
dinosaurs walked the earth? No, actually, it doesn’t. The text editor,
which is what I spend my screen time interacting with, has grown tremendously
in capability over the twenty years I’ve been using it. The shell I use
has a lot of convenience features it didn’t in 1985, but I’ve only had
to learn them as I chose.

I don’t have a version-fatigue problem, and never have. I get to
use cutting-edge software tools that probably exceed in capability
anything you are directly familiar with. And I have every confidence,
based on my last twenty years of experience, that my software will
continue both to offer me the innovative leading edge and to remain
feature-stable for the next twenty years if I so choose.

How do I achieve this best of both worlds? One word: Unix.

I’m a Unix guy. You may have heard that I have something to do
with this Linux thing, and Linux is indeed what I use today. But
Linux is only the most recent phase of a continuous engineering
tradition that goes back to 1969. In that world, we don’t have
the kind of disruptive feature churn that forces people to upgrade
to incompatible operating systems every 2.5 years. Our software
lifetimes are measured in decades. And our applications,
like the Emacs text editor I use, frequently outlast the version
of Unix they were born under.

There are a couple of intertwined reasons for this. One is that
we tend to get the technology decisions right the first time — Unix
is, as C. A. R. Hoare once said of Algol, “a vast improvement over
most of its successors”. Unix people confronted with Windows for
the first time tend to react with slack-jawed shock that any product
so successful could be such a complete design disaster.

Perhaps more importantly, Unix/Linux people are not stuck with a
business model that requires planned obsolescence in order to generate
revenue. Also, our engineering tradition puts a high value on open
standards. So our software tends to be forward-compatible.

As an example: about a year ago I changed file-system formats from
ext2 to ext3. In the Windows world, I’d have had to back up all my
files, reinstall the OS, restore my files, and then spend a week
hand-fixing bits of my system configuration that weren’t captured in
the backups. Instead, I ran one conversion utility. Once.
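
(For anyone who wants to follow suit: the utility in question is
presumably tune2fs, whose -j option adds an ext3 journal to an existing
ext2 filesystem in place, data intact.)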

Most of the consumer-level problems with computer software —
crashes, bad design, version fatigue due to the perpetual upgrade
treadmill — are not inherent in the technology. They are, rather,
consequences of user-hostile business models. Microsoft, and
companies like them, have no incentive to solve the problems
of crashes, poor security, and version fatigue. They like
the perpetual upgrade treadmill. It’s how they make money.

Want to beat software version fatigue? It’s easy, Glenn. Take
control; dump the closed-source monopolists; get off the treadmill.
OpenOffice will let you keep your MS-Word documents and your Excel
spreadsheets and PowerPoint presentations. Join the Linux revolution,
and never see a Blue Screen of Death again.

UPDATE: A reader complains that Linux is difficult to install.
Answer: Get thee to the Linux user group near you, who will be more
than happy to help you get liberated. Or get thee to Wal-Mart, which
is now selling cheap machines with Lindows, a Linux variant tuned to
look like Windows, for $299.


May 21

Closed Source — Who Dares Call It Treason?

The cat is out of the bag. During testimony before a federal judge,
Microsoft executive Jim Allchin has
admitted that some code critical to the security of Microsoft products
is so flawed it could not be safely disclosed to other developers or
the public.

Allchin was arguing against efforts by nine states and the District of
Columbia to impose antitrust remedies that would require Microsoft to
disclose its code. He constructed dire scenarios of U.S. national
security and the war against terrorism being compromised if such
disclosure were required.

Now turn this around. Allchin has testified under oath in a Federal
court that software Microsoft knows to be fatally flawed is deployed
where it may cost American lives. We’d better hope that Allchin is
lying, invoking a “national security” threat he doesn’t actually
believe in to stave off a disclosure requirement. That would merely
be perjury, a familiar crime for Microsoft.

If Allchin is not committing perjury, matters are far worse — because
it means Microsoft has knowingly chosen to compromise national
security rather than alert users in the military to the danger its own
incompetence has created. Implied is that Microsoft has chosen not to
deploy a repaired version of the software before the tragedy Allchin
is predicting actually strikes. These acts would be willful
endangerment of our country’s front-line soldiers in wartime. That
is called treason, and carries the death penalty.

Perjury, or treason? Which is it, Mr. Allchin?

There is another message here: that security bugs, like cockroaches,
flourish in darkness. Experience shows that developers who know their
code will be open to third-party scrutiny program more carefully,
reducing the odds of security bugs. And had Microsoft’s source code
been exposed from the beginning, any vulnerabilities could have been
spotted and corrected before the software that they compromised became
so widely deployed that Allchin says they may now actually threaten
American lives.

Thus Mr. Allchin’s testimony is not merely a self-indictment of
Microsoft but of all non-open-source development for security-critical
software. As with many other issues, the legacy of 9/11 is to raise
the stakes and sharpen the questions. Dare we tolerate less than the
most effective software development practices when thousands more
lives might be at stake?

Closed source. Who dares call it treason?