Open Letter to Steve Lohr & John Markoff

You’ve described only symptoms in “Windows Is So Slow, but Why?”, not the underlying problem. Closed-source software development has a scaling limit, a maximum complexity above which it collapses under its own weight.

Microsoft hit this wall six years ago, arguably longer; it’s why they’ve had to cancel several strategic projects in favor of superficial patches on the same old codebase. But it’s not a Microsoft-specific problem, just one that’s hitting them the worst because they’re the largest closed-source developer in existence. Management changes won’t address it any more than reshuffling the deck chairs could have kept the Titanic from sinking.

Apple has been able to ship four new versions in the last five years because its OS core is open-source code. Linux, entirely open-source, has bucketed along even faster. Open source evades the scaling limit by decentralizing development, replacing top-heavy monoliths with loosely-coupled peer networks at both the level of the code itself and the organizations that produce it.

You finger backward compatibility as a millstone around Microsoft’s neck, but experience with Linux and other open-source operating systems suggests this is not the real problem. Over the same six-year period Linux has maintained backwards binary compatibility as good as (arguably better than) that of Windows without bloating.

Microsoft’s problems cannot be fixed — indeed, they are doomed to get progressively worse — as long as the company is stuck with a development model premised on centralization, hierarchical control, and secrecy. Open-source operating systems will continue to gain at Microsoft’s expense for many of the same reasons free markets outcompeted centrally-planned economies.

The interesting question is whether we will ever see a Microsoft equivalent of glasnost and perestroika.

Comments

  1. IIRC Apple is withdrawing from its OSS flirtation, and its Darwin code will no longer be publicly available. I believe this was driven by the marketing department/”strategists” attempting to ensure hackers could not port OS X to non-Apple machines. “About turn! CHARGE!!!”

    What’s a good word for this common human/social tendency? “Centralisation” does not quite capture the motives. Putinisation? Fascism? Sovietisation? Parasites’ need for hierarchy? “Clench”?

  2. Saltation says: “IIRC Apple is withdrawing from its OSS flirtation and its Darwin code no longer to be publicly available”

    Where did you hear this? The closest I’ve heard was that Apple were holding back on select pieces required for building a replacement OS X kernel – presumably some DRM-related silliness – but that a Darwin kernel build was still fully possible.

  3. Agreed.

    “Microsoft, O giant of desktops, where is the new desktop O.S.?” No answer. It seems like all recent news items relating to any new O.S. from Microsoft are stories of all the planned new features that won’t, after all, be part of their next major release.

    At this point I’m wondering why they don’t just scrap everything and start fresh, building on top of some flavor of *N?X. Microsoft has nothing to lose by doing so, and would maybe even gain some respect… somewhere… I imagine. Certainly they would gain some much needed flexibility and not have to do much work to get it. Someone big (?) at Microsoft is letting their emotional investment in products of the past ruin their present and their distant future.

    Everyone sees it coming, but I’m not bothered by it. I use Linux, and when Microsoft finally collapses under its own weight, I won’t be suffocated underneath it.

  4. Eric,

    Most of the important bits of OS X are closed source; yet they are still better engineered by far than the corresponding bits in Windows. In fact, the existence of OS X, its predecessor NeXTStep, BeOS, RISC OS, Amiga, QNX, etc. put the lie to the idea that development of a sophisticated, multitasking, multimedia desktop OS with a GUI hits a wall of complexity that can only be overcome with Open Source Pixie Dust(tm). Windows is the way it is due to an emphasis on marketing and empire-building rather than engineering, and this letter of yours actually gives Microsoft far more credit than they deserve, with its tacit assumption that Windows is the best the closed-source world can provide.

  5. Let’s see. NeXTStep, BeOS, and AmigaOS are all dead as the dodo despite repeated attempts to revive them on modern hardware. RISC OS was never a contender. QNX is on life support under pressure from Linux, surviving only in a fortified niche in real-time systems.

    And these are supposed to be evidence that closed-source development is still viable? It is to laugh. Why don’t you ring in the bad joke that is OS/2 while you’re at it?

    It’s not just the “sophisticated, multitasking, multimedia desktop OS” that has hit a complexity wall in closed source; it’s everything. ERPs and database-centered middleware are only two of the more obvious places this effect has been biting hard. And it was completely predictable from a simple combinatorial analysis of how bug rates scale as the size of a monolithic unit of code increases.
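
    To make the combinatorial point concrete, here is a toy sketch in Python (my illustration, with invented module counts, not a model of any real codebase): if any module in a monolith can touch any other, the number of places an interaction bug can hide grows quadratically, while the same code split into decoupled projects behind narrow interfaces grows far more slowly.

        # Toy model: count the module pairs that could interact, a rough
        # proxy for where integration bugs can hide. Numbers are invented.
        def pairs(n):
            return n * (n - 1) // 2

        monolith = pairs(400)                  # one unit of 400 modules
        modular = 20 * pairs(20) + pairs(20)   # 20 projects of 20 modules,
                                               # plus their 20 public interfaces
        print(monolith, modular)               # 79800 vs. 3990

    Same amount of code, a twentyfold difference in potential interaction sites; that quadratic term is the wall.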

    Open source isn’t magic pixie dust any more than free markets are. But F. A. Hayek showed in 1936 that planned economies are unsustainable, and since 1997 we’ve been learning that closed-source development is unsustainable for the same reason. The planning problem invariably outstrips the capacity of planners, so truly complex systems must decentralize and self-organize or else fail.

  6. NeXTStep, BeOS and AmigaOS being dead is irrelevant to the argument that highly complex software is possible in a closed-source environment. As products they failed; as projects they did not, and it could be argued that the problem was more one of marketing than development.

    I fail to see how open source development will allow a project to scale simply by its nature. No single project in the OSS world is on the same level of complexity as Windows; a Linux desktop is an amalgamation of independent projects, and on average the individual pieces do not interoperate to the level that Windows does. Any argument that open source development will solve complexity problems on this level is based on theory, not evidence.

    All that aside, it could be argued that the management and communication methods of open source software are the true success. OSS generally tends to avoid most of the reporting and coordination problems imposed in closed source development projects by forcing developers to handle coordination themselves instead of imposing a strict chain of command.

    I realize this is getting kind of hand-wavey, but I feel that the advantages realized by open source over closed source are due to a higher degree of developer involvement in project decisions. Many OSS projects, either by design or practice, have a small, relatively consistent set of developers who work on the code. They closely resemble a closed source development team with the exceptions that: A) There are few if any managers, and B) The source is available for everyone to see. I think both of these facets are important, but that the first contributes more to the success of OSS projects in handling complexity.

  7. Funny, I see similar behavior to what is found here in many “open source” projects.

    Microsoft doesn’t have a “scaling problem” (and please, fire any consultant whose first answer is “that won’t scale”); it has a problem with management that is stuck in “generating process as a substitute for ability”.

    This tends to not happen in publicly-visible “open source” projects because the developers are free to flee (or fork), but it does happen inside companies that are using a lot of “open source” in their codebase.

    Open Sourcing Windows won’t fix Microsoft’s problem (that would be a faerie-dust wish). Still, check this out:


    For what it’s worth, I did propose a way around this impasse a few years ago, before it was visible as an impasse, in an open letter to Bill & Steve: release the MS Win9x and MS WinNT 4.x source trees under an open source license, and let the greybeards play with them!

    Then see which team – the MS WinLongHorn team or the FOSS greybeards – came up with a working product first. With the rider that Microsoft could not use the FOSS greybeards’ code until the end of the competition, but afterwards, if the greybeards had won, Microsoft could use whatever had been developed. But they could not prevent anyone else from using the same source tree(s).

    Microsoft serves the McDonald’s “food” of the computer world. Convenient and, at first glance, quite cheap.

    The secret of Big Macs, and all the other food served at McDonald’s, is that they’re not very good, but every one is not very good in exactly the same way. If you’re willing to live with not-very-goodness, you can have a Big Mac with absolutely no chance of being surprised in the slightest.

    The other secret of Big Macs is that you can have an IQ that hovers somewhere between “idiot” and “moron” and still be able to produce Big Macs that are exactly as unsurprisingly bland as all the other Big Macs in the world. The process for making food at McDonald’s has been specified and documented to the Nth degree.

    For anyone who enjoys cooking their own food, McDonald’s is an affront to all that is good. And just like Windows, McDonald’s is not good for you.

  8. To be fair, NeXTStep isn’t dead, it just had a name change. It’s now known as “Apple Mac OS X” — all of the old NeXTStep/OPENStep APIs are the “Cocoa” APIs in OS X.

    Granted, NeXTStep would have died, had Apple not bought them, but it’s still very much alive…

  9. While Darwin may be open source, Apple did not fully embrace the bazaar model, and their development policies have steadily shifted away from the community-driven model. Robert Braun, an OpenDarwin developer, outlines Apple’s retreat from the open source model in his article “A Brief History of Apple’s Open Source Efforts”, posted at Daemon News: http://ezine.daemonnews.org/200602/apple.html

    In his opinion, the key problems with OpenDarwin were the lack of information regarding in-house development practices; the insufficient replacement for the proprietary, in-house build system (only two outside developers could ever successfully build the entire system); the inability of outside developers to commit changes to the HEAD branch; the removal of ‘live’ changes from the externally-viewable CVS repository (creating a lot of duplicated effort and code incompatible with the latest source tree); and the shift from a Darwin source tree to an OS X source tree.

  10. The scaling issue is definitely a big part of the problem. The fact that they started with DOS twenty years ago, and slowly patched the codebase until it was XP is the other side of the problem.

  11. >This tends to not happen in publicly-visible “open source” projects because the developers are free to flee (or fork), but it does happen inside companies that are using a lot of “open source” in their codebase.

    If there’s no right to fork, it isn’t open source. Don’t change the subject while pretending you’re sticking to the same one.

  12. >> The fact that they started with DOS twenty years ago, and slowly patched the codebase until it was XP is the other side of the problem.

    Give them some credit! NT is not a descendant of DOS; it is a ‘true’ 32-bit protected-mode operating system, based on VMS and some *minor* *nix concepts (do not ask me what they are, this is just what I have heard). Of course, NT is a direct descendant of OS/2 (I think), and OS/2 1.0 was released in 1987, so really Microsoft has just been patching the operating system for close to 19 years. If DOS was still the core of the Windows OS, Microsoft would have been patching the system for 25 years.

  13. WinXP is essentially a WinME GUI on top of a Win2K kernel. It’s not true that Microsoft slowly patched the DOS codebase until it became XP.

    In the 1980s, Dave Cutler worked at DEC on the Emerald project to port VMS to an Intel platform. When Emerald was cancelled, Cutler was hired by Microsoft to move VMS 4.x concepts into a new GUI OS, which became known as Windows NT (New Technology) and later evolved into products like Windows 2000, Windows XP, and Windows Server 2003.

    Windows NT is just VMS re-implemented.

    Eric, shall we review the OSI licenses to see which include a “right to fork” now?

  14. > Eric, shall we review the OSI licenses to see which include a “right to fork” now?

    The Open Source Definition was written to guarantee that right, by excluding licenses which disallow or restrict forking. So the correct answer is “All of them do”, because none of them prohibit redistribution of modified code. OSI’s calls have been disputed once or twice, but never over this issue.

  15. What about the ongoing issue of where you want to guarantee that something will stay binary-compatible across versions, and where you don’t?

    Greg K-H makes a good case for why the interface between a device driver and the rest of the kernel is a bad place to try to do that. And the kernel hackers don’t. You can write a userspace application for Linux using an old Unix book, but if you’re looking at drivers without a guide at least as new as the third edition of Linux Device Drivers, you’re in trouble.

    How much of the QA and testing burden of new MSFT Windows releases is made necessary by the need to support old driver ABIs and weird not-quite-working but still deployed 3rd party drivers?

  16. > How much of the QA and testing burden of new MSFT Windows releases is made necessary by the need to support old driver ABIs and weird not-quite-working but still deployed 3rd party drivers?

    Some of it, Don, to be sure. But not all, or even most. You cannot, for example, blame the failure to deploy WinFS, or to re-base their services on .NET, on it. Most of the promised-but-not-delivered features are well within the OS core. And their ongoing security nightmare has nothing at all to do with the binary-driver problem.

    But suppose you were right. We do OK at supporting old hardware in open-source-land, without it causing a massive development jam-up, because our drivers are open source; when the kernel ABI changes, they can be fixed. The closed-source assumption is precisely what makes “fix them all” an approach that won’t scale in Windows-land.
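
    To sketch why “fix them all” scales in open-source-land but not in Windows-land, here is a stylized toy in Python, with invented names rather than actual kernel code: when every driver’s source is in the tree, an interface change and the fixes to all of its callers can land together.

        # Illustrative only: an internal driver interface gains a remove()
        # hook, and every in-tree caller is updated in the same change.
        registered = {}

        def register_driver(name, probe, remove):   # v2: remove() now required
            registered[name] = (probe, remove)

        # In-tree drivers are fixed alongside the interface change:
        register_driver("fake_nic", probe=lambda: True, remove=lambda: None)

        # A closed-source binary driver still using the old two-argument
        # form can be fixed only by its vendor, so the platform would have
        # to keep that form working forever.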

    >NeXTStep, BeOS and AmigaOS being dead is irrelevant to the argument that highly complex software is possible in a closed-source environment. As products they failed; as projects they did not,

    More fully, my argument is that quality software development was at one time possible within the closed-source mode, but is no longer, as the SLOC sizes implied by current hardware and user demands have gone up. Thus the fact that BeOS and AmigaOS are dead is indeed relevant; they may have been successes in their time, but they just can’t cut it in today’s environment. We know this because of the failure of the die-hards who keep trying to revive these designs.

    Granted, NeXTStep is a more interesting case, because some of its tech got absorbed into the closed-source part of OS X. But that tech has only been viable as the frosting on an open-source cake. Which rather reinforces my original point…

  18. It doesn’t matter. None of this matters. Platforms don’t matter. It’s what’s on the platforms that matters.

    This is a lesson which I forgot when I went on a gaming sabbatical (coinciding with my exploration of Linux and OSS), but since I got back into the ol’ pastime it leapt back into my brain with the force of an epiphany. In fact, this is something which any gamer knows, though perhaps just implicitly. And any Sega fan, such as myself, has had their face rubbed in the fact to an extent which is painful. The story goes something like this:

    A Prelude to War

    When the Sega Genesis first came out in 1988, it faced quite an uphill battle against the entrenched NES, which had managed to become practically synonymous with the word “videogame” after its 1985 release. Indeed, to this day, many people say “nintendo” when they mean “videogame,” just as many people say “xerox” when they mean “photocopy.” The Genesis was certainly more powerful — its primary processor was 7 times as powerful as the NES’s — but power does not conjure good games out of thin air. And without good games, a console is no more than a paperweight. Sega’s previous console, the Master System, was 3 times as powerful as the NES, but after its 1986 release it sold only 13 million units to Nintendo’s 60 million, simply because it didn’t offer a compelling library of games. Sega learned from this mistake, if only once.

    When they launched the Genesis, courtship of third parties was intense. They were willing to offer developers better licensing terms than Nintendo, who was enjoying monopoly status at the time, and managed to do what, at the time, was the unthinkable: they fought Nintendo, and in the US market at least, they won. In large part, this was due to Sega taking a chance on an unknown startup that was desperate for a platform for their football game. Nintendo simply wouldn’t offer the little corporation terms it could survive on, and besides, the NES was ill suited to doing sports games justice. That little company was EA, and the game was Madden. Both became smashing successes.

    With the help of this and other games, including some in-house smash titles such as the Sonic the Hedgehog franchise, Sega exploded onto the scene to history-altering effect. To put it into perspective, the success Sega experienced would be like Apple gaining 50% market share upon the release of OS X. Even more mindblowing, this growth was coming at the *expense* of Nintendo’s installed base. By this I mean that old Nintendo users were abandoning the NES platform and buying Sega systems in droves. Though Sega’s hyper-clever marketing probably didn’t hurt (slogans such as “Sega does what Nintendon’t” still make the ears of any elder gamer perk up), it was the plethora of games that were only playable on the Genesis which produced this success.

    It’s On like Donkey Kong

    After three years of hemorrhaging market share, Nintendo fought back with the technically superior (save for processing speed) SNES in 1991. And while the SNES did absolutely everything correctly, and has rightfully earned its place of high regard in the annals of gaming, it completely and utterly failed to unseat the Genesis. In Japan its market share ended up exceeding Sega’s, but in the US it lagged, and Sega enjoyed reigning-champion status in other parts of the world.

    This was the dawn of the “console wars” as we know them today, and the 16-bit era is still regarded by some (likely through nostalgia-tinted glasses, but hey, we’re only human) as the halcyon era of gaming. For every top-notch exclusive game that the SNES had, the Genesis had one as well. And so long as the game libraries of both platforms looked equally compelling in the eyes of the consumer, the two were mostly locked in a dead heat. But time always marches on.

    A Taste of Things to Come

    It had been half a decade since a new system was released, and consumers were ready for the next generation. The arcades were taking business from the console market, offering an innovative and immersive gaming experience that the now-underpowered 16-bit consoles couldn’t match. (Incidentally, Sega has been and still is a leader in the arcade market.) The time was ripe for Something New — sadly, both Sega and Nintendo seemed to have forgotten the lessons they had learned from their battles with each other, a mistake which ultimately proved fatal to the former.

    It all started in 1988, the year of the Genesis’ release. At that time, games were provided on a solid-state medium known as a cartridge, which offered fast access as a benefit, but provided very limited capacity and cost quite a bit to manufacture. Nintendo had been looking for a way to address these shortcomings by moving to a cheap, high-capacity disk-based medium. However, Nintendo was not able to satisfactorily surmount the stability problems of magnetic media, nor the concomitant ease of piracy. But Sony had just the ticket, since they were working on a then-revolutionary technology which would allow them to store data on CDs, which were at the time restricted to just audio.

    So it was that Nintendo contracted Sony to develop a CD-based add-on system for them. And in 1991, they were expected to announce the new designs at the yearly CES expo — but when Nintendo president Yamauchi discovered that the contract with Sony would give the latter 25% of all profits off the system, he broke arrangements with them in a fury. Instead, Nintendo contracted with Philips to perform the same task, but with a contract that gave Nintendo full control of the system. It was this partnership that was announced at CES, much to Sony’s chagrin.

    Ultimately, the Philips peripheral never materialized. But Sony refused to throw out their work. They spent years retooling the foundation into a 32-bit console called the Playstation, and, determined to swallow Nintendo’s market share whole (hell hath no fury like a multi-billion-dollar Japanese corporation spurned), they aggressively pursued third-party developers and launched an ad campaign that was arguably more Sega than Sega in its edginess.

    But I’m getting ahead of myself.

    No Cigar, Not Even Close

    Back in 1991, Sega was releasing its own CD-based add-on to the Genesis, aptly named the Sega CD. It was quite the technological breakthrough, but it didn’t come cheap. And as has been established previously, a platform is only as good as the games on it: in the case of the Sega CD, this amounted to a big pile of suck. They even managed to create a Sonic game for the console that was, in effect if not intent, a turd with peanuts. Only 17% of Genesis owners ever bought a Sega CD — and not a one of them doesn’t regret it.

    Then, in 1994, Sega blundered again with the release of the 32X — a $170 add-on which would turn the Genesis into a fully fledged 32-bit system. With the 32-bit era imminent, the idea of gaining access to the future on the (relative) cheap was immensely appealing to many gamers. The console was pre-ordered on a scale of millions, but Sega completely dropped the ball. In a dash to make it to the holiday season, games developed for the platform were rushed, and many of them curtailed (the version of Doom found on the 32X has half the levels of its PC version). The system was one of the biggest letdowns in gaming history (next to the completely unremarkable Nintendo Virtual Boy — a portable gaming system which failed to be either portable or provide entertaining games). This was the beginning of what would become an insurmountably bad rep for Sega hardware.

    Don’t Tell me You’re Pissed, Man

    In 1995, Sega released its true 32-bit console, the Saturn. They released it a few months ahead of Sony’s Playstation, and actually enjoyed an upper hand in the marketplace at first. Sony did not fight against Sega the way they did against Nintendo, having no vendetta to settle. But unfortunately, Sega begat its own undoing. For the release of the Saturn, with its quality games and good third-party support, was seen as a sign of abandonment of the 32X — largely because it was, in fact, an abandonment of the 32X. Almost overnight, legions of Sega fans became distrustful of the company.

    Completely unwittingly, Sony managed to swallow up Sega’s market share simply by not being Sega — and, therefore, appearing less likely to screw the gamer. The Playstation pulled far ahead of the Saturn, and Sega never made any real effort to combat this very real threat to their dominance — the hubristic assumption was that Sony was not a gaming company, and therefore couldn’t win. However, the larger market share made the Playstation (or PSX) more appealing to third-party developers. And although the Saturn was a little bit more powerful, the Playstation was vastly easier to develop for.

    The result was that third-party support for the PSX outstripped that of the Saturn by an order of magnitude. A lack of quality games results in a dead system, and in practice, a lack of third-party developers is the same thing. The death blow for the Saturn came when EA, a monolith in the world of gaming which owed its existence to Sega (and vice versa), jumped ship and declared the PSX its primary platform. Quite ironically, the Saturn was now doomed. And although Sega’s next console, the Dreamcast, was perfection in nearly every sense of the word, and the first console to provide online gaming, Sega never effectively garnered the third-party support necessary to survive. In March of 2001, Sega exited the console market.

    I See you Baby

    Flash back to 1996, and Nintendo is bypassing the 32-bit generation entirely to release its N64, technically superior to anything of its time (although some people were and are turned off by its distinctively aggressive hardware anti-aliasing). Coming out behind the PSX, and still being cartridge-based, it couldn’t quite capture third-party support the way the PSX did, but it managed to snag a market share equivalent to one third of Sony’s.

    While Sony failed to slay Nintendo, the combined blows dealt to it by Sega and Sony demolished its monopoly position. There’s a lesson here that anti-capitalists could learn about the nature of free markets, if they happened to actually be interested in the truth — but that is neither here nor there.

    What kept Nintendo alive was its stable of quality in-house games. Super Mario 64 is still regarded by many as the best 3D platforming game of all time, and Goldeneye stands unrivaled as the most playable and enjoyable adaptation of a movie ever. By contrast, Sega never had a proper Sonic game for the Saturn (apart from the lame isometric platformer Sonic 3D Blast and the sucky racer Sonic R). Once again, the lesson is that quality games are the secret to a gaming platform’s success.

    And so it is with the modern era. The Playstation 2 (PS2), Sony’s successor to the immensely successful PSX, rode the coattails of its predecessor to its currently unrivaled installed base of more than 100 million systems, giving it around 60% market share. The remaining 40% is split between Microsoft’s Xbox console (surviving because of exclusive titles such as the Halo franchise) and Nintendo’s GameCube (once again surviving off of excellent in-house games, although now at the bottom of the totem pole in terms of market share).

    So has it always been. And so shall it always be.

    They’re Like Mopeds…

    A lot of you have probably read this paper, called Worse is Better:

    http://www.jwz.org/doc/worse-is-better.html

    (If you haven’t, consider doing so.) Equally likely, you’re seeing a connection. Indeed, it would seem the ramifications of Worse is Better are incredibly far-reaching, although I think the more general and correct statement is the following:

    Technical merits are usually a lot less important than you might think.

    Or, as I’ve said previously, a platform is only as good as what’s on it. A console is only as good as its games, just as a data medium is only as good as its ubiquity, just as an operating system is only as good as its applications. Empirically speaking, the technical merits of a platform seem to be a marginal factor (at best) in determining how it gets to a position of application dominance.

    What this means is that when debating the merits and demerits of OSS vis-à-vis closed source in terms of potential for success, where success is defined as market share, it is generally pointless to bring up technical points. Windows is not popular because of Windows; it is popular because of everything that runs on Windows. Contrary to the original article’s opinion, Microsoft is absolutely correct to maintain backwards compatibility, because the totality of what runs on Windows is the “secret” to its success. Apple’s policy may be technically superior, but it hasn’t helped it get anywhere near posing a challenge to MS.

    So Linux and Apple have faster releases than Microsoft? Big whompin’ deal. The debate over which system is better, or progressing more rapidly, simply does not matter. What matters is what people can do with the system, and for the desktop things most people want to do, Windows crushes all. In fact, if you look at OSS itself as a platform, then it’s an objective failure in the desktop market if the goal is replacing proprietary software. How good OSS is at producing quality software matters a lot less than how good it is at attracting software producers, and in that regard, it would seem to suck. There is a large range of computer-oriented tasks that you simply *cannot* perform on Linux. And until OSS produces a game better than BZflag, it should be a self-evident fact that not only is it not a silver bullet, it might barely be an arrow.

    I Don’t Have the Answer, but I Know who Doesn’t

    I use Windows, Linux, and Mac on a regular basis — I like Linux the system the most, followed by Windows, followed by the Mac (sorry, but I think the GUI is a weapon of mass gayness). But I actually spend most of my time in Windows simply because of the things I can do in it that I can’t do with the alternatives, or that I can’t do as cheaply, or that I can’t do as well, or some combination of all three. Microsoft has done an extremely good job of attracting the people who actually make a system worth using to their platform, and as a result it fits practically every user’s needs. Hence its market share.

    Of course, things change when you go to the backend, and sure, that’s partly because the requirements are different. But regardless, people don’t just put Linux on the web — they put Apache on the web. Or vsftpd. Or whatever. The fact that Linux has these highly sought things is what really makes it a success. The fact that these things offer the most generally popular price/performance ratio is why they are highly sought. The fact that OSS seems to be good at attracting developers of such things is why they are OSS. But it *doesn’t* mean that, even if OSS is an inherently technically superior development model (and in the future I’ll make the case that that’s bullshit), it is destined for dominance. Reality is much, much, much more complicated than that.

    Postscript

    On an unrelated note, the GNU people can suck my cock. I don’t even want to think about the time I wasted drinking your Kool-Aid. I hope Emacs becomes a sentient entity and bites every single one of you on your GNU/scrotum. And fuck VI too.

  19. > I like Linux the system the most, followed by Windows, followed by the Mac (sorry, but I think the GUI is a
    > weapon of mass gayness).

    You may be unaware that almost everything you can do (in terms of sysadmin and development) with the GUI on OSX is also possible from the hash prompt.

    As for your semi-obvious repression, well, esr already chided me for going off-subject.

    Yes, “Worse is Better” is a very good read (though oft misunderstood), and yes, rpg has forgotten more about software than esr, you, *and* I combined will ever know. You may wish to consult the original version of “Worse is Better”.

    You may also wish to read the book Patterns of Software.

    If you really want to understand why (and when) Open Source works, I recommend Innovation Happens Elsewhere.

  20. OK, my argument may not have been worded perfectly. My point is that the XP/2000 kernels are based on a fundamentally different design idea than Unix kernels. Yes, Unix kernels started with Ritchie & Thompson’s early work in 1969 and were slowly patched up to AIX et al.; and, yes, Linux started with Torvalds’s personal project and was slowly patched up to today.

    My point is that “The Design and Implementation of the 4.4BSD Operating System” and “Linux Core Kernel Commentary” describe a vastly different design than any similar Windows books. The Windows kernel does several things the wrong way in the name of backwards compatibility. But it’s still the wrong way, no matter how patched up it gets.

  21. >even if OSS is an inherently technically superior development model (and in the future I’ll make the case that that’s bullshit)

    That should be…interesting.

    Beware of strawmen. As Fred Brooks wrote in 1986, there is no silver bullet. One of my jobs as a principal theoretician of open source is to puncture the messianic idea that open source is the final answer to the question of life, the universe, and everything. On the other hand, whatever negative things you can say about open source, it is pretty clear that closed source sucks worse.

  22. > I fail to see how open source development will allow a project to scale simply by its nature. No single project in the OSS world is on the same level of complexity as Windows; a Linux desktop is an amalgamation of independent projects, and on average the individual pieces do not interoperate to the level that Windows does. Any argument that open source development will solve complexity problems on this level is based on theory, not evidence.

    Gee, you think that might be the key there?

  23. Jim, thanks for the links. Not sure what repression you’re referring to, but I’ll keep an eye on it (o.O). I really should have read this stuff long ago. Looks like lots of good food for thought. My initial impressions are mixed — I think the distinction between competition and collaboration contributing to innovation and creativity through diversity and refinement respectively is spot on, but I am not convinced of OSS competence in both departments.

    The fact stands that Linux was created in 1991, and 15 years later, we still don’t have a game that doesn’t suck. The fact also stands that Linux has exploded into webserver dominance in that time. I think there’s something very significant about that, but I’m not quite sure what it is.

  24. >The fact stands that Linux was created in 1991, and 15 years later, we still don’t have a game that doesn’t suck.

    Huh? Have you played TuxRacer or FreeCiv lately?

  25. Yeah, I have, and that’s exactly my point — they suck. The only thing they manage to accomplish is technical competence. The only way they could be evaluated as not sucking is if we use a standard of quality that is completely outside the gaming community, or treat them as members of the freebie subcommunity.

    As far as free games go, sure, they’re not bad. Neither is SkiFree. But compared to, oh, say, Halo 2, or Jade Empire, or Splinter Cell, or Ninja Gaiden, or Burnout 3, or Half-Life 2, or Doom 3, or Dead or Alive 3, or… well, see below (confining ourselves to just the Xbox):

    [link redacted because it was flagging the comment as spam, but check out ign.com editor’s choices]

    Anyway, compared to any one of those games, TuxRacer and FreeCiv basically aren’t even real games. It’s not a matter of whether they can hold candles; it’s a matter of whether they can stand before the awe-inspiring might of the truly awesome games of today without defecating themselves inside out.

    Maybe there’s something wrong/stupid about my (and the rest of the gaming community’s) tastes, but if that’s wrong, I don’t wanna be right.

  26. >Yeah, I have, and that’s exactly my point — they suck. The only thing they manage to accomplish is technical competence. The only way they could be evaluated as not sucking is if we use a standard of quality that is completely outside the gaming community,

    I guess that’s me, then. I’m a hard-core gamer from ten years before you were born, and my expectations weren’t formed by computer games. I don’t do twitch games at all, and fancy graphics don’t interest me much. Gimme interesting game logic and repeated play value, and I’m happy.

  27. Pete, are you talking about a lack of quality ‘open-source’ games, or just games that run natively on the Linux platform? Every title made by id Software is available for Linux, including Doom 3 and Quake 4, and the source code to the Doom/Quake{1,2,3} engines has even been released under the GPL, and there are hundreds of mods that work with the engines. Unreal Tournament 2003/4 are both available for Linux. Neverwinter Nights also has a Linux binary (I hope the sequel does too). Check out http://www.linuxgames.com for a more thorough listing.

    If you only want to consider open source games, you may still have plenty to choose from (depending on your taste). FreedroidRPG is a nice little isometric CRPG, and I have heard good things about Vega Strike. There are even two Free/free *beta* MMORPGs: Planeshift and Eternal Lands. If you like Interactive Fiction (i.e., text-based adventures), there are plenty of *nix-compatible games at http://www.ifarchive.org. Of course, you could always play the classic dungeon crawls: Rogue, NetHack, Angband, etc.; if you absolutely *MUST* have graphics, there is a 2D isometric port of NetHack called Falcon’s Eye.

    Though not fully open source, there are several games, such as Ultima VI/VII/VIII, Doom I & II, Quake, and the Infinity Engine games (Baldur’s Gate I & II, Planescape: Torment), that have open-sourced engines, and there are emulators for many old systems.

    Failing that, there is always Wine/Cedega.

    Just keep looking; the pinnacle of Linux gaming is not Frozen Bubble. (Unless, of course, you *REALLY* like Frozen Bubble.)

    >> I don’t do twitch games at all,

    Isn’t TuxRacer a twitch game? It is a racing game after all. Oh, and have I left out anything important that has not been mentioned previously?

  28. >Isn’t TuxRacer a twitch game? It is a racing game after all. Oh, and have I left out anything important that has not been mentioned previously?

    It is. I don’t play it, I just notice that it looks pretty slick.

  29. > I think the distinction between competition and collaboration contributing to
    > innovation and creativity through diversity and refinement respectively is spot on,
    > but I am not convinced of OSS competence in both departments.

    Open Source is an answer to the question, “How do we take advantage of innovation that occurs outside the organization?”

    Note that it is an answer, not *the* answer.

  30. >Open Source is an answer to the question, “How do we take advantage of innovation that occurs outside the organization?”

    No, it’s much more than that. It’s the answer (not just ‘an’ answer) to the question: “How do we bring to software development the only institutional process for error-checking and bullshit-avoidance that has stood the test of time in science and other branches of engineering?”

    Open source is not novel. Four centuries ago, transparency and decentralized peer review made the critical difference between doing alchemy and doing chemistry — between a secretive arcane art and the beginnings of experimental science. The historical aberration is not that software developers re-learned this lesson; it’s that they ever forgot it.

  31. I highly recommend Battle for Wesnoth as a quality open source game. The graphics are not impressive compared to modern games, but they are still pleasing to the eye. The gameplay is amazing, though. Actually, the fact that the game is not on store shelves might be an advantage; David White, the lead developer, said that the game uses 2D graphics rather than 3D graphics since they are much better for gameplay, though that costs the “ohhhh-ahhhh” factor needed to sell pretty boxes in retail stores.

    Surely, you can find a way to not like it, but it has made my daily hour-long commutes into and out of New York seem all too short.

  32. In my experience, developers of open-source software care a lot more about portability than do their closed-source counterparts. I wouldn’t like to say whether that’s a beneficial effect of peer-review, or something that came out of our academic/Unix heritage (remember, “real computer scientists don’t program in anything less portable than a number 2 pencil”). Anyway, this means that the Linux kernel guys should have a much easier backwards compatibility problem than the Windows kernel guys. This could be regarded as a win for open-source, of course, but it requires most or all of your applications to be open-source rather than just your OS.

  33. >In my experience, developers of open-source software care a lot more about portability than do their closed-source counterparts.

    Historically speaking, it’s the Unix heritage. I can speak with authority on this because I was there :-). Back in the mid-1980s I wrote the first book on good Unix portability practice. (It’s the only book I’ve done that’s dated enough to have gone out of print.)

    I’d go so far as to say that nowadays open-source people carrying forward Unix tradition may obsess about portability a bit more than is strictly rational given that 98% of their market is a PC monoculture. But I wouldn’t want to see that change, because some of the discipline of portability has valuable collateral effects on transparency and readability.

  34. >> I’d go so far as to say that nowadays open-source people carrying forward Unix tradition may obsess about portability a bit more than is strictly rational given that 98% of their market is a PC monoculture.

    With the rise of programmable PDAs, cellphones, etc., we may see a sharp increase in the demand for portable applications. Since many open source games do not require the processing power or memory that commercial games do, many of them could easily be ported to these new platforms. Since Sony will install Linux on the hard drive of every Playstation 3, another appealing platform has been opened to open source developers. It is about time to shake up the staid PC monoculture.

  35. > I guess that’s me, then. I’m a hard-core gamer from ten years before you were born, and my expectations weren’t formed by computer games. I don’t do twitch games at all, and fancy graphics don’t interest me much. Gimme interesting game logic and repeated play value, and I’m happy.

    I suck at BZflag, but that doesn’t mean that BZflag itself sucks.

    > If you only want to consider open source games, you may still have plenty to choose from (depending on your taste).

    If you follow contemporary gaming culture and agree with its general standards for “good,” it is clear that the proprietary world has infinitely more good games, because by these standards open source has produced none (yes, I’m talking about open source games — and not hand-me-downs; Quake 2 doesn’t count). What you guys are essentially saying is that you employ a different standard — your desires are met by the open source offerings. But that doesn’t reflect my own opinion, nor that of the rest of the multi-billion-dollar gaming community. And it doesn’t matter who’s wrong or right (as if there is such a thing), because like it or not, things like this will be a deciding factor in the market share of a platform. If you consider OSS to be a platform, it’s a pretty abysmal failure in the gaming market.

    The same applies to many other application spaces. Sure, you might not need anything more than Ardour for your music creation requirements — but the rest of us want Cubase and Logic and the hundreds of thousands of awesome proprietary plugins on kvraudio.com. You might not need anything more than the GIMP — but the rest of us want to be on the cutting edge with Adobe products. Perhaps you think your tastes are the correct ones to have, but that doesn’t matter, because it’s a (relatively) free market and we’ll vote with our dollars — and thus far, the platform of OSS fails in these areas also. In the time it has taken Ardour to not even reach 1.0 status, Cakewalk completely rewrote their DAW, and when they released Sonar to great commercial and critical acclaim, nobody knew or cared how it was developed. The fact that the core of it is still a one-man show is something to think about, though.

    Clearly, if things keep going like this, OSS is far from achieving dominance. In fact, I’m promulgating my gut prediction that it will never achieve desktop dominance. My gut also tells me that the future of OSS is going to be heavily intertwined with the web, and vice versa.

  36. “How do we bring to software development the only institutional process for error-checking and bullshit-avoidance that has stood the test of time in science and other branches of engineering?”

    But a lot of proprietary software advances do actually happen out in the open, in academia and elsewhere. The knowledge needed to write a compiler, or a top-shelf video game, or an awesome audio application — none of it is secret, and anybody can do it. Same for methods and practices of software development. It’s the doing of the dirty work that actually happens behind closed doors. The QA systems feasible under proprietary software might not be able to compete with the Linux kernel completely, but as I’ve established previously, he who has the least bugs does not necessarily win. And beyond that, the bulk of OSS projects have *fewer* resources at their disposal than their proprietary alternatives. I don’t see how we can expect Linus’s Law to work when there’s only one guy doing anything development-wise for a project — on the audio side of things, at least, such operation was the rule, not the exception.

    Really, things like Linux the kernel are not all that representative of most open source projects.

  37. >It’s the doing of the dirty work that actually happens behind closed doors.

    And it’s the doing of the dirty work that actually matters. Theory and practice are less alike in practice than they are in theory.

    >The QA systems feasible under proprietary software might not be able to compete with the Linux kernel completely, but as I’ve established previously, he who has the least bugs does not necessarily win,

    Both your statements are true, but they don’t add up to the conclusion you seem to want to reach. At most, they show that the effects of a superior development method can be swamped by other factors. This is not news.

    You and I are not as far apart on this as you might think, Pete. I’ve written in one of my papers that there are specific areas of short-lifetime software in which the underlying economics that drive most software towards open source don’t apply as strongly. I called out games as an example, and I did so for specific and logically worked-out reasons. Go read The Magic Cauldron.

  38. > Both your statements are true, but they don’t add up to the conclusion you seem to want to reach. At most, they show that the effects of a superior development method can be swamped by other factors. This is not news.

    I’m not entirely sure what my point is, necessarily. A lot of this is ex-GNU shell shock, I think. I actually have a copy of your catb book on my shelf, and I’ve read it cover to cover, although it has been a while.

    I think what I’m trying to say is: where’s the pudding?

    I mean, I now use the three major desktop systems in existence, and while my opinion of what makes a good system puts Linux at the top, my opinion of a good platform puts Windows waaaaaay at the top. I see plenty of application spaces where the proprietary offerings far outshine the OSS alternatives — and we’re not talking short-term stuff. My own audio application of choice, FruityLoops, has been around for 8 years. In fact, the year it was released, 1998, was the same year that Linux started “blowing up.” And 8 years later, FruityLoops is called FL Studio and is an incredibly sleek, powerful, fast, and stable music production environment, while the Linux offerings are clunky, weak, slow, and *crazy* buggy. Why? If OSS is better, whence the suck?

    Further, off the top of my head, I honestly can’t think of any OSS thing that is eminently superior to its proprietary alternatives — I just can’t. Sure, I can come up with some debatables, like PHP vs. ASP and stuff, but nothing clear. The only clear arguments I can make in favor of OSS are that it’s cheaper, the OS is more secure, and it’s OSS, if that’s your thing for whatever reason.

    The only argument that consistently makes sense to me is the “just for fun” thing, although there are people making a buck off of this, so it’s not necessarily that. But superior? Yeah, in theory, but I’m having a problem with the reality I’m bearing witness to.

  39. >Further, off the top of my head, I honestly can’t think of any OSS thing that is eminently superior to its proprietary alternatives

    That part’s easy. Firefox and Apache are just for starters. Python beats the living crap out of any proprietary language I’ve ever seen. Going to the more obscure, I’ve never seen a better FTP client than lftp. And I haven’t found any better way to compose a book than DocBook-XML.

    I gather you have some kind of bad history with Emacs, but if you want a fully programmable editor nothing else even comes close.

    >If OSS is better, whence the suck?

    No developer got enough of an itch, I guess. That’s the usually-true answer, anyway.

  40. I guess a distinction needs to be made between open source as a software-conjuring mechanism and open source as a software development mechanism, if that makes any sense.

    As for the applications, Opera is certainly on par with Firefox. Python is definitely the heat. I don’t know enough about servers, but all I’ve ever used are the Linux offerings, and they certainly suit me fine. But Linux on the server is a given. Further, I don’t know anything about writing books, so I’ll leave that be as well.

    But for FTP — lftp is probably the best CLI client in existence, but mounting an FTP directory and using it graphically is better. Windows, Linux, and Apple all do this basically the same way, and it’s equally good on all platforms.

    And if somebody rewrote a spiritual descendant of Emacs for the 21st century, with non-archaic keybindings and UI design, it might be awesome. But what it is is horribly outdated.

    And like I said, this doesn’t exactly make OSS look like the reigning champ. It certainly produces some good stuff, but I’m not seeing it as the Genghis Khan ass-whupper of software.

  41. “If there’s no right to fork, it isn’t open source.” True, if you are working for yourself. If you are getting a salary to write or modify open source code for a commercial product, and you want to keep that salary, you will stick to the fork that your manager likes.

    Most of us have to make a living, and very few of us are lucky enough to be able to do that without conforming to someone else’s idea of the path to a salable product…

  42. >“If there’s no right to fork, it isn’t open source.” True, if you are working for yourself. If you are getting a salary to write or modify open source code for a commercial product, and you want to keep that salary, you will stick to the fork that your manager likes.

    You seem to miss the point. The right to fork is still present, it’s just that no open-source license can magically solve your office-politics problems or make your management smarter.

    It’s not magic pixie dust. (Thank you, JWZ.)

  43. >I guess a distinction needs to be made between open source as a software-conjuring mechanism and open source as a software development mechanism, if that makes any sense.

    Pete, dude, I love ya — but no, it doesn’t. I think you might benefit from actually getting involved in an open-source project, as a developer, yourself.

    >But for FTP — lftp is probably the best CLI client in existence, but mounting an FTP directory and using it graphically is better.

    Nah. Ever try to wild-card a fetch in a graphical client? It sucks. I’ll take lftp every time.

    >And if somebody rewrote a spiritual descendant of Emacs for the 21st century, with non-archaic keybindings and UI design, it might be awesome. But what it is is horribly outdated.

    How? I’m not asking to be tendentious, I’m really curious how you see this. I grok Emacs so thoroughly it’s probably reverse-transcriptased into my germ plasm by now, so it’s difficult for me to see it from outside.

    >And like I said, this doesn’t exactly make OSS look like the reigning champ. It certainly produces some good stuff, but I’m not seeing it as the Genghis Khan ass-whupper of software.

    No. It’s more like the tide coming in. Doesn’t look like much until you notice all the proud sandcastles are melting…

  44. On Microsoft and back-compat: I do agree that their biggest problem is their development model (in an interesting way, see below). I don’t think back-compat has a very low cost for them either. Part of the reason for this is that I read Raymond Chen’s blog daily, and I see some of the contortions he’s gone through in the shell to stop broken programs from failing in new versions of Windows. (And along the way, I’ve learned a few things that I should never do while executing my day job.) I usually don’t think those were good decisions (after all, the programs were *broken*, and allowing programmers to continue to do broken things is *NOT* a good idea), but I do see the reasons for those decisions. Most of the time, it’s because if they broke those programs, people wouldn’t blame the programs, they’d blame the OS. (Because people are, as a rule, extremely stupid.)

    However, the reasons for the back-compat decisions that Microsoft makes are almost *always* that both the OS and the third-party programs are closed-source. (And if that isn’t the reason, then the reason is that an accountant is doing the programming. Yes, this happens; batch files and macros don’t *look* to be that hard at the outset, to the point where your average accountant thinks “I can do that!” and fails miserably.) Taking this into account, Eric’s right; the problem is that the programs are not open. But it’s not that “the programs are not open” and “back-compat is a huge problem” are exclusive; back-compat is a huge problem precisely *because* the programs are not open.

    (I’m not sure how the open-ness of the OS itself factors into this. It is true that a closed OS encourages closed programs, and vice versa; perhaps that’s the extent of the relationship.)
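
    A minimal sketch of the kind of shim Chen describes, in Python with hypothetical names (the real Windows appcompat machinery is far more elaborate): the platform deliberately preserves a bug because it cannot fix the closed programs that depend on it.

        # Hypothetical illustration, not real Windows code. Suppose v1 of an
        # API wrongly omitted the trailing slash, and shipped programs came
        # to depend on that bug.
        def temp_dir_correct():
            return "C:/Temp/"

        def temp_dir(caller_built_for_version=2):
            path = temp_dir_correct()
            if caller_built_for_version < 2:
                return path.rstrip("/")   # reproduce the old bug on purpose
            return path

        # Each shim like this is permanent freight: the broken callers are
        # closed source, so the OS must keep lying to them forever.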

  45. Whoops, forgot to mention: When the Linux system changes in an incompatible way, they *do* leave broken programs out in the cold. For instance, the move from pthreads to NPTL broke a few programs that expected to use undocumented pthreads internals. But that didn’t stop the adoption of NPTL; instead, those broken programs got fixed. (Eventually.) Even devfs, which was a decent way of approaching the problem of device node management, eventually got to the point where it was just too big of a wart to deal with anymore (plus it had several unfixable bugs), and it got removed. Programs that depended on the devfs device names were left out in the cold at that point.

    But even breakages like those are infrequent, and I started to wonder why. Then I realized it’s because the Right Design often wins in the beginning (mostly due to the whole thing being open); when there’s nothing to fix in the interface between userspace and kernelspace, it’s hard to break programs, because you’re only adding stuff to that interface. (And only after a peer review of what’s being added.) When the earliest pre-pre-alpha code is available and you’re actively soliciting opinions on it, you’re going to release a better product when 1.0 comes around, and it’s going to need fewer incompatible changes as time moves on.
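
    A small Python illustration of that last point, with invented names: when an interface starts out right, later capability arrives additively, as optional parameters or new calls, and programs written against the original interface keep working unmodified.

        # Illustrative: additive evolution of a stable interface. 'flags'
        # arrives later with a compatible default, so old callers still work.
        def read_device(path, count, flags=0):
            if flags & 0x1:            # hypothetical "nonblocking" flag, opt-in
                pass                   # new behavior would go here
            return b"\x00" * count     # stand-in for real I/O

        read_device("/dev/fake0", 16)             # caller written against v1
        read_device("/dev/fake0", 16, flags=0x1)  # caller using the addition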

  46. >back-compat is a huge problem precisely *because* the programs are not open.

    I tried to make this point in an earlier response, but I think I didn’t say it loudly enough. Thanks for reinforcing it from an independent viewpoint.

  47. One of the biggest omissions I see when people compare Linux to Windows is that Windows is really something you should be comparing to a substantial portion of a Linux distribution, but people always try to just compare it to the kernel.

    But that aside, the biggest problem with proprietary computing is that keeping your source code secret provides no actual benefit. If the proprietary software companies would just distribute source code to licensees who wanted it, those licensees would have a protected legal right to produce and distribute their own modifications as patch files. Many companies that produce development libraries already do this, and there has been no increase in piracy.

    There is, however, a benefit in retaining the exclusive right to distribute: a financial benefit in the short term, and an architectural benefit in the long term. As long as the open source community treats these two things as equally essential, the proprietary computing world will not give us either of them.

    So if we simply decouple the redistribution freedom from the availability of source code, we can show the proprietary software world the real benefits that arise from releasing the source, and we might make more headway. But as long as we attach an economic condition to a political argument and call it a technical necessity, smart people will readily perceive that we are full of shit and ignore everything we say. It may not be fair to throw out the whole open source platform because of one stupid argument, but it’s also not fair to make that stupid argument in the first place.

  48. Another thing got to bugging me.

    > Apple has been able to ship four new versions in the last five years
    > Linux, entirely open-source, has bucketed along even faster.

    What exactly are the changes made in Apple’s O/S and Linux over this time?

    See, the difference between XP and Vista is shocking. There are new things all over the place that are good for developers, administrators, power users, and casual users alike. When I compare Vista to XP, I find whole stacks of features that make me say “okay, THAT is COOL”.

    Linux just looks like Linux. I find some feature in the new Linux kernel that effectively lets me have a four year uptime instead of just forty-two months. Gee. What a fantastic benefit. Everything works the same. Everything looks the same. Most of the tools I use will be the same. Wow, what a great upgrade.

    Vista lets me navigate among my windows on a 3D display with my scroll wheel. That actually makes my life easier; I switch windows a lot. You know what I always envied on Apple machines? The dashboard. Well, now we have the sidebar, where we can put gadgets. There’s speech recognition and synthesis built right into the O/S. And don’t even get me started on Office 12, which makes everything else look downright primitive.

    So forgive me for breaking with the party line, but I just can’t get excited about a 0.4% performance increase in software RAID drivers the way I can about Volume Shadow Copy. Most of the changes in Linux don’t affect me at all, but Microsoft’s changes will make a distinct difference in my everyday life.

  49. > Pete, dude, I love ya — but no, it doesn’t. I think you might benefit from actually getting involved in an open-source project, as a developer, yourself.

    I actually did — two such projects. See http://gazuga.net/specimen. I’ve since abandoned them, for reasons which I plan to document online. Suffice it to say, things did not go according to plan at all.

    But as for what I meant by my comment: ceteris paribus, OSS might produce better software than proprietary methods. But all things aren’t equal, and there are a lot of areas where it doesn’t look like OSS has been able to pick up the requisite steam to get going. IMHO, that’s just as important as any technical merits it may have.

    > Nah. Ever try to wild-card a fetch in a graphical client? It sucks. I’ll take lftp every time.

    Yeah, but navigating graphically is better for selecting multiple disparate entities at once, and you have all the other usability benefits of a GUI to boot. And I believe wild-card selection can be performed in the Gnome file manager (nautilus) — actually, come to think of it, I don’t think you can do that in Windows or OSX, although it’s possible with third-party ftp clients like WS-FTP and LeechFTP.

    > I grok Emacs so thoroughly it’s probably reverse-transcriptased into my germ plasm by now, so it’s difficult for me to see it from outside.

    Well, that’s exactly the problem that I ran into. When I finally became one with emacs, I became incompetent when interacting with the rest of the world. For instance, I’d be using Firefox to write a post for a forum, and I’d go to delete a line, and next thing I know, my entire post had been replaced with a k (C-a C-k). So I try to undo, and now I just have an underscore next to my k (C-_). OK, take a deep breath, find the undo button… what is it… C-Z! Got it.

    So, that kinda stuff sucked. And if I got used to some application which abided by modern UI standards for its platform, I’d go back to emacs and I’d be a retard. Pushing C-s to save, C-f to search, etc.

    As for emacs itself, it’s horribly undiscoverable in the context of the modern UI. Once you know how it thinks, it’s not that bad, but it’s no better than a help search in VisualStudio. The nice thing about the modern UI idiom is that, wherever you go, you can expect some base level of functionality. But since emacs breaks that, you’re really starting at ground zero all over again, and it’s just not fun. When I first started using it, I spent *two hours* trying to get it to indent my C the way I wanted it to (GNU mode is SO FREAKIN’ WEAK). I’ve never had an experience that bad before or since.

    Of course, now that I know how it works, getting it to do what I want within its capabilities is a cinch — but it’s not any more of a cinch than it is in the rest of the world, and I’m speaking a language that is useless outside of emacsland (so to speak). And there are a lot of things it just does not do at all, like intellisense and project management, which I really miss from VS and co. CEDET tries to fix this, but it just does not work.

    The one thing emacs objectively does have going for it is ease and depth of programmability — and I’ve just never used that, nor have I been compelled to. So ultimately, I just have no reason to use it, as opposed to an application which doesn’t require me to carve away and devote an entire chunk of my brain to it. And I’m no expert, but I’m pretty sure Eclipse is highly programmable, so the value of emacs in general might be questionable — but that’s all behind me now.

  50. Caliban Darklock:
    > And don’t even get me started on Office 12, which makes everything else look downright primitive.

    This is also from the perspective of someone reading Raymond Chen’s blog, but Office 12 is SO EXTREMELY STUPID that it’s going to cause another wave of back-compat problems for Microsoft in the future. Yes, maybe it looks cool. (I do not agree with you. But then, I use twm at home, so I’m not a good person to be asking about this kind of thing.) But it looks cool at the expense of actually working with the OS — pretty much everything it does is owner-drawn, including the caption bars, for $DEITY’s sake. So if Microsoft decides to change something (e.g. making real caption bars required) in the future, Office 12 is going to force them to abandon that (IMO) useful idea.

    Yet more evidence that Stupid Things get done in a closed-source project. (And yes, I know, Stupid Things don’t always get avoided in open-source projects, but as Eric has said already, even if OSS isn’t always the best, it’s still orders of magnitude better.)

  51. > But it looks cool at the expense of actually working with the
    > OS — pretty much everything it does is owner-drawn, including
    > the caption bars, for $DEITY’s sake.

    Let me clarify.

    Office 12 WORKS so much better, everything else seems to be the digital equivalent of sticks and rocks. And yes, that means it doesn’t use the common controls, because the controls it needs are by definition not common. Office has historically evolved the user interface with every release, and it has hit a wall in how much it can do. It’s had the same UI concept for about twenty years, and proceeding farther along the same path will not make things better. IMO, it has not made things better in quite a long time, and has instead been making things worse. The recent advances in Office have been in new applications, e.g. OneNote and InfoPath, and in packaging decisions like the inclusion of Publisher in the Office Pro suite. Microsoft noticed this, and they did something about it: they reinvented the Office UI in a way that *does* make things better, and that provides fertile ground for further improvements.

    So there are two choices: you could create new common controls just for Office 12, thereby adding a new O/S component for the sake of one application… or you could use owner-drawn controls, with the expectation of adding common controls once they prove to be useful and stable. Microsoft have done the former in the past, and it has bitten them in the ass with immediate backward compatibility problems. Doing the latter, on the other hand, faces them with backward compatibility problems *later*, when they can give ISVs proper notice and detailed guidance on how to address those problems.

    I believe they have made the right decision.

    > So if Microsoft decides to change something (e.g.
    > making real caption bars required) in the future,
    > Office 12 is going to force them to abandon that
    > (IMO) useful idea.

    I don’t believe Microsoft are stupid enough to be “forced” by Office 12 into not implementing a useful idea. One release of Office actually included a combo box in a menu, apparently on the grounds that combo boxes were useful in toolbars – which were, after all, originally designed as iconified menu equivalents – so maybe they would be useful in menus. It was a stupid idea, so it disappeared. Likewise, a former rule that toolbars could not contain any controls other than buttons was dropped from the UI guidelines when combo boxes proved useful on them.

    Office is historically incompatible with the formal Windows UI guidelines, and what has value in a given release of Office usually makes its way into those guidelines, while what does *not* have value tends to be dropped from Office. Which is exactly as it should be.

    > even if OSS isn’t always the best,
    > it’s still orders of magnitude better

    I have yet to see the OSS community produce a stable and reliable application that can be termed a “success” in less than ten years.

    With an average release cycle roughly twelve to eighteen months long, and an average of three major releases before “success”, the closed source community habitually produces them in three to five years.

    I don’t think either of these is definitively better than the other. I believe a ten-year development cycle is something you rarely get in closed source, and I think it produces a much better product on that twenty percent margin where most people aren’t working. If you’re on that twenty percent margin, the OSS development cycle is the only thing producing quality products you can use with any level of frequency. There are exceptions; WinZip and UltraEdit32 are closed source, have been around for over a decade, and both represent best-of-breed apps that quite firmly support the marginal user.

    However, there is still the matter of that initial eighty percent. Closed source products, by virtue of their shorter development cycles and need to hit market windows, do a much better job of identifying and prioritising that eighty percent. So if you’re NOT on the margin, a closed source product is frequently more in touch with what you want right now.

    Case in point, Office 12. I’m not a margin user of Office 12. I write reports, build graphs, and prototype databases. The eighty percent solution is what I need, and when I compare MS Office to OpenOffice.Org, MS Office works reliably enough for me. Office 12 makes doing what I do even easier. OpenOffice.Org frequently crashes and loses my work, so it’s infinitely worse than MS Office.

    But on the open source side, there is the question of a web server. I do weird things with my web server. I’m definitely a margin user. And when I stack IIS up next to Apache, IIS looks like crap because it doesn’t even do what I need. When I compare a Windows web server to a Linux web server, Windows is doing an awful lot of things I don’t need. I look at Server 2003 Web Edition, and it doesn’t run standalone – it has to connect to a separate machine for database services, for example. I want my web server on one box, completely isolated from my corporate network. So Windows and IIS lose, Linux and Apache win, and consequently I don’t even have to consider MSSQL or ASP. I just put up a LAMP stack, and it’s infinitely better than a Windows box because the Windows box won’t do what I want in the first place.
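    (Concretely, and only as a sketch: on a Debian-flavored box, putting up that one-box LAMP stack is about one command, though the package names below are era- and distro-dependent:)

    # install Apache, MySQL, and PHP, with the Apache PHP module, in one shot
    apt-get install apache2 mysql-server php5 libapache2-mod-php5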

    So open source does have its place, and closed source does have its place. But there are serious problems with the politics of the open source movement that are impeding the commercial adoption of real useful practices from the open source community. It’s this whole free redistribution thing; I just can’t get behind it. It’s a dumbass idea. The OSI guidelines provide for a perfectly acceptable set of license terms that protect redistribution rights, so why not make those terms the norm in our presentations to business representatives?

  52. See, if scientific research is the model for software development, then we should note that scientific research is two-tiered: general research done by universities etc. is open, but the specific product research that companies build on top of those general results is not, simply because it gives a competitive advantage.

    I think we can expect the same for software development in the long run: the general level, such as OSes, device drivers, and general backends and engines, will tend to be open, because it can greatly benefit from peer review; but for direct, “sexy” end-user applications, closed-source ones will always be more popular, because delivering something cool that no competitor has, and that looks nice in an ad, brings more profit. The sexy features will be cloned by your competitors within a year or so anyway, so why would you make their life easier?

    Open Source is suitable for those development challenges that are technically hard – and these tend to be the backend.

    Closed source works better for features that are easy to develop, where only the idea itself makes them valuable: once the idea is out, everybody can easily clone it, you don’t need peer review because it’s not a development challenge, and you need to milk as much cash as possible in the short period before competing products appear.

    A game is a good example. The 3D engine is better off being open. But why would you open up your models? Or the script defining the game rules?

  53. > It’s this whole free redistribution thing; I just can’t get behind it. It’s a dumbass idea. The OSI guidelines provide for a perfectly acceptable set of license terms that protect redistribution rights, so why not make those terms the norm in our presentations to business representatives?

    Can you clarify? These statements seem contradictory to me, so I don’t think I’m interpreting properly.

    Shenpen:

    I agree with the idea that this stuff tends to naturally get two-tiered, but I have some other facts to interject with the rest of your post. First, it’s not just a matter of looking sexy. I certainly didn’t leave Linux and go to Windows for sex appeal. I went because it was simply impossible to create the music I wanted to create on Linux. Maybe there’s something wrong with me, but after 6 years of trying, and personally coding what ended up being the bulk of my toolset, I just had to admit that it wasn’t happening. I see no reason why this should be confined to the audio realm, either. Movie making and CAD seem to be in a similar boat, and I’m sure there’s more. So while sex does sell, so does functionality.

    Second, open source is *not* inherently better suited to hard technical challenges. Examples abound of proprietary offerings whooping up on the OSS alternatives. The Havok physics engine makes ODE look like a toy. The Doom 3 engine kills Ogre3d. Photoshop simply does a lot of stuff which the GIMP doesn’t, and in CMYK to boot. FruityLoops has no equal on Linux. And you know what I think of the games. Maybe these proprietary products would turn out better if they were developed in an OSS fashion, but without proprietary licensing, would they end up being developed at all? OSS seems to be nowhere near as good at motivating programmers as proprietary software.

    So it follows that, empirically, a game engine is not better off being open. The OSS engines are weak sauce. Proprietary companies like Epic and id make awesome engines, then license them to other game developers, and use the profits to motivate development of newer and awesomer engines in the future. Once again, ceteris paribus, OSS might produce a better engine, but it doesn’t look like people can be arsed to do awesome work for free. If somebody could figure out a way to make money off OSS the way you can off of proprietary software, the story might be different. But it is what it is, and it does not make OSS look like a godsend.

    Plus, there is a lot of religious dogmatism in the OSS community that stifles development, but I’ll get into that on my own later. Suffice it to say, I flip-flop between thinking that without the GNU people we’d have nothing, and thinking that the GNU people are the root of all that sucks.

  54. >OSS seems to be nowhere near as good at motivating programmers as proprietary software.

    This seems obviously wrong. Open source is so good at motivating programmers that they build entire operating systems (not just kernels but userlands too) on their own time.

    >Second, open source is *not* inherently better suited to hard technical challenges.

    I think the record shows that it is, but I’ll admit that the fact basis for that claim could be argued for weeks.

  55. >See, the difference between XP and Vista is shocking. There are new things all over the place that are good for developers, administrators, power users,
    >and casual users alike. When I compare Vista to XP, I find whole stacks of features that make me say “okay, THAT is COOL”.
    I’ve seen a few that are fairly sweet, most of which are related to the new GUI. Pretty much all of which I can do in Linux right now using XGL, not in some asymptotic period of time, assuming Vista ever actually gets released.

    >Linux just looks like Linux. I find some feature in the new Linux kernel that effectively lets me have a four year uptime instead of just forty-two months. Gee.
    >What a fantastic benefit. Everything works the same. Everything looks the same. Most of the tools I use will be the same. Wow, what a great upgrade.
    Well yeah, it seems bland when they mumble on about some obscure scheduler improvement in the kernel, but I found myself compiling a large C++ program in Windows the other day. During the half-hour it took, the interface was a dog. Doing anything remotely significant (say, loading Firefox) took indeterminate periods of time – I kept getting bored of waiting for things and wandered off.
    By comparison, I’ve forgotten I was compiling things in Linux before now, and only remembered when I wondered why UT2004 was so slow. I find myself surprisingly grateful for all those boring-sounding kernel improvements.
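    (To be fair, on either OS you can claw back some of that interactivity by dropping the build’s scheduling priority yourself; a sketch, where make stands in for whatever build command you use:)

    # start the compile at the lowest CPU priority so foreground apps stay snappy
    nice -n 19 make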

    >Vista lets me navigate among my windows on a 3D display with my scroll wheel. That actually makes my life easier; I switch windows a lot. You know what I
    >always envied on Apple machines? The dashboard. Well, now we have the sidebar, where we can put gadgets. There’s speech recognition and synthesis built
    >right into the O/S. And don’t even get me started on Office 12, which makes everything else look downright primitive.
    I didn’t want to bang on about XGL too much more, but the task switcher is pretty cool…
    Hasn’t Dashboard only been in OSX 10.4? So when you say you’ve always envied it, that’s for about ten months or something?
    Either way, there have been plenty of alternatives for all three platforms for many moons.

    >So forgive me for breaking with the party line, but I just can’t get excited about an 0.4% performance increase in software RAID drivers the way I can about
    >Volume Shadow Copy. Most of the changes in Linux don’t affect me at all, but Microsoft’s changes will make a distinct difference in my everyday life.
    Those changes may make a difference to a lot of people, _if_ they ever actually get released, whereas in Linux I can see real changes all the time. I’ve only been using it for a few years, but it is massively better than it was when I started – if I still had a copy of Windows on here, I’d still be using XP with the exact same features. Not to mention that I’d have had to pay another few hundred dollars for a new license when I upgraded my machine, in order to get the exact same OS again…

  56. Just my little comment: I hardly know a thing about economics, but I’ve had enough history lessons to really appreciate you comparing open source to the free market. It is probably one of the best analogies I’ve heard in a while.

  57. Isn’t this all actually politics disguised in technical terms? I mean, closed source resembles the classical right-wing view, where private property is considered sacred and treated as a big lump of treasure to sit on. On the other hand, the fuckwits on Slashdot who attack, in packs, everybody with a slightly different viewpoint resemble left-wing political-correctness fanatics quite closely. It is also easy to draw a line from Eric’s libertarian political views to his role in OSS. And RMS’s GNU culture is almost directly political too; it resembles the flowerchildren of the sixties protesting against ’Nam. So it quite looks like it’s all politics.

  58. Eric: going back to your question about what Emacs looks to an outsider, I remember reading your criticisms of vi and Emacs in TAOUP and thinking you had it exactly backwards. In vi, adding features is relatively hard, and there are rigid user-interface guidelines in the form of the command+number+movement structure and the format for ed commands. This tends to give the interface a certain orthogonality and mnemonicity. In Emacs, adding features is relatively easy, and there isn’t much in the way of user-interface guidelines, or at least not that I’ve noticed – sure, there are things like “C-a for beginning-of-line and M-a for beginning-of-logical-unit”, but they don’t cover much of the interface, and tend to run out when you need them. This makes the interface spectacularly ad-hoc. C-x ` for “next compiler error” – I mean, what? The vim equivalent is :cn, which at least has a mnemonic: “compiler next”. Previous compiler error is :cp, for which Emacs doesn’t seem to have an equivalent. And finding what you want in Emacs help is often a massive exercise in frustration. Compare the vim help, which I find to be pretty good. I’m certainly not saying that vi/vim is perfect, but to accuse it of ad-hocity in a comparison with Emacs seems somewhat unfair :-)

    And, as previously mentioned, Emacs has keyboard shortcuts that are completely different to everything else. vi has this problem too, but I find that vi is so different that it doesn’t cause confusion: it’s like the difference between me visiting the US and, say, Viet Nam. In the US things look similar and sound similar, so I keep getting tripped up on the differences, whereas in objectively more different countries I’m more aware of the differences and more able to adapt to them.

  59. > This seems obviously wrong. Open source is so good at motivating programmers that they build entire operating systems (not just kernels but userlands too) on their own time.

    With the implication being that Open Source + money would be totally rad — but this equation has thus far been mostly unattainable. As I’ve pointed out a few times now, there are a lot of things where OSS falls down.

    > I think the record shows that it is, but I’ll admit that the fact basis for that claim could be argued for weeks.

    My standard of proof is that the better the methodology, the more hard technical problems we can expect to see solved using that methodology. Like I’ve said, I’m not seeing this weighing favorably on the OSS side. I know it sounds immature, but if OSS is so great, why are there so many areas where it is weak sauce compared to the proprietary offerings? Or is that not a factor in evaluating its utility?

  60. Shenpen:

    Odd. RMS’s mother, who actually *was* a hippie, heard her son start talking about freedom when he was a young man and was afraid he’d turned into quite the fascist.

    One side effect of the elements of his politics that makes him interesting to listen to (even if you don’t always agree with him, which I don’t) is that he is not classifiable according to conventional politics. He thinks issues through instead of reciting nostrums.

  61. I am going to repeat my controversial but imho quite correct claim that Windows is such a compelling platform because Microsoft is a monopoly. To cite my previous example (one which I know is dear to Bessman’s heart), the games on Windows will always be better because Windows has Direct3D, which can fold in support for advanced 3D rendering hardware faster than the OpenGL ARB ever could. Why? Because it’s developed by Microsoft. There’s one company which controls both platform and APIs, one stop for NVIDIA and ATI to turn to for support for their new features, and for game devs to turn to to learn what those features are and how to exploit them.

    You can talk about how much skulduggery and bad business practices it took for Microsoft to get there all you like, but the fact of the matter is they did it, and their customer base prefer the Microsoft monoculture over the OSS bazaar, with its profusion of often-incompatible choices and often conflict-ridden decision making process. For developers, the advantage is: the less you have to think about your platform, the more you can think about your applications. For end users, the advantage is: the less you have to think about your applications, the more you can think about your work. And at the end of the day, people just want to browse the Web, chat, send e-mail, play games, watch movies or photos or listen to music, type up a letter to Grandma or tweak their 3rd quarter sales analysis spreadsheets.

    When it comes to giving end users an easy way to do this, Bessman is right: open source is weak sauce. And all the ideology in the world won’t fix it.

  62. > There’s one company which controls both platform and APIs, one stop for NVIDIA and ATI to turn to for support for their new features, and for game devs to turn to to learn what those features are and how to exploit them.

    This only applies to the world of PC gaming, and that is arguably as much to do with the fact that 99+% of desktops are running Windows — sort of a chicken-and-egg scenario where the lack of games makes an alternative platform less appealing, and the low appeal of an alternative platform makes it less interesting to game developers. But in the gaming world as a whole, Microsoft’s Xbox has an installed base of a mere 25 million units, compared to 20 million for the Gamecube and 100+ million for the PS2. So, their platform is far from dominant.

    Ironically, I suspect that if you’re going to do “just the basics,” it’s six of one, half a dozen of the other when it comes to choosing a platform. It’s when you’re on the cutting edge that disparities become more evident. And while there are plenty of top-notch commercial games available for Linux, my point is that there are *no* top-notch open source games, anywhere, period. My wild guess why? Nowhere near as much money — effectively none.

  63. Jeff, Peter: Perhaps you missed the part in The Magic Cauldron where Eric talked about when it makes sense to close the sources. Basically, it’s any point where the rent you can extract from the closed source exceeds the return you can get from widespread peer review.

    With games, the rent is quite high, and the return is fairly low. It doesn’t make a lot of sense to throw open the sources for games. (I believe Eric already conceded this earlier in the discussion here, but it seems to have been ignored.) See, for instance, chapter 10 of tMC:

    > In a competitive market, therefore, customers seeking high reliability and quality will reward software producers who go open-source and discover how to maintain a revenue stream in the service, value-add, and ancillary markets associated with software.

    (Emphasis mine.) Note that in the games market, customers are not seeking high reliability and quality, just a medium level. As long as the game only rarely crashes, it’s good enough, even if it does have huge memory leaks and it interoperates very poorly with anything else. (As if games had to interoperate at all…) Whereas server software can’t crash at all, can’t have any memory leaks, and usually has to interoperate with various other programs. These conditions imply that server software has a much higher return from being open than games do.

    Which is exactly what you both seem to be saying. ;-)

  64. Looks like a very good point, mon frère, but I wouldn’t be able to refute it if it were wrong, because I have nowhere near the depth of knowledge of serverland that I do of desktopland. Still damn interesting, though.

  65. Bryan, your points are valid, but I was referring to the platform for games and not games themselves. I don’t know about Pete; it seems like he was approaching from a different angle.

    Windows is the most compelling platform for people who are interested mainly in playing cutting edge games. It also has an edge against Linux when it comes to being a platform for people who want to get their work done. For hackers and people interested in writing server applications, Linux has a definite edge. (BSD operating systems, likely, even more so.) Windows is a more compelling platform for more people because of, instead of in spite of, Microsoft’s dirty business practices.

    We’ve got RMS in one corner arguing that all software must be free for moral reasons. And we’ve got ESR over here, arguing that free software is better for technical reasons. What you find in the world of business, and what Pete and I have been saying, is that unless you are a true geek, you don’t care about any of that stuff. You care whether it’s usable and offers you value for your money — real or perceived. Social reasons, in short; and as we all know, the geek contingent of society is by definition short on social capability.

    That’s why Microsoft wins.

    (Disclaimer: I *am* a true geek, and for all the computer work I do I find it completely unpalatable to use a ‘Doze box, as they simply are not reliable enough to be there any time I need them. Nevertheless, what I feel personally as a geek, and what I observe in the business world are two different things; and I wish more geeks had the nous to understand that their philosophies and methodologies do not translate to the “real world” and may never so translate.)

  66. Another social issue that I forgot to mention is that developers always know exactly what they’re targeting simply by going to MSDN and keeping up to date. In that respect, Microsoft’s monopoly has been advantageous in building a much more compelling platform for developers to target, and hence, for users to use, than Linux or the BSDs.

  67. /me nods in agreement

    Come to think of it, what exactly are MS’s dirty business practices? I remember hating them growing up, but I wonder if that was just because that’s what all the cool geeks did. Something about Java, I barely recall.

  68. >> Come to think of it, what exactly are MS’s dirty business practices?

    Well, the practice of using monopolistic tactics to attempt to stifle competing products rather than competing on merit. They do not do this much anymore, but I heard they tried to kill all other servers besides IIS on the NT platform by changing the NT Workstation EULA to not allow for use as a server. I heard that the only difference between NT Server and Workstation was that the Server version included IIS, but some companies used NT Workstation and a competing web daemon (Netscape Server?) to create a fully functional web server for several hundred dollars less than the cost of an NT Server license. Instead of lowering prices, Microsoft changed the EULA to disallow NT Workstation from hosting any servers, so if a business wanted to run a competing web server, it had to pay for the competing software in addition to NT Server, which killed a lot of interest in commercial webservers. Now, Apache works fine under NT/2000/XP, I heard it had problems under NT in those days. Also, there was the decision to integrate IE into the operating system, and the decision to offer manufacturer discounts only to businesses who ONLY shipped desktops running Microsoft Windows operating systems.

  69. Yeah, that’s pretty lame, although hardly the death-star building drama I would have expected. I think ESR is right that MS ought to get the bird for doing stuff like that, but bringing the DOJ in is sand blasting a saltine (and anti-libertarian, if that’s your thing).

  70. >>Yeah, that’s pretty lame, although hardly the death-star building drama I would have expected.

    Microsoft has probably made more decisions which some people do not like, but I listed the best examples that I remembered. Many other tech companies (and companies specializing in other areas) have made decisions that I (and others) do not like; Microsoft is not the ‘Evil Empire’ in the technology field, and anyone who labels it as such betrays ignorance of the past and the present.

    >> I think ESR is right that MS ought to get the bird for doing stuff like that, but bringing the DOJ in is sand blasting a saltine (and anti-libertarian, if that’s your thing).

    I agree that the DOJ should not have interfered with Microsoft’s business. I also do not agree with the EU’s decision to force Microsoft to sell Windows Media Player separately from Windows; I do not think the un-bundling will help any competing proprietary media players, since the best media players available on Windows (in my opinion) are VLC and Media Player Classic, which are both Free/free. I use and recommend VLC on Windows, though my overall preference is MPlayer.

  71. >Well, the practice of using monopolistic tactics to attempt to stifle competing products rather than competing on merit. They do not do this much anymore, but I heard they tried to kill all other servers besides IIS on the NT platform by changing the NT Workstation EULA to not allow for use as a server.

    As I recall, it also affected you using NT Workstation to run IIS as well – understandably, Microsoft wanted you to run NT Server rather than Workstation; IIRC, there was a hard-coded 10-connection limit on workstation (this affected Microsoft products as well).

    >I heard that the only difference between NT Server and Workstation was that the Server version included IIS

    Fairly sure that’s not true; at the time IIS was available via the NT4 Option Pack. You could download or order this from Microsoft, but it was often included in the NT Server or Workstation CD box; or you got the latest Service Pack instead, or both. I seem to recall a Frontpage CD was another favourite to be included. It was only with Windows 2000 that IIS was included with the OS (both Professional and Server), but it’s not installed by default.

    >but some companies used NT Workstation and a competing web daemon (Netscape Server?) to create a fully functional web server for several hundred dollars less than the cost of an NT Server license. Instead of lowering prices, Microsoft changed the EULA to disallow NT Workstation from hosting any servers, so if a business wanted to run a competing web server, it had to pay for the competing software in addition to NT Server, which killed a lot of interest in commercial webservers.

    What killed other webservers was IIS4 being a decent web server with features comparable to other vendors’ offerings (being free didn’t hurt), but with the major advantage of introducing ASP (IIS2 was pretty much forgettable, and 3 wasn’t great either). ASP made writing database-driven websites far easier than the alternatives did, and it was easily extendable via COM (indeed, a huge market of COM components for ASP sprouted up).

    Netscape Server was primarily designed to run on UNIX and never really worked as well on NT (most people ran it on NT if they used SQL Server as drivers for SQL Server on UNIX were pretty much non-existent or not very good). Server Side JavaScript (like ASP / PHP, but well before them, the first real attempt that I know of at something better than CGI) was very good for the time, but ASP was easier to use, develop and extend (not to mention that SSJS also had memory management issues). Java was supported as well, but as I recall it was so much more complicated than just using SSJS (for limited benefit) that it wasn’t worth the effort.

    >Now, Apache works fine under NT/2000/XP, I heard it had problems under NT in those days.

    Apache was designed to run under UNIX; it required some architecture changes to run well under NT IIRC.

    >Also, there was the decision to integrate IE into the operating system

    I don’t think it was the integration per se, but the other things they did to try and ensure that users wouldn’t download and install Netscape, or have vendors like Dell install it before shipping out a PC.

    >and the decision to offer manufacturer discounts only to businesses who ONLY shipped desktops running Microsoft Windows operating systems.

    No argument there.

  72. As far as evil business practices go, a good chunk of that probably goes back to the Halloween Documents. (Perhaps you remember them? If not, see here. The first 3, which are actually internal memos leaked from Microsoft, along with several other commentaries and debunkings of certain press releases, PR campaigns, etc., are all there.)

  73. OK, ESR, here’s something on an unrelated note: how the hell did you come up with the politics of J. Random Hacker as being “moderate-to-neoconservative”? I just went browsing through a bunch of hacker blogs for the Gnome project, and it’s friggin’ sickening! As far as I can tell, they’re all damn smelly Che Guevara cock sucking hippies!!!

    Not putting you on the spot, just curious as to what other hackers might have a blog I can read without wanting to kill things.

  74. Hey, Pete, I’ve wanted to know the same thing for some time. Hackers, like most intellectuals, tend to be extreme leftists, ESR and perhaps David Gelernter being the preeminent exceptions that I can think of. In fact, recently RMS gave a speech on free software in which he lionized Hugo Chavez as someone who stands up to the corporate hegemon.

  75. I think Peter and others have outlined the real failing of Open Source: it’s a model that just doesn’t work outside of a narrowly defined set of conditions, conditions that permit decent programmers with spare time to contribute to infrastructure projects, such as the Linux kernel.

    If esr’s claim that “open source scales” were actually true, then the BSD systems would dominate the landscape, since these are arguably both more “open” than Linux and more “business friendly,” by way of their less-restrictive license(s).

    But they don’t, and it would be good to understand ‘why’. Part of the reason is that “Open Source” is a marketing term, invented “because people get confused by (the word) Free”, and promoted and promulgated by people who wanted to promote themselves as “heroes of the revolution”.

    In the meantime, RMS’ original goal (a system on which he could write free programs) has largely been realized, and RMS has moved on to issues (such as trusted computing and DRM) that would place restrictions on “GNU/Linux”.

    Free Software has largely reached its goal(s). Has Open Source, or does OSS have to first capsize Oracle, SAP and Microsoft?

  76. As a user, I feel like a table tennis ball, bouncing between open and closed source.

    Early 2004, I bought some uber-hardware: a 3GHz hyperthreading proc, 512M of memory and a SATA HDD – the SATA bus is so amazingly fast that when installing something from CD, the 52X CD-ROM was the bottleneck of the process… Excel started in less than two seconds. I was delighted. Half a year later, Windows worked about three times slower than before, despite my best efforts to protect it against spyware etc. I was furious.

    So I decided to pay more attention to that neglected partition of the disk where an outdated SUSE 7.3 was installed; I deleted it and installed UHU Linux. It mostly worked. However, UHU has a small team and packages are often quite outdated, so I decided to go for Gentoo, because I bought into that “it’s sooooo fast” urban legend. It wasn’t noticeably faster than binary distros, although compiling the whole of KDE or OOo, with all dependencies, from one command was an amazing enough feature to make me go wow. But I couldn’t make even basic things like USB sticks work, so I deleted it and went for Ubuntu Breezy.

    Ubuntu worked amazingly well. But after a while I noticed its packages are outdated too, and when I compiled Ruby 1.8.4 from source, I couldn’t make Rails understand that it wasn’t the 1.8.3 package I had removed long ago, so Rails is not working for me. Disillusioned, I booted the neglected Windows partition and installed Instant Rails (Rails, Ruby, Apache, MySQL) with about three clicks. Wow. And writing code in FAR Manager is a lot easier than in MC – and don’t come at me with Emacs or vi: I just can’t imagine writing code in an editor that isn’t built into a two-panel commander application; when I try, I feel insecure, like a kid lost in a forest, too far from my directories and files. So now I boot into Linux for browsing the web and into Windows for programming, which is quite a strange situation, I think… when will this be resolved?
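    (In hindsight, the first thing to check in a mess like that is which interpreter the shell actually finds; something like the following would have shown whether my hand-built 1.8.4 was being shadowed by a leftover packaged Ruby. A sketch:)

    which ruby      # the ruby that scripts actually get
    ruby -v         # and its version
    type -a ruby    # every ruby on the PATH, in lookup order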

  77. Stephan,

    Buy a Mac; you’ll be really amused at how “it just works”. I’ve recently moved my primary desktop to a macmini with 2GB of memory. Previously it was a 3.0GHz hyper-threaded dual (RAID) SATA SFF box with 2GB. It too ran Gentoo, then Ubuntu.

    The Mac is *faster*, and things like Rails “just work”. Now I’m looking for a decent SATA-to-FireWire or USB enclosure so I can back up the drives in the SFF box, move them to the mini, and bring all that data back on-line.

    Linux belongs in a co-located rack, not on the desktop.

  78. >> The Mac is *faster*
    Benchmarks (non-Photoshop)?!

    >> Buy a Mac, you’ll be really amused at how “it just works”

    Try running Half Life 2 on your Intel iMac and tell me how easy it is. Sure, everything from Apple ‘just works’, but driver support for a lot of random hardware is absent. Granted, Linux has the same problems, but at least Linux has ndiswrapper.
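    (For the unfamiliar: ndiswrapper loads a vendor’s Windows NDIS driver under Linux when no native driver exists. Roughly, with the Broadcom .inf below serving only as the classic example:)

    ndiswrapper -i bcmwl5.inf   # register the Windows driver's .inf file
    ndiswrapper -l              # list installed drivers, check the hardware is seen
    modprobe ndiswrapper        # load the kernel module that wraps it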

  79. BeOS got crushed due to marketing (and market-driven) forces.

    The first marketing force was that if you were a screwdriver shop building PCs, you pretty much have to sell them with Windows on them. Fine – make them dual boot.

    Except if you do this, your price per copy of Windows doubles – and it’s already the most expensive thing in the PC you’re building. It took a court injunction to prevent Microsoft from pulling this stunt on Dell.

    BeOS is a technical marvel, it’s an absolutely beautiful thing. The BeFS kicks the asses of every other OS filesystem so hard that they wonder how they got into orbit without noticing.

    This shows Microsoft’s anticompetitive business practices… and it shows some of the fundamental problems with screwdriver-shop economics, where the sole determinant is price, price, price. (We eventually sold single-boot machines with a partition ready for BeOS to be installed, and a copy of the OS CD in the CD-ROM drive…)

  80. User Interface Development:

    Eric, having read your “howtos” on upgrading to FC5, I am struck by this set of economics.

    Your time is valued at, what, about $40, $50 an hour?

    At a guess, you spent about 4 hours on upgrading that software to the point where everything worked as expected. So, you spent about $160+ on upgrading a free OS. (This isn’t counting the time spent making an ISO image you could install from.)

    When I upgraded from Win2K to WinXP, the CD cost me $79. My time is valued at about $15/hour – and I spent 45 minutes, total, doing the upgrade. Round that to a full hour. Call it $95 total.

    Open Source is free if your time is worth nothing.

    I had NONE of the device driver issues you specified in your HowTo. Getting a DVD to play was simply a matter of putting it into the drive and pressing play. Every piece of hardware I own, from my ancient Wacom tablet to my USB printer, worked fine the first time.

    I didn’t have to manually search the web for updated drivers, and hope that someone out there had been pissed off enough to write them.

    To me, a computer is like a toaster. And I’m /far/ more typical of most users than you are. I’m not a programmer.

    Closed Source Development has one edge that Open Source Development never acknowledges.

    They have to please their customers. They understand that 90% of their customers are NOT programmers. They’re writers, graphic artists, musicians, architects, lawyers, draftsmen, accountants and more… but they aren’t programmers. They have to make something that a non-programmer can use.

    Open Source coders can get very impressive results by catering to the programmer monoculture.

    And, guess what…

    Developing software to please a customer is /FAR/ more difficult than developing software to please a programmer. Fixing stupid luser complaints is a lot less sexy and a lot more drudgery than coming up with that way clever hack.

    You actually have to pay attention to user interface issues. Forcing a user to type:

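    # the first command installs a package that defines the livna repository;
    # the second then fetches the madwifi driver from that repository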
    rpm -ivh http://rpm.livna.org/livna-release-5.rpm
    yum install madwifi

    (which is, in essence, telling them to install a package that sets up a third-party repository, then run a second command to pull the driver they want out of it…)

    is NOT an acceptable answer.

    You have to actually write manuals for someone with an 8th grade reading level. Which means associating with someone with an 8th grade reading level, and treating their confusion as a problem YOU have to solve.

    You have to write tutorials, and examples…and that means you have to actually FINISH the project, rather than stop developing it because it does what you want it to do.

    I’ve used the GIMP. I’ve used Paintshop Pro, I’ve used CorelPaint.

    I use Photoshop for anything that matters.

    Of the four programs mentioned, the GIMP is by far and away the least useful to someone who actually does press work. After all, you’re only going to use this app for web stuff; who needs CMYK and SWOP? The font rendering, recognition and embedding (which is critical in sending files to press) sucks the genitals of viridian donkeys.

    There is NOTHING Open Source that competes with Illustrator. To be fair, there’s nothing closed source that competes with Illustrator.

    To the Open Source community, TeX and LaTeX are all you need for laying out books – which is akin to saying that DOOM 3 is played best on a 240×320 green screen monitor with a command line interface.

    Give me InDesign any day.

    Open Source software (not OSs, the stuff you actually buy a computer FOR) is generally harder to use, less capable, crashes more, and takes longer to do the same things.

    Case in point – OpenOffice.org’s spreadsheet is significantly slower at doing the things I use every day than Excel is. When it can open them at all. The word processing app does not have a particularly good autosave, and crashes regularly on long documents with indexing notes.

    Thunderbird is a great web browser – but it’s a great web browser because there are two competing closed source projects (Opera and IE) that it can steal features from.

    In the end-user application space (and ultimately, an OS’s window manager is an end-user application), I see far more innovation in closed source development than I do in the open source model.

    When Vista is released, I’m willing to wager that the cycle will be:

    “It sucks”

    four months later

    “Hey, I hacked my copy of KDE to do something like that Vista three dee window navigation trick.”

  81. 1) OO.o sucks because it originated as Sun’s StarOffice — a very much closed-source project. It’s still working out of the “I’m crap” stage, like Mozilla did for several years after 1998 while it got basically rewritten from scratch.

    If anything, this illustrates that throwing open a huge project late in its life is *worse* than having it open from the beginning, because you’ll get a long period of not much being added or fixed. It takes people a while to get accustomed to a huge base of code (the amount of code in FF that I’d have to figure out to be able to hack on it still boggles my mind, and it’s nowhere near as big as OO.o).

    2) Thunderbird is a mail client, not a web browser. ;-P

    As far as web browsers go, perhaps Opera had tabs or something close to them (MDI) before FF did, although I don’t remember all the way back to FF 0.6 or whatever it was, so I don’t know for sure. But IE (7) stole tabs, RSS, and a bunch of other things from other browsers, not the other way around. Mozilla had a crash reporter long before I remember Windows getting one. And what other browser has something like Flashblock? (Well, IE6 on XP will soon, because Microsoft conceded the patent suit to Eolas, so you’ll have to click on any in-HTML ActiveX control to be able to use it. But that’s several years after Flashblock was released; it was hardly copied.)

    3) Doesn’t XGL *already* do “that Vista 3D window nav trick”? I’ve heard a lot of good things about it from people that like eye candy (though I am not one of them). And unlike Vista, XGL actually *exists*, now; it hasn’t been pushed back to Q1 2007.

  82. >>When I upgraded from Win2K to WinXP, the CD cost me $79. My time is valued at about $15/hour – and I spent 45 minutes, total, doing the upgrade. Round that to a full hour. Call it $95 total.

    This is why I do not like distributions that prefer a complete system reinstall; I prefer a system that I install once and then naturally upgrade packages, so I can spend ~5 minutes actually upgrading the system (while doing other things). Plus, people rarely stick with the base software that comes with Windows, so you can spend several hours installing all the software you need and customizing it; this is the same with ALL systems.
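    (On an apt-based system, for instance, the routine upgrade this describes is just two commands; a sketch:)

    apt-get update         # refresh the package lists
    apt-get dist-upgrade   # upgrade everything installed, resolving changed dependencies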

    >>Give me InDesign any day.
    I once worked with one of the Adobe products to lay out my high school literary magazine, and I spent most of my time fighting with the software to make it do what I wanted. Document-style programming languages seem a much more natural way of expressing what you want, especially if you have poor motor skills, like me.

    >> Thunderbird is a great web browser – but it’s a great web browser because there are two competing closed source projects (Opera and IE) that it can steal features from.

    Thunderbird is a crappy web browser; you have to send html files to your email account to make Thunderbird render them. Firefox is a much better browser :)
    I do not think that Firefox has leeched nearly *ANYTHING* from IE that IE did not leech from someplace else. (Netscape? Mosaic?) Nor do I think any browser had anything resembling Adblock, pop-up blocking, or NoScript before Mozilla/Firefox; and there are probably many other Firefox-only features in the Extensions section.

    >>“Hey, I hacked my copy of KDE to do something like that Vista three dee window navigation trick.”
    I believe that Desktop Environments can already do this; Microsoft appears to be the leech here.

  83. You miss my point.

    Give me software that I can use to do the things I need to get done, that doesn’t require me to be a developer to use it, and I will.

    The comment of:

    > Document-style programming languages seem a much more natural way of expressing what you want, especially if you have poor motor skills, like me.

    Is EXACTLY the mindset I find illustrative of the core problem.

    You want the user to adapt to the tool, not adapt the tool to what the user feels is appropriate to get the job done. I’ve used text-flow-and-markup based layout programs. If you want to do something remotely challenging on a layout, or balance columns, it’s like fighting ants with a hammer.

    Open Source development rarely has an appreciable incentive to A) research what the end user wants or needs for a work flow or B) throw resources into making this sort of research turn into actual usability code.

    After all, who cares if it’s arcane? The users can tinker with the code and get what they want – aside from the vast majority of users who aren’t coders, who end up just as beholden to their coding wonks as they would be to the Microsoft help desk.

  84. Hmm… random observations.

    – I don’t think any one person can really Know The Score. This situation is just too vast and complex. The most we can say is that we’ve had 8 years of Linux, and MS still runs the desktop scene, so OSS isn’t the end all be all — or if it is, it moves like a creeping jesus.

    – This conversation really indicates just how much user experience can vary. I first made the switch to linux after my Win98 partition shit a brick for the last time. I used to have my hard drive split in half, with one partition dedicated to my data and the other to the OS and applications. (This was prior to using any sort of Unix, so I didn’t know that I wasn’t actually the inventor of this clever idea.) The reason was because I had to reformat and reinstall Windows about once every 1-2 months. When I went to Linux (RedHat Linux 7.2, to be exact), that just wasn’t the case. Apart from hardware support, which I fixed with about $50 worth of parts, everything was vastly more reliable. I have never once had to format a Linux partition because it has imploded on itself — not even close.
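    (The Unix incarnation of that clever idea is simply giving /home its own partition, so that reinstalling / never touches your data; an illustrative /etc/fstab, with made-up device names:)

    # / gets wiped and reinstalled at will; /home survives untouched
    /dev/hda1   /       ext3   defaults   0 1
    /dev/hda2   /home   ext3   defaults   0 2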

    Fast forward to the present, and I haven’t had any problems with Windows XP to speak of — once I got it installed. That was the most abysmal thing I have gone through in recent memory. But now that it’s up and running, it’s been fine. However, in the few weeks I’ve been running Windows, I’ve also been reminded a few times of what a tremendous pain it is to deal with crashing apps compared to Linux. In fact, one time, an *application* crashed and brought down my entire system — I only ever experienced hardlocks on Linux when fucking around with my kernel. But everything else is peachy, it boots up faster, and it runs all the software I want it to run, it just comes up short in the “reach into my guts” department.

    OSX, on the other hand, has been a ceaseless pain in my ass. The interface feels sluggish to me, and I vastly prefer the sharper rendering of Windows to the sorta fuzzy Nintendo64 style anti-aliasing that’s pervasive on the Mac. The aesthetics also wear on me, all those damn pin stripes, and I find the interface to be a cluttered mess. And I *hate* that damn menu bar at the top paradigm, it drives me bonkers. Not to mention I can never be sure if, when I close a window, the application will stay resident in the background or terminate. And I seem to have the touch of death with Macs, because mine locks up 2-3 times a week. The last time this happened, it occurred just from ejecting a CD.

    – As I’ve said before, platforms don’t matter, it’s what’s on the platforms that matter. Shenpen is in the bizarre situation he is because, for him at least, one platform provides a better development suite, and the other provides a better day-to-day suite. It’s only when platforms match each other for application/hardware support that the differences in the platforms themselves becomes an issue.

    – ESR did an analysis of “open source” vs. “free software” nomenclature. While I don’t quarrel with the results, they just don’t map to my actual experience with other humans in this field. It has been nothing but the hard left, as far as I can see, and many more people align themselves with the GNU side of things than with OSI. It seems that the OSI terms are used out of pure convenience, but my experience (YMMV) has been that the dominant ideology is straight up stick-it-to-the-man GNU/hippie bitches. If what Jeff said about RMS and Hugo Chavez is true… holy fuck. Come to think of it, I don’t think I’ll ever go back to Linux until the hippies migrate to something else. I will never forget the words of a particularly spectacular asshole in #LAD who told me that “9/11 just gave you bigots what you deserved.” A month or so later, he sent me a patch for Specimen and asked me to add some features.

    – I didn’t go into Linux as a GNU/fag, but I basically collapsed under peer pressure and became one. In retrospect, it is *amazing* just how much pressure I felt to walk the party line when I was hanging out in *any* of the community forums. I now feel that that ideology is absolutely poisonous and virulent, and so long as it remains functionally indistinguishable from communism, I will regard it as such.

    I hate to admit it, but the truth is that I feel like I’ve wasted a good deal of the past 5 years of my life because of this ideology. Because I couldn’t watch movies without behaving immorally. And even if it was a morally distributed movie (like revolutionOS), I couldn’t watch it on the couch because my DVD player ran immoral firmware. And I couldn’t use the microwave for the same reason. And I couldn’t listen to the radio and buy CDs because that too would be immoral. And I couldn’t play games because that would be immoral. Basically, the majority of what constituted my previous life — a completely noncoercive existence — was now immoral.

    Geeks are never going to be the big man on campus, and I am more than cool with that — I would never want to be that guy, anyway. But I couldn’t even relate to most other geeks because of their immoral practices. That bitter pill I swallowed had me spending the last couple years of high school, and the first few years of college, in an almost complete state of social isolation. Hell, I couldn’t even relate to *other* hippie douche bags because they were all listening to mp3s, which was immoral… I was so far removed from normal life that I now think this dang fucking bullshit agenda I signed up for was the predominant factor in what was an extremely fucked up existence for a while — I just could not have been right in the head. (Trust me, I need to write a book or something about this — at least the parts that won’t get me a jail sentence.) Hope that’s not TMI :-\

    – I really resonate with the Just For Fun mentality. That just seems right. If the fun ends up being profitable, that’s obviously really cool — but with no guarantee of monetary return, I think the idea is to put on some chillaxing grooves such as the Eagles and just have fun being a nerd and writing/tinkering with OSS.

    – I also really resonate with enabling the poor and the young. As a younger lad, I was neck deep in the warez scene, and I’m not so sure I regret it. I learned an incredible amount with my stolen copies of MSVC++, Photoshop, Flash, Sonar, etc. There was simply no way I could have bought all the software that I used, and I certainly didn’t profit at the expense of any of the creators. I did greatly increase my personal worth as human capital, though, and now that I do make money I’m buying all the software I use. Companies like Jasc seemed to be aware of this sort of ecosystem: kids stealing their software, learning from it, then growing up to buy it, getting hired using it, and then negotiating a site license on behalf of their company. PaintShop Pro never had any sort of copy protection to speak of.

    But things appear to be changing, and as warez becomes easier and easier to come by, the software producers can’t assume that only the gifted kids who really bust their asses to learn will be acquiring it — now there really is a significant problem of businesses just plain stealing their software because it’s so damn easy to do. So it looks like the old ways might die, and I’m saddened by that. But OSS could give kids an alternative avenue to learn from. Maybe the tools won’t ever be as feature-replete and powerful as the proprietary offerings, but they can certainly do a lot. The fact that they might not be as usable doesn’t really matter. And all the free development tools and the OSS culture mean that kids these days can learn even more in the realm of software development.

    Back in my day, there wasn’t really any collaboration going on since everything was illegal. Any cracks you came up with got distributed under an alias, and the only way you’d get any sort of joint effort would be with a tightly knit group. It was all cathedral style. But nowadays, any kid can install linux, download the gimp source code, fire up emacs, and start working on that feature he really wants — and learn a helluva lot in the process.

    In this regard, I think OSS is a wonderful thing. But GNU needs to get the FUCK out.

    – Xara can compete with Illustrator very effectively — in fact, it does everything live, which is a huge plus. And it’s now gone open source on Linux and OSX, so who knows what the future holds.

    – Is XGL/Xegl plug-n-play yet? I’m pretty sure it requires some fierce hacker-fu to get going. But regardless, it’ll probably be refined and standard before Vista comes out. And the 3d desktop has been on Linux for quite some time, although it hasn’t been very popular (just google).

  85. >Your time is valued at, what, about $40, $50 an hour?

    $150-$300/hr, depending on what I’m doing and the contract term.

    >At a guess, you spent about 4 hours on upgrading that software to the
    point where everything worked as expected. So, you spent about $160+
    on upgrading a free OS. (This isn’t counting the time spent making an
    ISO image you could install from.)

    Good guess on the time. Most of the installation stuff happened in the background while I was doing something else.

    >Open Source is free if your time is worth nothing.

    When you’re totalling up life-cycle costs, don’t forget to include time
    lost on Windows systems coping with spyware/malware/worms, random crashes,
    database corruption due to the crock-of-shit nature of Access, and the
    well-known “Your mouse has moved. Windows must be rebooted for this change
    to take effect.” phenomenon :-).

    The numbers, compiled by people who pay system-administrator salaries,
    are pretty clear; Windows is, over its entire life cycle, a
    far bigger time sink than Linux. The pain is distributed
    differently so it seems less, but there’s more pain.

    This, however, is not to detract from your main point.

    >They have to please their customers. They understand that 90% of
    their customers are NOT programmers. They’re writers, graphic
    artists, musicians, architects, lawyers, draftsman, accountants and
    more…but they aren’t programmers. They have to make something that a
    non-programmer can use.

    You’re right, and it’s a point I’ve spent the last three years
    hammering home to open-source developers. We are not yet
    good enough at UI design and identifying the needs of non-technical
    users.

    The good news is that there is nothing inherent in open-source development
    that makes this so. It’s a cultural problem with open-source developers, a
    legacy of history.

    I just made a lot of trouble on the Fedora development list by insisting
    that we need to support MP3 and other proprietary codecs, even though
    they’re not 100% open-source pure, because ordinary users want them and
    won’t take seriously any OS that doesn’t support them.

    Open-source developers are waking up to these issues. Slowly. I’m pushing
    as hard as I can.

    There will come a day when most open-source programmers realize that
    designing for end-users is the last frontier, because all the back-room
    problems have been solved (or at least claimed by projects that have
    as much help as they can use). I first predicted this in 1999 as part
    of a general trend of open source moving up the stack towards the user;
    reality has tracked my expectations fairly well since.

    When we really get focused on this problem, we will leave closed-source
    applications in the dust. The harbingers are already here — have
    you tried Kmail or Audacity?

    You’ll do better, in arguing with open-source hackers other than me, if
    you challenge them by framing good UI design as the hardest engineering
    problem there is. Personally I don’t actually think this is true, but
    it’s a defensible position — and nothing elicits peak performance
    from a hacker like saying “Betcha you can’t do that!”.

  86. Eric, the problem with MP3 is that it is patented, and therefore illegal to use without paying a license fee. Here again, open source loses because it is not possible to use patented algorithms and technologies in a manner compatible with the open source ethos. With closed source, part of what you pay for the software goes to cover the license fees for such technology.

    And as time goes on and people demand a rich, hi-def media experience from their computers, they will want to use the patented technologies that companies like Sony and Microsoft have successfully established as standards — not weak-sauce OSS “alternatives” like Ogg that no one uses. The Stallman communist-revolutionary approach of saying “no” to software patents and trying to undermine them with GPL3 will get no play in the marketplace. That means Windows and Macintosh will be the only legitimate media platforms for the foreseeable future.

  87. One of the biggest problems facing open source software is that the diversity of applications can sometimes lead to an NIH (not-invented-here) policy. To see evidence for this theory, one should examine the iPod-Linux interoperability efforts. (NOTE: I do not own an iPod, so I cannot easily verify the following information.) One can easily mount an iPod and use it as a removable hard drive (if you have USB/Firewire and HFS+ support in your kernel). Unfortunately, Apple uses a proprietary interface for storing mp3s in a usable format, but there are several efforts to write software that stores mp3 files in an iPod-usable format: gtkpod, pypod, and one or two others whose names I cannot remember. Since this work involves reverse engineering, and since there are competing efforts, the projects do not interoperate well, so one must choose one and stick with it. If the projects could agree on a standard library and stick with it, the interoperability issues would disappear. Open Source software still faces enough competition from proprietary offerings that developers should not want to compete with each other (even indirectly).

  88. Again, disclaimer:

    I’m not interested in hi-def DVDs, I don’t play Counter-Strike or Call of Duty (ever since discovering Rez on the PS2 my entire notion of computational ludology has shifted entirely), and I could give a squirt of piss about all-singing, all-dancing 3D window effects. The Macintosh “Genie” effect was cute the first couple of times I saw it, but I still use WindowMaker which is rather no-frills for a WM, and these days I am seriously considering switching to Ratpoison on my Linux boxen, to help rein in the overlapping-window clutter that currently inheres in every major GUI.

    But that’s just me. I know I’m way out there on the spectrum of computer users.

    As for RMS and GNU, yes, Stallman is a commie nutcase, but he was one of the ones fighting the good fight back when virtually no one else would. So in that respect, in order to preserve some shred of the hacker culture and take it out to the world, the GNU movement was necessary. And if Richard Stallman didn’t exist we would have had to invent him. In its traditional form GNU may well be obsolete. But that’s not a judgement call I can make. I think abandoning Linux because the hippies use it is less well-considered than abandoning Windows because the PHBs use it. The GNU movement’s strong claims of immorality are based on the assumption that copyright and patent law are to be adhered to stringently at all times, so using proprietary software gives the vendor real power over you. A member of the warez community wouldn’t give a fuck. In fact, everyone who’s ever used MPlayer to play WMVs and DVDs on their Linux box is not only warezing the codecs used for those formats but violating the DMCA to boot. And there are quite a few of those!

  89. > In fact, everyone who’s ever used MPlayer to play WMVs and DVDs on their Linux box is not only warezing the codecs used for those formats but violating the DMCA to boot. And there are quite a few of those!

    I believe libdvdcss is an open source library, so using it is not pirating the codecs, but it still violates the DMCA.

  90. Phil, that’s true, however, many codecs in the WMV, AVI, QuickTime, and Real formats are proprietary with no open source equivalent. The MPlayer home page offers codec packs so you can play them; what they are doing is unauthorized distribution of proprietary software. WMV is something of a standard for internet video now, though Flash video may well supplant it, so it almost becomes necessary to take these kinds of steps in order to enjoy video on the internet with an open source OS.

  91. > at least Linux has ndiswrapper.

    ndiswrapper doesn’t work on non-x86 CPUs, since it depends on running a Windows binary (driver), and those are all x86.

    Now that Apple is moving to Intel, there is no real reason why an equivalent of ndiswrapper couldn’t appear on OS X as well.

    > benchmarks

    Builds of a large software package are all I really care about. The only games I’ve played for 20 years are ‘(g)cc’ and gdb. I do understand that games are important to a very large number of people, but I think platforms like the Playstation and Xbox are far better suited to gameplaying than workstations. (And yes, I know that the innards of the Xbox are essentially a PC.)

    The essential facts that the Xbox is a (Windows) PC and the next-gen Playstation runs linux show that Peter is quite likely correct in his assertion that if Open Source had been able to produce great games it would have. The only remaining conclusion is that it doesn’t because it can’t.

    > I just made a lot of trouble on the Fedora development list by insisting
    > that we need to support MP3 and other proprietary codecs, even though
    > they’re not 100% open-source pure, because ordinary users want them and
    > won’t take seriously any OS that doesn’t support them.

    Who else is astonished that esr, Mr. “catb”, “the world’s biggest open source advocate”, wants proprietary software in linux distributions? Apparently, for esr, convenience is more important than freedom.

    Eric, I suggest that “pulling” (leading) or actual shoulder-to-shoulder efforts are a lot more effective than “pushing as hard as (you) can”.

  92. >Who else is astonished that esr, Mr. “catb”, “the world’s biggest open source advocate”, wants proprietary software in linux distributions? Apparently, for esr, convenience is more important than freedom.

    Increasing our uptake rate is more important than 100% doctrinal purity, because if we don’t have overwhelming market power we won’t be able to put enough pressure on the hardware vendors and other monopolists to make them play nice.

  93. Apparently, for esr, convenience is more important than freedom.

    That’s not really relevant though, is it? Isn’t “freedom” all in the GNU camp? I don’t think I’ve *ever* heard ESR talk about the moral imperative of OSS.

  94. >>but I think platforms like Playstation and Xbox are far better suited to gameplaying than workstations. (And yes, I know that the innards of the Xbox are essentially a PC.)

    It depends on the game. Simple button-mashers, like the side-scrollers of yore, and platformers fit perfectly with a controller, but games that require more interface complexity (to supposedly provide more depth) require a mouse and keyboard. Also, the ability to save anywhere, a feature that is mostly seen on PC-only/original/simultaneous-release games, can be useful. Granted, the save-anywhere feature can easily be implemented by game developers, especially if the console has a hard drive, and keyboards & mice are available for consoles, but they are nonstandard and not well supported, since game developers are leery of taking advantage of features that not every consumer will have.

    > The essential facts that the Xbox is a (Windows) PC and the next-gen Playstation runs linux show that Peter is quite likely correct in his assertion that if Open Source had been able to produce great games it would have. The only remaining conclusion is that it doesn’t because it can’t.

    I am not sure how the Xbox architecture and the Playstation 3’s OS are related to the failure of open source games.

    Computer games are an interesting case, because they are a combination of software and ‘content.’ A game could not be played without an engine loading and interpreting all the data, but an engine is useless (to everyone except developers) without data files: dialogue, scripts, audio files, textures, models, etc. Content has not traditionally been ‘open’, and while some movements, such as Creative Commons, exist to promote ‘open-source’-style content licensing, I think that most commercial games will remain closed. However, a game developer could benefit from opening their engine if they have not licensed the engine from another developer and they do not plan to earn revenue by licensing their engine to other developers.

    Opening the engine would allow outside parties to help them quickly fix the stream of release-time bugs that has been a longstanding problem with many games. Also, an open source engine would provide them with free porting work, especially if the game is popular.

    Now, the best solution for having more games with open-sourced engines is to provide a quality open-sourced engine for developers to use. There are three projects (that I know of) that could (with some work) provide a quality open-source game engine: CrystalSpace (http://www.crystalspace3d.org/tikiwiki/tiki-view_articles.php), Irrlicht (http://irrlicht.sourceforge.net/), and Ogre3D (http://www.ogre3d.org/).

    There may never be an abundance of games whose content is released under a hippy-style ‘share-and-share-alike’, ‘information wants to be free’, Creative Commons license, but that does not mean that a sufficient engine could not power a commercial game. After all, when we (at least, when *I*) purchase games, I want *CONTENT*; the engine only provides a means to explore that content in an enriching way.

  95. >That’s not really relevant though, is it? Isn’t “freedom” all in the GNU camp? I don’t think I’ve *ever* heard ESR talk about the moral imperative of OSS.

    For the excellent reason that I decided it was better tactics to shut the fuck up about them. You can sell CEOs and CTOs much more effectively with instrumental rationality than you can with moral argument. Or, as I sometimes put it: when you’re trying to change the world, your own idealism is your own worst enemy. Fear, greed and vanity — other peoples’ fear and greed and vanity — is your best friend.

  96. There may never be an abundance of games whose content is released under a hippy-style ‘share-and-share-alike’, ‘information wants to be free’, Creative Commons license, but that does not mean that a sufficient engine could not power a commercial game. After all, when we (at least, when *I*) purchase games, I want *CONTENT*; the engine only provides a means to explore that content in an enriching way.

    Well, first of all, games (like music and movies) always have a novelty factor to them. With current Linux audio capabilities, you have more power at your disposal than the best studios of the seventies had. Unfortunately, the market for music that sounds like it was made in the seventies (in terms of production quality) is basically nil.

    Same goes for games. If you compare Supertux to the first Super Mario Bros. game, Supertux blows it away. Trouble is, there isn’t much interest in playing Super Mario Bros++; it’s a bit of a first-come-first-served thing, I think. I still enjoy playing the original SMB, but a lot of that is cultural nostalgia, just like I enjoy listening to old AC/DC (even though it’s before my time). But I don’t enjoy new games that look like they were made in 1982 any more than I like listening to music that sounds like it was recorded in 1977. And regardless of theory, proprietary engines eat the OSS offerings for breakfast.

    Continuing in that vein, sure, the theory is all well and good, but the proof is in the pudding. And the OSS pudding is BZFlag. Blargh. Take a look at how long vital state has been doing *nothing* to get an idea of just what a failure OSS is at making games.

  97. > With current Linux audio capabilities, you have more power at your disposal than the best studios of the seventies had. Unfortunately, the market for music that sounds like it was made in the seventies (in terms of production quality) is basically nill.

    I am not a musician, nor am I an audiophile (well, not really), but what exactly does proprietary software do that makes recordings sound better? Is it just that they support better quality recording equipment? Your average MP3 recording will sound a lot worse than a quality gramophone and a vinyl record (in good condition), so one could say that music sounded better in the pre-compression days. With my $50 headphones, both Doom 3 and Quake 4, under Linux, sound amazing, so it seems that OSS drivers and sound APIs (OSS & ALSA) do pretty well. I heard that the Linux Audigy 1/2 drivers did not support all the features that the Windows drivers supported, so I guess Linux does lag a little. ESR has previously mentioned how wonderful Audacity is, and I second his praise. Does Linux lag behind in creating synthesized music? I heard that Rosegarden was a decent synthesis tool.

    > But I don’t enjoy new games that look like they were made in 1982

    Well, it depends on the game. An enjoyable game will stand the test of time without its graphics. Computer hardware has rapidly progressed to the point where a high-end game will look dated a mere three years after its release, but a good game will still be playable decades from now. I still enjoy playing Ultima V, and I first played it in 2000 (a screenshot of the game can be found at http://www.c64gg.com/Images/U/Ultima_V_ingame.gif). Ultima VII is also one of my favorite games, and so are Chrono Trigger, Super Mario 64, Fallout, etc. (Yes, I am a CRPG nut! So sue me!) With the exception of Super Mario 64, I have only played those games within the last six years. Granted, open source does not have any games (yet, hopefully) to compare with those classics (okay, I will concede Nethack is cool, but I do not like dungeon crawls much).

  98. I am not a musician, nor am I an audiophile (well, not really), but what exactly does proprietary software do that makes recordings sound better?

    If you’re doing 100% recording, no synthesis to speak of, then Linux is basically on par with everything else, although our effects tend to suck, which is an extreme problem. Compression is the biggest factor in why modern music tends to “pop” more than old recordings, and Linux doesn’t have anything that can compete with the offerings from UAD. In fairness, there isn’t really anything that can compete with UAD, period, so that’s why I don’t really dock Linux that many points in this department.

    Now, if you want to do any sort of electronica under Linux whatsoever, that’s when things get blatantly horrible. The right tools for the job just are not there. Take a listen to http://lam.fugal.net sometime to see what I mean. The only halfway decent stuff, IMHO, comes from James Shuttleworth, who uses CheeseTracker for everything he does. CT is a great tracker, but tracking only really clicks with a very tiny percentage of musicians (I’m not one of them) — everyone else prefers stuff like Cubase, Orion, FruityLoops, etc. Linux has nothing to be taken seriously in that department.

    And let me reiterate with emphasis:

    “But I don’t enjoy new games that look like they were made in 1982.”

  99. Take a listen to http://lam.fugal.net sometime to see what I mean.

    I just did. My ears are still throbbing from listening to ‘Coding in PERL’. The melody was alright, but it sounded very fuzzy and distorted.

    everyone else prefers stuff like Cubase, Orion, FruityLoops, etc. Linux has nothing to be taken seriously in that department.

    From my (non musical artist) perspective, Studio to Go! http://www.studio-to-go.com/ looks rather interesting.

    “But I don’t enjoy new games that look like they were made in 1982.”

    Point taken.

  100. From my (non musical artist) perspective, Studio to Go! http://www.studio-to-go.com/ looks rather interesting.

    The idea is on the right track (same goes for MuSE), but the problem is the dang thing just doesn’t work right. Timing issues, stability, modulation, synthesizers (or the lack thereof) — it’s just missing the refinements necessary to make it actually usable.

    On a technical note, one of the primary reasons I left the linux audio development world was because of incessant featuritis. Rosegarden has been in development for over a decade and it still can’t reliably do the basics, for instance. Everybody just piles on the features without stopping to make sure that what’s there already works right. And it seems like the only people using the software are cool with that.

    It’s kind of like juggling. You might be able to handle three balls, but you throw in a fourth one and everything goes to hell. Likewise, it does no good to pile on all these features when the core functionality is non-functional. I’m actually getting quite physically mad just thinking about it now.

    Specimen was the first real program I ever wrote, and in the year and a half it existed, it never once had a problem with just not working — yet it is now (nearly) as functional as the built-in sampler in FruityLoops. It’s also only 20K lines of code. I’m going to come right out here and admit it — I do not really know what I’m doing when it comes to code. I don’t have years of experience to draw upon, and the only design book I ever read was “The Practice of Programming.” If I can do it, seriously, anybody should be able to. It’s just a matter of having the discipline to make sure that the code always works, and as far as I can tell, nobody has it and nobody cares >:O

  101. Well if you’re looking for code that always works, there’s Csound and CLM. Both of these programs are quite capable synthesizers. I’ve heard Csound-composed music, and while it may have been abstract it really didn’t lack in the quality department.

    They don’t have convenient interfaces. They are approximately to FruityLoops what LaTeX is to Quark. Flame them for that if you like, but they work. I think there are people producing more quality music with these programs than there are using Rosegarden. (I tried it a few years ago, and it was butt.)

  102. Some random samples of semi-knowledge and thought about audio
    software:

    1. compression

    >so one could say that music sounded better in the pre-compression
    days…

    vs.

    >Compression is the biggest factor in why modern music tends to pop
    more

    There seems to be some confusion here between data compression and
    dynamic range compression. Dynamic range compression, in combination
    with some standard effects (chorus, spatial effects etc.), adds to the
    `density’ (perceived loudness) of modern (pop) music. Its aesthetic
    value is debatable, but the tastes of the mass market are not. You can
    get away with two or three songs without this stuff, but if you want
    to sell music, you’d better get rid of the garage/bathroom sound. If
    Peter is right, closed source software is currently the cheapest
    solution.

    Of course, there are other, more `authentic’ ways that work entirely
    without digital processing, but they are not suited for every kind of
    music or instruments, and they are invariably more expensive. That’s
    why there are absolutely and proportionally more decent-sounding
    recordings today than in the 70s.

    If you are a Jazz trio, stuff the microphones halfway into your
    instruments/mouths, and use a valve amp. The `valve sound’ is
    basically a special kind of dynamic range compression that is an
    artefact of the distortions single-ended amplifiers produce.

    More generally, do a `live’ recording in a well-chosen environment,
    with the mics in exactly the right places and little or no
    processing. This is very difficult and mostly goes wrong.

    Data compression (more precisely: data reduction), on the other hand,
    requires the removal of information to be done effectively in the
    audio case, and generally results in poor sound. But incidentally,
    music that has undergone heavy digital processing before doesn’t seem
    to lose as much in the process as `raw’ recordings do. Synthesized
    music is even more mp3-friendly (if it is not too complex), which is
    not surprising if you know a little bit about audio encoding.
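
    To make the distinction concrete, here is a minimal sketch of a
    hard-knee dynamic range compressor in Python (a toy of my own with
    made-up parameters; real compressors also smooth the gain with
    attack and release envelopes, which this omits):

        # Samples are assumed normalized to [-1, 1].
        def compress(samples, threshold=0.5, ratio=4.0):
            out = []
            for s in samples:
                level = abs(s)
                if level > threshold:
                    # Scale everything above the threshold down by `ratio',
                    # squeezing loud peaks toward the threshold.
                    level = threshold + (level - threshold) / ratio
                out.append(level if s >= 0 else -level)
            return out

        # A 0.9 peak gets squeezed to roughly 0.6; quiet samples pass through.
        print(compress([0.1, -0.9, 0.5]))

    Squash the peaks like this, then turn the whole track up again
    (`makeup gain'), and the result sounds denser and louder. Data
    reduction a la mp3 is an entirely different operation: it throws
    away the information the ear is least likely to miss.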

    2. open-source audio software (Why is there none?)

    Interesting question, and I think it’s a special case and cannot be
    explained with the `general failure’ (if there is such a thing) of OSS
    on the desktop. The field is somewhat geeky (as viewed from the
    outside, audio guys and hackers don’t seem very far apart), highly
    technical, and requires robust, long-term solutions as well as tight
    interaction between developers and users. According to the Gospel of
    Eric, we should see first-class, open-source audio programs popping up
    everywhere. But we don’t.

    My gut-level explanation lies in the mindset and special kind of
    conservatism of the semi-professional audio folk. The culture still
    smells heavily of warez, C64s and Microsoft. Peter’s story seems to
    confirm this.

    It is standard practice there to share software, just as one copies
    and shares recordings and printed music. Friends share with friends,
    teachers with students. When it comes to making money with your music,
    you eventually start buying the stuff. I don’t see why this should
    fundamentally change as long as music itself isn’t usually `open
    source’. This analogy lacks mathematical rigor, but it explains partly
    why users and developers of audio software don’t give a shit about
    open-ness.

    As to the conservatism: Some people were (are?) using MS-DOS boxes as
    sequencers until well into the third millennium. It just worked, and
    there wasn’t much of an operating system which could mess up the
    timing. I don’t know if this paradigm is applicable to studio
    software, but maybe the Linux desktop just hasn’t yet been around long
    enough. It’s not too late to wait with Eric for the tide of open
    source to reach the audio realm.

    But maybe the question isn’t if it is possible to `sell’ open source
    to the audio world, but if the software that is used now would benefit
    from open source. The development of quality audio software is often
    done by a one-man or handful-of-men show (as Peter said). This suggests
    that the overall complexity of these programs is not overwhelming, so
    they may not need any more eyeballs, and it isn’t clear whose eyeballs
    that would be. The audio guys I know personally aren’t, despite the
    superficial semblance, really hackers. Bright, technology-affine, yes,
    but I wouldn’t trust them with more than a few lines of code, so the
    user-developers that are abundant in the field of, say, kernel
    development, may simply not exist. Additionally, the users can and do
    already ask for new features and bug fixes, but at the same time, they
    (even as hobbyists) do their work with a very serious attitude and
    know their tools very intimately, so they are happy to work around
    minor glitches.

    To cut a long story short, audio software probably just doesn’t suck
    badly enough. Nobody wants exciting features or new
    possibilities. Nobody wants superior or ethically clean development
    models, either. Most people couldn’t even figure out how to apply
    ethical standards to the annoying, stinking heap of shit that is
    computer software (those in the proprietary camp dare to call it a
    `product’). They just want something that doesn’t get in their way of
    solving the real problems.

    P.S.: from studio-to-go.com:

    How exactly is Studio to Go! licensed?

    Studio to Go! does not come with an End-User License Agreement. We
    believe the situation is simple and ethical: packages are either
    provided under an open-source license, or else normal copyright law
    applies.

    Congratulations. Where I live, this means `all rights
    reserved’. Technically, I’m not even allowed to use this crap.

  103. Very interesting points bru, and I’ve had thoughts on similar lines — but here’s something I can never figure out. Barring the country genre, musicians are overwhelmingly left-leaning. Google for pro-war music some time — one of the few hits I actually got was a liberal magazine trying to find any sort of pro-war music to balance out a piece it was going to do on contemporary war music (afaict, they found nothing).

    And it’s not just war where the politics of musicians are slanted to the left. There is an ungodly large number of songs out there equating capitalism with fascism. “Power to the people” through socialism is the general consensus. And even if it’s not sung about, in interviews with musicians it’s clear that conservatives are regarded as coarse, ignorant, and just plain stupid. I could go on, but I believe you get the idea.

    What’s really strange is that electronic music has historically been even more left than usual. Industrial bands in particular have been capitalizing off of anti-capitalist sentiments since their inception. Kill Switch Klick — Fascist Smash. KMFDM — A Drug Against War. Velvet Acid Christ — Dead Flesh. Each one of these is an important part of the industrial culture, and each one is a leftist anthem of some sort.

    So really, you’d figure that these anti-property types would rally around OSS. I was curious about this myself, and started poking around the KVR audio forums to see what the deal was. And the amazing thing is, there actually is tons of support for OSS — except for the real kind. What I mean is there is an endless stream of people singing the praises of OSS morality, but they don’t actually use it.

    And what makes things even stranger is that proprietary audio software has done an amazing amount to empower the starving artist. All of these modern production effects make it possible to get by without that multi-million dollar studio. With enough ingenuity and about $2K in gear (including computers, software, and instruments) you can produce a pro-quality record in your basement. This is absolutely amazing, and you’d figure it would create strong pro-capitalist sentiments — but it doesn’t. People tend to resent paying $400 for Reason, when doing things the old fashioned way (such that you don’t reward the Capitalist Pigs) would cost closer to $400,000.

    What’s even more bizarre is a trend I noticed in Linux audio of recommending the use of hardware in areas where software was lacking. In other words, of going backwards, technology wise. It kinda blows my mind — you’d figure the OSS guys would be championing the low-cost solutions, but such is not the case. “All you need is Ardour and a good recording environment.” But that’s actually a whole lot less helpful to the little guy than Sonar is.

    So we have anti-capitalist musicians, benefitting from the enabling progress of capitalism, singing the praises of open source software, but not using said software, and the producers of said software advocating the use of old-fashioned hardware to overcome deficiencies in the software, and said old-fashioned hardware taking us back to the bad old days when the costs of music production clamped down on the diversity of music.

    Man, I get hella confused when I think about it.

  104. Jeff, you’re absolutely right about Csound and CLM, and there’s also SuperCollider and Pd. I know that Aphex Twin uses Max/MSP, so the stuff is legit. But then, Aphex Twin sounds like Aphex Twin. I’m guilty of the sin of liking pop music (or at least pop-sounding music), and doing bread and butter stuff in programmable synthesis engines is a huge PITA. If all I want is a basic subtractive synth sound, which is usually all I want, I can just fire up DreamStation. And if I want a crazy experimental FM sound, I can fire up Sytrus. The kind of flexibility which justifies the existence of Pd et al is one which I personally have never had a need for — using those tools just slows me down by orders of magnitude.

    But they are good tools, no doubt.

  105. Pete, Eric has written in the past about the confusing paradox that open source doesn’t appeal to lefties, being an actual implementation of the sort of workers’ revolution that Marxists like to wank about. Maybe it’s because Marxist tenets are nothing more than a bunch of panchrestons, like the No True Scotsman fallacy. After the USSR fell, leftists defended their position by saying that Soviet Russia was not an implementation of true communism. By that theory open source wouldn’t be a true workers’ revolution. That way the wankers and ideologues still retain control over what qualifies as real progress.

    The fact that music professionals are lefties therefore won’t necessarily mean that open source will be a big deal to them. In fact, with the exception of some old tracker-scene guys, every music professional I’ve known — to a man — has been a staunch Macintosh supporter where their musical work intersects with computing. That kind of brings me to another paradox, which is capitalist companies who increase shareholder value by appealing to armchair revolutionaries, of which Apple is the king. I’m sure you’ve seen the old Trashdot trolling tactic in which trendy, happy raver types are shown using their PowerBooks and iPods, and contrasted against the dumpy, pudgy geeks in the Linux booth. Without a doubt, musicians buy into this mentality, especially if they are the pop/rock variety.

    It’s all social horseshit. High school lunch-table clique stuff. There’s no reason why open source can’t be much better than it is in virtually any area. But that would require solving the strong social problem of making everybody a bit geekier, that is, more able to think about problems at a deep and abstract level.

    See, Windows and Macintosh, as platforms, are not just a set of APIs and user interface components. They are also a set of best practices as to how applications should look and behave, and what kinds of models of user problems they should present. Linux and other traditional Unix-based OSes have their own set of best practices, too, most famously elucidated in ESR’s own The Art of Unix Programming. I’m going to cover a lot of territory previously trod by Edsger Dijkstra and Alan Kay here when I say that the Windows/Mac methodology is to present a model of the problem to the user that lets them get basic things done with a minimum of mental effort. Then they think they are competent, but the model breaks down as the system becomes more complex, and there’s little room for thinking outside the box. Since much of our society — our business practices, government, and even our music — is based on constructing boxes to contain our thoughts, in such a society Windows and Macintosh will always rule the roost. Their modality is about itemizing the things people want to do and then providing checklist features to match that list.

    Because the Linux modality is different. The Linux modality is all about choice, and I don’t mean KDE or GNOME, Konqueror or Mozilla, Perl or Python. The classical Linux/FOSS best practices consist of presenting a deep, abstract model to the user which they can then mold to suit their particular needs. There is a steep learning curve for as long as it takes to get used to the abstract model, and to some extent you must unlearn what you think you know about the problem domain in order to acquire the new model, but once you become facile with it, it yields dividends in terms of what you can do.

    This modality is imho superior, because I think anything that challenges the mind and presents new perspectives and deep models on human problems is a good thing for society. But it will not get anywhere as long as society is the way it is, which is a world wherein fast action and the appearance of achievement is preferred over deep grokking. To some extent, in order to create software that fits that world, you need a benevolent dictator to say “this is how things are going to be done”. Apple and Microsoft fill that role. There is no corresponding entity in the open-source Unix world. This is a good thing, but bad for open-source uptake in government, business, and other areas of life where social rules, not the ability to think, understand, and solve problems, dictate everything. The Microsoft monopoly exists because it is precisely what people want.

    As a side note, I’m throwing my own hat into the software-synth ring with a little creation of my own I call Valkyree. Not publicly available yet, it is written in Scheme and I plan on using it to experiment with generative music that’s actually listenable. Playing a lot of Rez and Lumines does weird things to how you perceive music. I don’t anticipate it being the choice of professionals any time soon, but I am going to have fun with it.

  106. Games like Rez are the main reason why I have such a soft spot in my heart for Sega. They have this ability to just come out of nowhere and develop completely novel gaming concepts — from Sonic to Jet Grind Radio to Rez to Rub Rabbits. But back to the topic at hand…

    I definitely agree with you on the evaluation that musicians are stuck in high school and they don’t realize it. It makes me wish I was born mathematically inclined, but the undeniable fact of the matter is that art is what I do best by a large measure. Thus I’m resigned to being among a crowd that will never cease to annoy me >:O Love art — fucking hate artists, though.

    But the bit about modalities and stuff, I think that’s over my head. Granted, I’m busy cramming it with economics for an exam at the moment, but I’d really have no way of telling if what you’re saying is true or not. But it does sound very intriguing.

    Let me know when you actually set Valkyree free. Between music videos, music, and video games, I’m getting a sense that we’re on the verge of Something New. Not quite sure what that is, obviously, but this is my gut talking — precision is not its strength. I’ve been developing this idea about art, in that it never really exists in vacuum, but this fact isn’t widely acknowledged. What makes a great song isn’t necessarily just the song, but the context in which it is heard. Would I love the old Sonic the Hedgehog soundtrack if I didn’t associate it with soaring at breakneck speeds through beautiful and exotic scenery? Would I have the same love of Weezer if I hadn’t seen the music video for Buddy Holly and thought, “Hey, I can relate to these dorks?” It’s almost an elephant in the room that the popularity of music is influenced by much more than just the music or the marketing — the mythos surrounding it is very important as well. Likewise for Rez — if you just heard the music on its own, would it impress you? Now that you’ve experienced the game, can you separate that sensation from the music, or will the music always ride the coattails of the greater experience?

    I don’t even want to think about how incoherent I must sound right now, but hey, that’s entertainment.

  107. Has Linux really had binary compatibility for six years now? I think libc5 boxen were still plentiful six years ago.

  108. Jeff,

    > By that theory open source wouldn’t be a true worker’s revolution.

    Most non-hackers are not aware of any theory behind software. More likely,…

    >It’s all social horseshit. High school lunch-table clique stuff

    …it comes down to conformism, which is universal, widespread and orthogonal to the political spectrum. It’s still puzzling, given that anarcho-syndicalism and anti-capitalism are responsible for whole genres of music.

  109. The next time esr claims expertise in some field, be sure to read this paper from the APA, which shows that self-confidence has a very strong negative correlation with actual ability:

    People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it.

  110. Not really — the thrust is that the incompetent will overestimate their abilities. Half the tests indicated a positive correlation between perceived and actual competence when score > 70. The other half of the tests showed that perceived competence stayed in the range of 60 to 70 regardless of actual competence. While this is strong evidence in favor of the hypothesis, it does not support the conclusion that confidence negatively correlates to competence.

    And what, exactly, do you have against ESR, anyway? You seem interminably pissed at him, for reasons I cannot fathom.

  111. Those who can, do; those who can’t join the marketing group, or otherwise engage in marketing-based activities.

    Is “Open Source” really just “Free Software” made safe for (“marketed to”) business, even though we “don’t tell them the whole truth”?

    I’m not pissed “at” ESR (I’d gladly buy him a beer), but I dislike his choice of action.

  112. >Those who can, do; those who can’t join the marketing group, or otherwise engage in marketing-based activities.

    Sometimes those who can do, do marketing anyway — that is, when we don’t want our marketing to be (a) nonexistent, (b) done by soulless droids, or (c) done by self-defeating zealots.

    >Is “Open Source” really just “Free Software” made safe for (”marketed to”) business, even though we “don’t tell them the whole truth”?

    Basically, yes. The only substantive difference is that people who use the term “open source” tend to be less willing to characterize proprietary software as being some kind of moral failure (as opposed to just a screwed-up way to do engineering). But even that is a rule with exceptions.

  113. >Those who can, do; those who can’t join the marketing group, or otherwise engage in marketing-based activities.

    metablablah. If you think ESR can’t do x, your argument should actually involve ESR and x, for the sake of stringency.

    >Is “Open Source” really just “Free Software” made safe for (”marketed to”) business[…]?

    The software is obviously the same, the incentives to `buy’ it are not. The open source campaign makes (theoretically) testable claims about software quality. If they are eventually proven wrong, we’ll have a problem. The truth will be out—“Open Source Is Crap, But Still Free, Evangelists Say”—but what difference would it make? Where is your problem with a little marketing campaign?

  114. Hey, I owned a ‘marketing’ hat once too. (The company had marketing in charge of product development. I owned two product families womb-to-tomb.)

    ESR is the #1 proponent of the marketing term “Open Source”. He promulgated it.

    Some of us think that morals count, and that “instrumental rationality” pales in the clear light of moral high ground.

    If you’re all about the development model producing superior software, then why care about “DRM” and GPLv3? Produce a superior MP3/MP4 codec and get it in Fedora/Debian/… Buy a Tivo and download the source to your heart’s content.

  115. ESR,

    good you mentioned KMail. Open Source apps usually tend to miss those little, small ideas that would be easy to implement; nobody really spends the effort to understand what users want. KMail is a brilliant exception. It has an amazing feature: the left/right arrows go to the previous/next mail, and the up/down arrows scroll the current mail, so you can read it fully. Such an easy, small, stupid thing, isn’t it? It still makes reading 100 mails a day a hell of a lot faster and more comfortable. And the surprise is that despite the millions spent on user interface research and end-user testing, it has never occurred to the developers of MS Outlook. Quite amazing.

    Such a shame that most of the FLOSS community doesn’t take KDE seriously – some people are still clinging to the old “Qt is not really free” bullshit even though it has long been GPL-ed. If there is ever a usability revolution in open source, it will come from them.

  116. “(ever since discovering Rez on the PS2 my entire notion of computational ludology has shifted entirely)”

    Jeff, I remember somebody mentioned here Rez as an example of a modern LISP application, was it you? I could only find a website on Rez in Japanese. Do you know of any English websites?

  117. Pete Bessman:

    well, I think no software currently around sounds like the analogue synths – the old 303s/909s or the newer stuff such as the Virus. You can tell what was created by decent machinery and what was done in software even on a CD, let alone on vinyl – I mean, just look at the first Café del Mar: Anima Sound System sticks out like shit on a dinner table. So I think it does not really make sense to compare crappy to even crappier. Better to wait until software catches up to the synths.

  118. OK, I’ve seen the light and it is vanilla and morals and pop culture and vapor 2.0 — and I have to admit that I absolutely don’t see how this could possibly support your moral high ground. Language barrier or something.

    But my simple question remains: What does `Free Software’ lose—what do you lose—if people are using it in the dim shadows of immorality?

  119. Jeff:

    about the Linux modality and understanding the problem domain – I think there are a lot of basic misunderstandings here. I have a friend who had been a coal miner – not exactly an intellectual position. He was still a clever guy, just not very educated. He got into a marketing school and started to work in this field. As he is clever, he quickly understood that real marketing – at least for B2B – is not about company logos and stupid ads but about efficient statistical analysis of data. So he learned the (for him quite foreign) ideas of databases, tables, and relations and created a customer database for himself in MS Access – a slow, painful, gradual process. Then he learned VBA and started to add features to his Access database. Later on, he quit his job and started to custom-build CRM databases for small companies, and now he has a good job as an internal consultant for a big-time CRM system at a bank.

    The point is that the learning curve was quite gradual, starting from a spreadsheet-like, simple one-table customer database. He did not have to understand the theory beforehand – when he had five tables or so, he hit some walls and asked a programmer friend some questions, and was directed to books about normalization, BCNF, and database design principles – much of which he had already guessed by himself, because every clever guy understands that anything that could be 1:N should be designed as 1:N, and that’s basically BCNF – so he picked up the theory when, and only when, he needed it for the practice.

    This is the general problem of FLOSS: premature optimization of human resources – forcing you to understand the theory of something before you actually need to. Even today, there is still no FLOSS equivalent of Access – Knorr’s Datenbank is a joke; if we stripped the business logic code from TinyERP, that might be something like a FLOSS Access, or maybe Rails might be something like that in the web-based way, but none are really close. To be able to develop a business/database application with FLOSS tools, you need to understand both programming and database design principles first and then, for example, look at PostgreSQL + PyGTK – but of course you have to write the code generators that turn tables into forms yourself… currently, FLOSS cannot turn a clever but not IT-savvy marketing guy into a CRM consultant/developer, because it forces users to understand the theory beforehand and does not let them start with something small but still useful – an ugly hack – and evolve their understanding only when the circumstances are ripe for it. It’s a premature optimization of human resources.

    And I just can’t understand it. I mean, how CAN you do without an equivalent of Access??? For most of my fellow programmers, the first program that was really used by people (and that we were paid for) was an Access database for managing the monthly tickets of the local gym, or the video cassette rental store of Uncle Joe, written when we were 16 years old. Or in Clipper, under MS-DOS, if one’s older. How could one do without it? I just don’t understand. I would never have become a programmer if I hadn’t seen around me that programming is not only a theoretical game for geeks, but something to solve the real problems of real people with – that it empowers me to solve Uncle Joe’s problems and therefore gives me a job in which I am useful to the society I live in, and paid and respected for it, and not an outcast or a fringe freak, but as normal a guy as a car mechanic or construction engineer. I just can’t imagine becoming a programmer without it.

    Of course, this problem is now almost a thing of the past, as Rails is evolving so fast that in two years it will duplicate Access on the web, I think, while still being a very flexible, very dynamic, ultra-fine, almost-LISP, uber-elegant environment (my glossary app lists the distinct first letters of terms to choose from with @first_letters = Articles.find( :all ).map { |article| article[0,1]}.uniq – now THAT’s elegant, I think, and it is a good example of why people become Rails junkies, eating, drinking and dreaming Rails). But as for the past, I am completely puzzled how one could start programming without Access.

  120. @Shenpen:

    For a long time, KDE was a huge construction site with lots of basically unusable applications that looked like poor attempts to reinvent the wheel and whose very existence seemed hardly justifiable. Things may be different now, but a little more focus wouldn’t have hurt. And when I’m looking at the schizophrenia of having `applications’, `KDE applications’ and `GNOME applications’ doing the same thing in only slightly different ways, I can’t help thinking that something, somewhere, has gone wrong.

  121. Shenpen:

    Some people are just comfortable with learning things the hard way, especially if the reward is mastering a really powerful tool. (You probably know that Hole Hawg story.) These same people may love to write dozens of little programs that aren’t strictly useful, but somehow `interesting’. At some point, they probably started to manage Uncle Joe’s video cassettes with some ugly ad-hoc database thing written from scratch. Not exactly elegant or efficient, but it worked for Uncle Joe and they learned a lot in the process of writing it. And they still smell overkill whenever somebody says `database’. So, yes, you can live without Access, and in the FLOSS camp, you have to.

    But this is mainly for historical reasons and doesn’t have to be this way. Maybe right now, after getting some of the basics right, the FLOSS world can afford to devote developer time to solutions that encourage more people to become developers.

  122. Shenpen, the fact that geeks make a game out of understanding and discovery is what enables them to leapfrog past the guy who’s focused on career advancement and only learns something radically new when forced to by job pressures – and to make themselves more useful, by orders of magnitude, in far less time.

    Back in the 80s most computers only came with BASIC. Sometimes people would buy (or steal) a dev kit for Forth, C, or Assembly. How ever did they get along without integrated tables, graphical relationship tools, and form design wizards? Barbarism!

    I’ll tell you how. It is called “hacking”, and while it is a theoretical game it is not merely so. Great discoveries can be made only by those who set out to discover. We as sophonts have been engaged in this game from our earliest days; it is by this means alone that we survive, and thrive, on this planet, in more varied conditions than any other animal is capable of doing. Stubbornness, and not lack of cleverness, is the only thing that separates hackers from auto mechanics or coal miners or marketing gurus.

    Let me give you an example: our old friend, the lambda expression. You may think of it as a useful tool, a neat little bit of syntactic sugar that lets you do something practical like sort your tables succinctly. What Church found was that lambda expressions were sufficient to describe all of computation. What Steele and Sussman found was that it was possible, and practical, to build optimizing compilers for von Neumann architectures for languages not essentially different from the lambda calculus, and even entirely new LC-evaluating CPU architectures. And what Montague found was that the relationship between syntax and semantics in natural language is not essentially different from that in formalisms like the LC. (The integrality of the lambda calculus to modern linguistics was brought to my attention recently by a friend of mine who isn't terribly interested in computing but has a deep and abiding passion for linguistics and cog sci.)
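
    To make that concrete, here is a sketch of Church's idea in Ruby lambdas (the helper names are mine): a numeral n is nothing but "apply f n times", so arithmetic falls out of functions alone.

    zero = ->(f) { ->(x) { x } }                        # apply f zero times
    succ = ->(n) { ->(f) { ->(x) { f.(n.(f).(x)) } } }  # one more application of f
    add  = ->(m) { ->(n) { ->(f) { ->(x) { m.(f).(n.(f).(x)) } } } }

    to_i  = ->(n) { n.(->(i) { i + 1 }).(0) }           # count the applications
    two   = succ.(succ.(zero))
    three = succ.(two)
    puts to_i.(add.(two).(three))                       # prints 5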

    So we see how one little concept, once grokked, leads to profound ramifications across disciplines. Except if you’re too busy fixating on Matters of Great Consequence like CRM, you might miss it.

    As for databases, they are just set theory. Middle school math. Grok this and SQL becomes a cinch.
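
    In Ruby terms the correspondence is nearly one-to-one; a toy sketch:

    require 'set'
    a = Set[1, 2, 3]
    b = Set[2, 3, 4]
    p a | b                   # UNION:     {1, 2, 3, 4}
    p a & b                   # INTERSECT: {2, 3}
    p a - b                   # EXCEPT:    {1}
    p a.select { |n| n > 1 }  # a WHERE clause is just set comprehension
    # and a JOIN is a filtered Cartesian product of two relations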

    Your story is interesting in that it perfectly exemplifies how far up its own bottom our society has gotten, that what is truly important to our species is dismissed as mere frivolity, to be brushed aside for Matters of Great Consequence. You and I are on opposite sides of a great chasm in this regard, but I should like to offer a bridge to you, constructed by those whose communication skills are far more eloquent than my own. Whether you take it is up to you:

    http://charlespetzold.com/etc/DoesVisualStudioRotTheMind.html

    http://www.cs.utexas.edu/users/EWD/transcriptions/EWD10xx/EWD1036.html

    http://www.angelfire.com/hi/littleprince/frames.html

  123. I think we've stumbled into the realm of BSing each other, which I agree is great fun; but let's keep in mind that nobody really knows what they're talking about here. At least not with mathematical rigor.

    Jim:

    Morals? I don’t think anybody doesn’t care about ’em, we just differ about what is and isn’t immoral. I’m a libertarian for many reasons, but the biggest one is that it seems to be the most intellectually honest practice considering how much we don’t know. In the absence of God telling us what to do, let’s just leave each other as much alone as possible. And if we don’t know how much that is, let’s default in favor of liberty and see if things work — empirically speaking, it seems that the consequences of giving government too much power are orders of magnitude worse than giving it too little. So, I really don’t have any qualms with proprietary software, and I have a hard time believing that that makes me immoral.

    Shenpen:

    Well, I just don't agree. The Access Virus actually is a computer, and it's just running a very sophisticated synthesis program. Same for all the hardware from Nord these days. Same for Korg and E-mu. I never have, and never will, buy into the "classic vintage tone" thing. Synth guys believe in it, guitarists believe in it — I'm both, and to my ear, it's just a bandpass filter and some noise. But this is a matter of taste, so I don't think it'll ever be settled.

    Jeff:

    There's just more than one way to skin a cat — more than one flavor of enlightenment. There's a continuum from being so meta your farts don't smell to scratching your nuts and grunting "git er done." Every step along the way has its place. Considering that most people tend to think ascending from poverty to post-scarcity is more important than evolving into light (or something), the "git er done" mentality tends to dominate. "Achievement is 1% inspiration and 99% perspiration," so the saying goes. There's still plenty of meta in the world, and yeah, people would be better off if they added it to their intellectual diet — but they'd also be better off if they ate 6 small clean meals a day and went to the gym 5 days a week. Oh well.

    And it just so happens that thinking inside the box is useful. It makes it easier to do certain tasks. The pop music song structure box, when adhered to, makes it easier to make a pop tune. And a lot of success comes from subtly pushing the boundaries of this box. And thinking outside of the box is great, since that's how we got from Sonnets to Heavy Metal. But considering the box worthless? Well, that's the land of Der Mondfleck and 4'33", and I just don't see that getting me pumped up for a workout the way Slipknot does.

  124. Jeff,

    I don't think such a chasm exists. Do we agree that programming is not a science but a meta-science: that it's about expressing ANY other science/knowledge/information in a structured, automated way? If we do, then we also have to agree that programming is not a first-class profession but an auxiliary one: first you grok some science or body of knowledge, and then you express it in code. For example, kernel hackers are hardware (or, more generally, physics / systems engineering) experts who express their hardware expertise in code. You sound to me like a mathematician who expresses math in code. (Although the model I present here is imperfect, as math is also a meta-science…) I express business models in code.

    I'm not talking about learning something only when career advancement makes it necessary: I look down on those guys who write 100 lines of Java whenever they need a simple grep on files, just as you do. However, I think practicality is important, being close to real-life problems is important, and being a normal white-collar worker in mainstream society instead of an outsider is important. That does not mean one cannot jump into research when one sees fit, but research needs a practical reason, I think.
    I also accept that theoretical researchers are an important minority, but basing a whole OS culture on them looks like overkill; I don't think most Linux users would fit the category you described. My guess is that more people use Kate than EMACS, for example.

    My anti-theoreticalism comes from my school experiences. I learned exactly what I do now (accounting systems design, at a business school), and when I left school and started working, all the theoretical crap wasn't worth anything. I remember memorizing the formal definitions of BCNF, 4NF etc. at school, without understanding a word of them, and when I started working I just stood there puzzled, not knowing what to do. I had to reinvent the theory for myself, in a practical street language instead of a formal one, like: "Hey, brada, be sure to design everything 1:N that can possibly be 1:N, such as invoices : invoice lines, and be ready to sniff out when something needs to be N:N, such as owners : companies." Only later on, when I had some free time to think things over, did I realize: "Damn, it's just the BCNF and 4NF they taught in school – but why couldn't they express it in an understandable, practical way?"
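
    In code, the street version of those two rules fits in a few lines; a Ruby sketch, with all the names and data invented for illustration:

    # 1:N - every invoice line carries its invoice's key
    invoices      = [[1, "2006-03-01"], [2, "2006-03-02"]]   # [id, date]
    invoice_lines = [[1, 1, "widget", 2], [2, 1, "gadget", 1],
                     [3, 2, "widget", 5]]                    # [id, invoice_id, item, qty]
    lines_of_1 = invoice_lines.select { |_, invoice_id, _, _| invoice_id == 1 }

    # N:N - owners and companies meet in a separate link table
    owners    = [[1, "Alice"], [2, "Bob"]]                   # [id, name]
    companies = [[10, "Acme"], [11, "Globex"]]               # [id, name]
    stakes    = [[1, 10], [1, 11], [2, 10]]                  # [owner_id, company_id]
    alice_cos = stakes.select { |owner_id, _| owner_id == 1 }.map { |_, company_id| company_id }  # => [10, 11]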

    Of course, later on I moved more and more into theory. My hobby languages of choice (because at work I can only use Navision C/AL) went Perl -> Python -> Ruby -> sometimes a bit of LISP, which, I'm a little afraid, shows a development from the practical back to the theoretical.

    But I think theoretical understanding needs a strong practical background. Once one groks how it works, one can learn why it works. Doing it the other way around causes problems.

    My father is a construction entrepreneur, and he spends his time fighting engineers who design completely impractical, not cost-effective, but so very “artistic” buildings.

    If construction engineers worked a year as masons before university, these problems would not arise.

    This is my point in programming too.

  125. An afterthought: maybe it sounds extremely heretical on this forum, but I don't think the old hackers of the seventies and eighties should be treated as demigods. Sure, they were clever guys, but there have been many other clever guys around; the difference is that the university / research center hackers were in a very, very comfortable position.

    I mean: find a bunch of clever guys, throw funds at them, let them play around for years with any problem that catches their fancy without having to ship any marketable product, allow them to use whatever language they want and even to invent new ones, allow them to share their code with other universities and research centers, don't put pointy-haired bosses on their backs, throw even more funds at them by claiming the research is of military importance, and they will come up with something like the Internet. There is nothing exceptional in that; only the conditions are exceptional – commercial programmers usually had NONE of the above. If you want to erect a statue to the real heroes of the Internet revolution, it should be of the CFOs of Stanford and MIT who made it possible for the clever boys to play around without demanding any tangible, marketable results in the short run.

    Clever commercial programmers like Joel Spolsky could have duplicated the results of old-time hackers easily, but they’ve been working in completely different conditions.

    I have met people with typical hacker traits – math geniuses, playing instruments, pulling amazing practical pranks [1] – in the COBOL/RPG-400 banking/finance shops considered so horrible by pampered hackers. They are as clever as the old-timers, the difference being that they are less spoiled by comfortable university lab conditions protecting them from the harsh realities of the outer world called "market" and "business", and less given to bitching about bondage-and-discipline languages and all the other typical spoiled-kid attitudes; they have learned to find fun and intellectual challenge playing in walled gardens, working around organization, language and hardware restrictions. The lack of freedom can be inspiring, albeit in different ways.

    [1] One story from the "horrible boring life" of RPG-400 banking programmers: they were using Win NT boxes with terminal emulation programs connected to the AS/400. They were quite proud of how stable the AS/400 was compared to "toy computers" like the Win NT boxes. When the team leader came back from holiday, the guys prepared a little surprise: as soon as he logged in to the AS/400, it displayed a typically stupid Windows error message on the green-screen terminal, and while he sat there puzzled, thinking "WTF???!!! Did the AS/400 turn into Windows??!!!", the AS/400 made a callback to the terminal emulation program, which started an OS call that played horrible, evil laughter recorded in a .WAV sound file… :-)

  126. Peter,

    First, working together is better than standing apart and deciding that we can all just do what we want while we leave each other alone. The second certainly ‘works’, but the first can generate far better impact.

    I believe it is immoral to advocate "open source" as a superior solution for business while secretly knowing that you aren't telling the whole truth.

    I don't have any qualms with proprietary software either. I do have qualms with people who purposely tell only part of the truth in order to advance their position.

  127. Pete, what I’m driving at is that understanding the meta levels, the deep theory, provides enormous leverage in the software domain that lets you “git ‘er done” a hell of a lot sooner. One of the gob-smackingly profound truths about our field and a big reason why computing science is not like bricklaying. Software does not yet have its equivalent of 4’33”, because at the end of the day, unlike an art exhibit or musical performance, it’s gotta work. So there’s a need for a structure, but room for different points of view, different surface modalities that emerge from the deep structure. Kind of like Mozart vs. Topsy vs. the Rolling Stones vs. The KLF.

    To continue with your music analogy a bit more, what Shenpen is doing is declaring Britney Spears to be Really Important Music because a lot of people buy it, and furthermore, Mozart was basically an overrated, spoiled kid and that we should really give our thanks to the kings and noble patrons who paid for his music, rather than to Mozart for having composed it. It’s that condescension that I have a problem with.

  128. working together is better than standing apart

    Yeah, and if we leave each other alone, there’s room for voluntary cooperation. There’s a subtle but important difference between believing we should work together, and believing it’s good to work together. I’m in the latter camp.

  129. > I’m in the latter camp.

    Me too, but I'm also in the "we should not deceive others" camp, not the "it's good to not deceive others" one.

    According to ESR, “open source” is deception.

  130. >According to ESR, “open source” is deception.

    Only in the relatively weak sense that all marketing is deception, or dressing up fancy to go out is deception. I never lie to my audience, but I will cheerfully admit to being careful about which truths I speak. And the fact that I do this is not a secret either.

  131. Jeff,

    this Britney vs. Mozart thing is such a gross oversimplification that it almost hurts to think about it. It's like comparing jazz radio stations to WiFi access points just because they operate through a similar medium…

    And, computing science? 95% of today's programming doesn't have much to do with computing science, but a lot more to do with other kinds of science/knowledge/information – knowledge about what the code expresses and models, rather than how it is designed or written.

    Bricklaying? I usually call business programming "welding", but bricklaying is also a good term. And if you lay bricks really well, you get something like Gaudí's work. Sigh. Quality, creativity and innovation are not a single-sided, one-dimensional, black-and-white affair.

    Did programming become more democratic in last few years or decades? Surely it did.

    Does it mean quality went down? Only if you don’t trust stochastic decision mechanisms.

    And only if you oversimplify the definitions of, or viewpoints on, quality. Was Didgeridoo by Aphex Twin great conceptual art or a popular dance club hit? Black or white? (Clue: both.) It would be a loss if this interesting discussion drowned in a kind of "strong, stupid warriors vs. frail, clever magicians" dichotomy from fantasy novels written for kids.

  132. Shenpen:

    > And if you lay bricks really well, you got something like Gaudi’s work.

    And if you draw characters really well, you get something like Shakespeare’s work. Or something like the Linux kernel source. Or something like line noise.

    Back to the original topic (no, not really):

    > the university / research center hackers were in a very, very comfortable position

    Yes. And you seem to agree that this actually helps to invite great things. Today, everybody with a computer and an Internet connection is in this comfortable position, regardless of background or education, so you shouldn’t waste your time envying spoilt kids.

    The open source playground is big enough for researchers, hackers, coal miners and companies. That this whole culture appears to be based on the values of those useless hackers and researchers has a simple reason: They built this thing. And for a long time they have been occupied with the immediately practical things (of putting together GNU, Linux etc.) you seem to value so highly. Providing an equivalent of Access didn’t occur to them as a pressing need, so some of these `practical’ things seem to take infinite time.

    I predict that the overall pace of open source will dramatically go up as soon as it’s possible to invite the first coal miners to the party (of course it would be foolish not to do that). But don’t underestimate the importance of fundamental, undirected research. CS doesn’t really deserve to be called a science not because its object lacks substance, but because we are barely beginning to understand what it’s all about. We can get interesting things done with as little as we know, but this may not be enough in the future.

    My impression is that there hasn’t been much progress in the proprietary software world for some years now, at least if we don’t redefine `innovation’ as `another version of MS Office’. In many fields, OSS still has to catch up, but it is at least getting better. Maybe the bazaar is capable of moving beyond whatever barrier proprietary software has hit, but this certainly isn’t the final solution. There may not even be a solution. Maybe computer programs will forever be crappy, brittle, fighting with complexity and a pain to use. But if there is a way to make them fundamentally better, you will certainly not find it while working inside a box defined by Really Important Business.

    Question to Jeff Read about the chasm:

    Is this a new phenomenon? Do you think something has changed in the last few years/decades/centuries? (Complaints about education going downhill don’t count, they are probably as old as education itself.) I have my doubts, but they aren’t properly backed up by age or historical knowledge.

  133. Yeah, I reiterate: I don't think anybody really knows what they're talking about. We know things inductively or deductively. The latter requires mathematical rigor, which I think is obviously missing. The former requires detailed scientific analysis, which isn't exactly what's going on here. Interesting stuff, though.

    I personally think that simply getting exponentially more powerful is innovative. There are things I can do with my computer now that I simply would not be able to do 10 years ago. Maybe 10 years without a mind-blowing invention counts as a brick wall, but if that’s the case, I think that only goes to show just how much computing has accelerated progress.

    And I really just don’t grasp the meta stuff, I don’t think. If, for instance, lisp is the secret to attain the ninth level of power — uh, I’ll pass, plzkthxbi. But I get the feeling that’s not quite the point, so I dunno. /me shrugs

  134. bru:

    The chasm is not new but what is new is that we have ample opportunity to bridge it. Most Americans have a machine many times as capable as the ones spoken of in hackerdom’s sagas. SICP is a free download; and furthermore, with the aid of Google and Wikipedia, one can imbibe as much mathematics as one cares to study, including abstruse stuff like category theory. We’re running out of excuses to be willfully ignorant. It is not a responsibility of open source developers to cater to those who are so.

    Jim Thompson:

    Now that’s what I call REAL ultimate power!!!!!!!!

  135. Jeff… you are one crazy motherfucker. Damn. ParodyCheck has to have some of the most disproportionately large titties that man has ever drawn.

    As to the topic at hand, I still don’t really think I get what is being argued over. If I want to make ultra-funky beats, Linux is poorly suited to that task. Jeff, are you arguing that I should… write my beats in scheme? Or that if I want to check my email I should write a hack in haskell to do it for me? I’m so confused.

    And on a different note, ESR should just start a discussion forum, because that’s essentially what we have over here. He says some shit in a blog, then various parties chatter about it in the discussion section, eventually getting way off topic. That’s just begging for phpbb right there.

  136. Even if we all believe open source produces less buggy software (which in my experience is not the case), or superior software (which also in my experience is not the case: there is no way my Mom could use gnuplot, for instance), it massively fails at getting money to the people doing the work.

    It seems to me open source is this massive scam by the proponents on the masses of unwitting graduate students that support them.

    Meanwhile, being a capitalist software engineer, I’ll take advantage of it every step I can.

  137. Pete, Shenpen and I weren't arguing about funky beats. He seems to think that the only way ordinary people can learn to program is with productivity tools. He also seems to think that the open source movement owes people a career path. Since I've seen people who've barely touched a computer pick up SICP and Scheme and get that "Holy shit, this is cool" reaction, I am inclined to think that people who can't learn to program except by Access are simply preconditioned by their expectations.

    Computing is driven by some very simple but powerful and fundamentally different ideas. Open source development would become more powerful if it were a vector for those ideas to the world, instead of spending all its time trying to hide them the way Microsoft does. That's probably what Microsoft sells successfully more than anything: not software, not even really a way of making things easier, but comfort, a way of making sure you won't be subjected to anything discomfitingly perspective-altering. (It helps if you read that Dijkstra essay for a clearer perspective on this.) I suppose my opinion branches significantly from even ESR's on this issue, but that's how I currently feel.

  138. Jeff, about your “REAL ultimate power” link: Not sure if that’s something you came up with or not, but Common Lisp, Python (read-only; assigning to the variable creates a new one), and ECMAscript should both be on the “lexical scope” list as well. (Common Lisp has a couple ways to override scoping for a particular variable, though, similar to Scheme’s “set!-able variables”. IIRC CL calls them “parameters” and “globals”; by convention, the caller can change a “parameter” before calling a function that uses it, but the caller should not change a “global”.)

    Also, if the language doesn’t support defining a function inside another function (AFAIK C, Java, and BASIC do not, although C++ and Java have classes, which can sort of compensate), then lexical scope really has no meaning. The only options at that point are local variables, global variables, and function arguments. ;-)

  139. Oops, not “both be on the “lexical scope” list as well”, but rather “all be on the “lexical scope” list as well”. (It’s probably still obvious what I meant, but still.)

    That’ll teach me to add Python after having written that up once already.

  140. Bryan,

    Thank you for the info, but my intent was to provide illustrative examples — not exhaustive lists. Your point is well taken, however.

    C allows you to declare variables local to compound statements which follow lexical scoping rules (i.e., their extent is limited to the enclosing block, they are not visible in function calls outside the block, and they are completely separate from other variables with the same name, with innermost variables shadowing outermost). AFAIK Java follows C’s lead in this regard. This was considered sufficient for me to grant them a “kind of”, but only a “kind of” for the reasons you said — functions within functions (closures) are not allowed. (And Java’s scoping rules for anonymous inner classes are kind of weird.)

  141. That’s true; I did forget about nested blocks and creating variables inside them. OTOH, since outer blocks always “call” into inner blocks (short of goto, and that starts to create problems with the stack), there’s no difference in C between a lexical scope and a dynamic scope. The outer scope is always executing whenever the inner one is, so in either case, you get the same result. A language basically *has* to support closures (or something equivalent) to be able to observe a difference between the two scoping rules; when the lexical environment always equals the dynamic environment, the effects of both scoping rules are identical.

    Though Java anonymous inner classes do act like closures, from what I remember. (It's been several years since I did any Java.) Code in that type of nested class can access members of the outer class, and it's possible to pass the instance as an event listener (or whatever). Then when its methods are called, you get the partial equivalent of a closure. So yes, Java has lexical scope, because code in the outer class is not necessarily running when the inner class's code gets run, but the inner code can still access members of the outer class. (Not variables from the procedure that was running, though, only the class instance. So it's not a "full" closure, but the code would act differently if the language were dynamically scoped.)
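
    For contrast, a "full" closure takes only a couple of lines of Ruby (a sketch): the lambda keeps the enclosing call's local variable alive even after the method has returned.

    def make_counter
      count = 0                # a local of this particular call
      lambda { count += 1 }    # the lambda closes over count
    end

    c = make_counter
    c.call                     # => 1
    c.call                     # => 2; the captured count outlived make_counter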

  142. Bryan, consider the following Emacs Lisp program:

    (setq x 5)
    (defun foo (x)
      (print x)
      (bar)
      (print x))
    (defun bar ()
      (setq x (+ x 1)))
    (foo 5)
    (print x)

    Consider also its C equivalent:

    #include <stdio.h>
    int x = 5;
    void bar();
    void foo(int x) {
      printf("%d\n", x);
      bar();
      printf("%d\n", x);
    }
    void bar() {
      x = x + 1;
    }
    int main(void) {
      foo(5);
      printf("%d\n", x);
      return 0;
    }

    In neither case is a function defined within a function, yet the results are different for these two programs. The Emacs Lisp program should produce as results 5, 6, 5; whereas the C program should produce 5, 5, 6. This is because the binding of x as a parameter to foo is visible only within foo’s body in the case of lexically scoped C. When foo invokes bar, bar does not see that binding but rather the global binding to x. In dynamically scoped Emacs Lisp, the binding of x as a parameter to foo is visible from the time foo is invoked to the time it returns. Thus, bar sees (and modifies!) foo’s parameter, not the global binding to x! So even without closures, the scoping rules make a difference, and the classification of C as a lexically scoped language should be seen as significant.
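
    (For comparison, the same experiment sketched in Ruby, which is lexically scoped like C; it should likewise print 5, 5, 6:)

    $x = 5               # a Ruby global

    def foo(x)
      puts x             # foo's parameter
      bar
      puts x             # unchanged: bar never saw this binding
    end

    def bar
      $x += 1            # only the global is reachable from here
    end

    foo(5)
    puts $x              # prints 6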

    The specific weirdness I was referring to with Java’s inner-class scoping rules was that Java inner classes can access the members of the outer class, but not local variables in the method where the inner class was instantiated.

  143. Man, I take a couple days to get settled in my new job, and everything goes all haywire on me.

    Peter Bessman:

    > Can you clarify? These statements seem contradictory to
    > me, so I don’t think I’m interpreting properly.

    It is acceptable under the OSI definition for a license to specify that there is one and only one source for the original source code, prohibiting free redistribution of that code, and that any and all modifications by the community are provided exclusively as patch files. Explicit provisions were made for this in the OSI license requirements.

    esr:

    > Open source is so good at motivating programmers that
    > they build entire operating systems (not just kernels
    > but userlands too) on their own time.

    I think the motivations and results are a little different. As I recall, it took roughly ten years before Linux and the GNU utilities produced a working and reliable UNIX-like environment on the desktop, whereas AT&T/Bell Labs initially produced UNIX in two. While I will wholeheartedly agree that the UNIX of 1969 was certainly not the same as the UNIX of 1991, the Linux crowd certainly had a roadmap to follow.

    Archangel:

    > Those changes may make a difference to a lot of people,
    > _if_ they ever actually get released

    Volume Shadow Copy was in Windows Server 2003 and kicks major butt. Trouble is, it’s only good for things you store on the server, not things you store on your local hard drive. Vista moves it onto your local hard drive, where EVERYONE can use it – not just people with Windows-homogeneous networks that store all their data on network shares.

    Shenpen:

    > Isn’t this all actually politics disguised into technical
    > terms?

    I believe the open source crowd has mixed up their politics with their technical arguments, and that the proprietary source crowd has responded in kind which got *their* politics mixed up in it. I think this is the single biggest obstacle between the two camps.

    Jeff Read:

    > Windows is such a compelling platform because Microsoft is
    > a monopoly.

    I disagree entirely with this proposition.

    Upon considering why, I don’t think disagreement is entirely the right word. I believe Windows is a compelling platform because Microsoft tends to take things in the right direction, because they do a lot of research, because they can afford to take risks, because they have little competition, because… they are a monopoly.

    So I sort of agree, but not really. I lay a lot of blame at the feet of Microsoft’s “competition” simply refusing to do a good job. RealAudio led the streaming music market until they tried to use a licensing deal with Microsoft as a loss leader to sell their own incompatible streaming media player. Microsoft pointed out the conflict of interest, RealAudio said “neener neener what are you going to do about it”, and Microsoft blew them out of the water. If RealAudio had played nice, they could have made a boatload of money with Microsoft, but they got greedy and ended up with squat.

    Phil:

    > I heard they tried to kill all other servers besides
    > IIS on the NT platform by changing the NT Workstation
    > EULA to not allow for use as a server.

    They just pointed out the language. NT Workstation was always just the same product provided at a discount on the condition that it be used primarily as a workstation, not a server. Lacking the technical background necessary to understand this, Microsoft’s PR folks released some truly ludicrous and lamentable statements. Lacking the business background necessary to understand this, the rest of the world perceived this as an unfair practice.

    What Microsoft was trying to prevent was the use of NT Workstation as a server platform, which spelled about a $300 loss per license. From their perspective, thousands of people were robbing them of millions of dollars by violating the license agreement, whereas the average consumer simply didn’t see why you shouldn’t do what you were technically capable of doing.

    Meanwhile, Netscape and O’Reilly were loudly proclaiming that their web servers could be run quite happily on NT Workstation. Their overt proposal was that you should buy NT Workstation instead of NT Server, saving several hundred dollars, and then use that several hundred dollars to buy their product. This was their response to Microsoft’s “free” IIS: our product is effectively free too, if you violate the Microsoft EULA. Besides, they added, the Microsoft EULA is unjust and you shouldn’t submit to it anyway! Down with Microsoft! Down with capitalism! Down with America! Oh, did I say that out loud? I meant, down with *emerita*! You know, people who get paid for doing nothing because they did something great once. Whew, that was a close one.

    So who’s the company with the predatory business practices, again?

  144. >It is acceptable under the OSI definition for a license to specify that there is one and only one source for the original source code, prohibiting free redistribution of that code,

    Not so. No OSI-conformant license can prohibit free redistribution. The clause you're thinking of permits a license requiring redistributed versions to be shipped as pristine source plus patches.

    >I believe the open source crowd has mixed up their politics with their technical arguments

    Whatever “open source crowd” you’re speaking about, it’s not the one I know about. I popularized the term to separate us from politics in order to make our marketing more effective, and I believe I largely succeeded.

  145. > What Microsoft was trying to prevent was the use of NT Workstation as a server platform, which spelled about a $300 loss per license.

    If that’s true, and if it’s also true what you’re saying about it being “just the same product”, then there was no loss here.

    It did not cost Microsoft anything more to sell the “server” edition if it was the same codebase. So if there was a loss when someone used the “workstation” edition as a server, there would have been the same loss whenever someone bought the “workstation” edition and used it as a workstation.

    You’re confusing a “loss” with a “lack of extra profit”. (Yes, it may have been illegal to use a workstation as a server, if Microsoft’s licensing terms would have held up in court. But that doesn’t mean they were taking a loss when someone didn’t follow those licensing terms, either. Note that many politicians do this too — something that gets advertised as a “cut” is often actually an “increase by a smaller percentage than last year”. The amount still goes up, just not by as much as they thought it would.)

    Jeff: You’re right, I must have mis-remembered the difference between scoping rules, or something. So never mind then. ;-)

  146. Jeff,

    Actually, if we look at why MSFT became so successful, I agree that the real key is how they handle learning. I don't think MSFT simply empowers ignorance – actually, that was the Mac's way (before OS X, of course). I think MSFT empowers gradual learning: SQL Server is a good example – anyone can install and configure it to a basic level and pick up DBA knowledge gradually, during production operation. (Most OS databases are similar in this respect – it was Oracle I had in mind as the counter-example.) Or, from another viewpoint, they support a kind of "learning by imitation" until one gathers enough confidence to go it alone.

    This is not in defense of MSFT, actually; this idea occurred to me a few days ago when I was thinking about why schools force students to memorize definitions instead of letting them learn by imitation.

    For example, if you became self-employed and therefore had to write proposals, how would you learn to write them? I myself would borrow a winning proposal from a salesman friend and use it as a template: I'd just rewrite the subject, the price, and as much of the boilerplate text as I was able to identify as unfit for the case. Then I would use that as the template for the next proposal. After five or ten proposals, when I felt confident that I was starting to get what's going on, but before things got too ossified in my mind, I'd buy a serious sales psychology book, grok the theory, and identify what could be improved. After ten more proposals I would be on my way to designing my own sales methodology. Isn't that the most effective way of learning things?

  147. I blame it on “C”.

    A very ugly language that almost requires long, hard-to-test subroutines due to the stack thrash incurred when calling a subroutine.

    FORTH (for those of you who have ever heard of it) is much cleaner and encourages better factoring. Top it off with hardware that can execute code directly (as opposed to having to be translated) and you have a screamer.

    What you do not get is a good marketing model, since the code is easier to develop with fewer bugs, and FORTH processors are ridiculously easy to design (and they are small, in number-of-gates terms).

    I will admit that "C" has more protections for the average or poor coder. Which says it all. The "C" model is for those who don't want the very best.

  149. Windows is a mess due to creeping featurism. Microsoft has been building the software equivalent of "ricers". (As I understand it, a ricer is an economy car dressed up to look like a racer, without any actual upgrade to the drive train.)
    Starting with a Yugo (DOS), they changed the wheels to chrome spinners with low-profile tires, added a sporty steering wheel and shift knob, and voila! Windows. The next version added a loud stereo (multimedia); the next added a new paint job, clearcoat, and some vinyl tape stripes and stickers (the Windows 95-style GUI), and later skirts, an air dam and a fake oversized exhaust tip (Windows 98 with USB support). And lastly, an impressively large but useless spoiler wing bolted to the back (Windows ME).
    The NT lineage is similar, but they started with an army surplus Hummer instead of a Yugo.
    The real problem is that the core compilation libraries are not designed with the robustness needed to support the system, and much of the functionality is left up to the application programs.

    I remember a DOS evangelist back around 1982 saying that CP/M was "nothing but a program loader" as opposed to DOS, which he considered "a real operating system". Particularly amusing, since the early versions of DOS were laundered bootlegs of the CP/M-86 beta.

    Windows has evolved into a monstrous patchwork of Swiss cheese: cheesy and full of holes. Someone mentioned NT as a port of VMS… I believe it was more a port of RSTS/E, which, unlike VMS, had some serious problems keeping workspaces separated. VMS was eventually ported to Intel and Alpha systems by the time DEC was acquired by Compaq. I guess that makes VMS a product of HP now. VMS was specifically designed for the VAX minis and would most likely have ported easily to Motorola 68xxx-series processors, due to the similarity of the CPU designs. NT was actually an attempt to start a new code base that would support non-Intel processors as well as the Intel ones, while adding some POSIX-compliant features. The result was a really stable platform, which lost a lot of its stability when the code base was merged with the 9x series to produce NT 4.0.

  150. I guess the only question left is to ask which band of communists nationalized the micro-computer industry back in 1980 and made Bill G. the czar of software. The suits in America just don't have faith in, or understanding of, what the many levels and facets of democratic participation can provide.
