Adverse selection and old technology

Yesterday I shipped cvs-fast-export 1.15, with a significant performance improvement produced by replacing a naive O(n^3) sort with a properly tuned O(n log n) version.
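
One standard way to get O(n log n) behavior on a linked list is merge sort. For concreteness, here is a minimal sketch of the technique; the node type and integer key are invented for illustration and are not the actual cvs-fast-export structures:

    #include <stddef.h>

    /* Invented node type; the real code sorts richer structures. */
    struct node {
        struct node *next;
        int key;
    };

    /* Merge two already-sorted lists in O(m + n). */
    static struct node *merge(struct node *a, struct node *b)
    {
        struct node head, *tail = &head;
        while (a != NULL && b != NULL) {
            if (a->key <= b->key) {
                tail->next = a;
                a = a->next;
            } else {
                tail->next = b;
                b = b->next;
            }
            tail = tail->next;
        }
        tail->next = (a != NULL) ? a : b;
        return head.next;
    }

    /* Split at the midpoint (slow/fast pointers), recurse, merge. */
    static struct node *list_sort(struct node *list)
    {
        if (list == NULL || list->next == NULL)
            return list;
        struct node *slow = list, *fast = list->next;
        while (fast != NULL && fast->next != NULL) {
            slow = slow->next;
            fast = fast->next->next;
        }
        struct node *back = slow->next;
        slow->next = NULL;
        return merge(list_sort(list), list_sort(back));
    }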

In the ensuing discussion on G+, one of my followers there asked whether I thought this was likely to produce a real-world performance improvement, since on small inputs the constant setup time of a cleverly tuned algorithm often dominates the nominal savings.

This is one of those cases where an intelligent question elicits knowledge you didn’t know you had. I discovered that I do believe strongly that cvs-fast-export’s workload is dominated by large repositories. The reason is a kind of adverse selection phenomenon that I think is very general to old technologies with high exit costs.

The rest of this blog post will use CVS as an example of the phenomenon, and may thus be of interest even to people who don’t specifically care about old version control systems.

Cast your mind back to the point at which CVS was definitively superseded by better VCS designs. It doesn’t matter for this discussion exactly when that point was, but you can place it somewhere between 2000 and 2004, depending on when you think Subversion went from a beta to a production tool.

At that point there were lots of CVS repositories around, greatly varying in size and complexity. Some were small and simple, some large and ugly. By “ugly” I mean full of Things That Should Not Be – tags not corresponding to coherent changesets, partially merged import branches, deleted files for which the masters recording older versions had been “cleaned up”, and various other artifacts that would later cause severe headaches for anyone trying to convert the repositories to a modern VCS.

In general, size and ugliness correlated well with project age. There are exceptions, however. When I converted the groff repository from CVS to git I was braced for an ordeal; groff is quite an old project. But the maintainer and his devs had been, it turned out, very careful and disciplined, and committed none of the sloppinesses that commonly lead to nasty artifacts.

So, at the point that people started to look seriously at moving off CVS, there was a large range of CVS repo sizes out there, with difficulty and fidelity of up-conversion roughly correlated to size and age.

The result was that small projects (and well-disciplined larger projects resembling groff) converted out early. The surviving population of CVS repositories became, on average, larger and gnarlier. After ten years of adverse selection, the CVS repositories we now have left in the wild tend to be the very largest and grottiest kind, usually associated with projects of venerable age.

GNUPLOT and various BSD Unixes stand out as examples. We have now, I think, reached the point where the remaining CVS conversions are in general huge, nasty projects that will require heroic effort with even carefully tuned and optimized tools. This is not a regime in which the constant startup cost of an optimized sort is going to dominate.

At the limit, there may be some repositories that never get converted because the concentrated pain associated with doing so overwhelms any time-discounted estimate of the costs of using obsolescent tools – or because even the best tools may not be good enough to handle their sheer bulk. Emacs was almost there. There are hints that some of the BSD Unix repositories may be there already – I know of failed attempts, and tried to assist one such effort.

I think you can see this kind of adverse selection effect in survivals of a lot of obsolete technology. Naval architecture is one non-computing field where it’s particularly obvious. Surviving obsolescent ships tend to be large and ugly rather than small and ugly, because the capital requirement to replace the big ones is harder to swallow.

Has anyone coined a name for this phenomenon? Maybe we ought to.

92 comments

  1. Event Horizon Cruftal Accretion Disks. “Eh, Cads” for short.

    The cruft has accreted to the point where it is impossible to get the project past the event horizon of its CVS repository.

  2. I rather like the name “adverse selection”. I think it may even operate in the animal kingdom, though I haven’t quite found a good example.

  3. Lennart Poettering is aware of this phenomenon, and is rapidly developing systemd in such a way as to make it have an extremely high exit cost. Eric, will you be doing a write-up on systemd anytime soon? Lennart has explicitly and publicly denounced the principles that you outlined in The Art of Unix Programming. Thank you for writing it by the way.

  4. I don’t like the name “adverse selection” because it is already taken — as economics jargon related to the insurance industry.

    As a substitute I propose something like “conversion effort” as a quantity, analogous to the activation energy you need to provide to start a chemical reaction. Most of the examples you cite appear to be packages whose remaining value simply doesn’t justify spending that effort — kind of like deciding not to put a new transmission in your over-the-hill car because it will cost more than the car will be worth if you do it.

  5. Perhaps the old term “vendor lock-in” could give hints toward a good term for the phenomenon.

    Escape velocity is another related concept.

    Sisyphus eternally pushing his rock up the hill comes to mind.

    Network effects are involved.

    Instead of vendor lock-in, how about “legacy lock-in”?

  6. You could call those old projects in CVS “Battleship Code”. Or, related to the discussion from earlier about the obsolescence of carriers, you can call it “Carrier Code”.

    Or in reference to old giant battleships, you could call them “Bismarck Projects”.

  7. It is a class of “Technical Debt”.
    Just as hacking and patching the code itself without refactoring degrades it, taking even good code and doing a hack of a repo update has a similar effect.
    One thing which makes it worse is that the history can’t be refactored. The warts are permanent even if the repo is converted.

    1. >The warts are permanent even if the repo is converted.

      That isn’t necessarily so. An expert with reposurgeon in hand can do a whole hell of a lot of wart removal. The Emacs conversion is a case in point.

      Of course, such expertise is still rare. The relevant set of people is not quite {ESR}, but it’s still alarmingly close to being a singleton. I’m doing what I can to change that.

  8. Juggernaughting
    Juggerboating
    Too Big to Fail
    Too Big to Whale
    Dinosaur Tipping
    Scuttleboating
    Scuttlebloating
    .
    .
    .

    Bloatilla!

  9. >The Emacs conversion is a case in point.

    I enjoyed that post, as well as the preceding one on “Dragging Emacs forward”. Will there be a third part? :-)

    1. >I enjoyed that post, as well as the preceding one on “Dragging Emacs forward”. Will there be a third part? :-)

      Perhaps when the official cutover happens. My work on the conversion is largely done; I’m now waiting for the maintainer to decide when to pull the trigger.

  10. The process seems to resemble fractional distillation more than anything else…maybe refer to the leavings as ‘the dregs’, or ‘that tarry residue’.

    If battleship metaphors are your cup of tea, let me suggest ‘dreadnought’.

  11. White elephant doesn’t fit but is in the neighborhood. “Monkey on your back” sort of fits, but isn’t as specific as I’d like.

  12. At work they are generally referred to as “legacy systems”, ones whose cost of updating or migrating outweighs the obvious benefits. Eventually the maintenance costs get high enough that, rather than migrating or updating these systems, they are worked around with greenfield work. Nobody likes working on the legacy systems, yet they must continue to work. It might be related to the Sunk Cost Fallacy.

  13. Hydra systems – whenever you solve one problem, two more crop up.

    Leviathan or Colossus – too large to transport

    Titanic????

    Under the category of “things that should not be”, I am reminded of trying to shelve books in the juvenile non-fiction section – it was a mess, constantly out of order, because little “helpers” would rearrange the books by color, or by height, or by publisher and series, and they never had the stamina to do the whole shelf, let alone the whole section. The only thing worse than screwing up the whole job, is screwing up half the job. An inconsistent collection of commits, with different conventions, leaves you in a place almost worse than if you had no conventions at all.

  14. “CVS was definitively superseded by better VCS designs”
    For SOME projects that were well managed on CVS, it is still questionable whether moving them to other VCS platforms was actually productive.
    I still today find that what is missing from newer systems is some of the nice facilities we had back in 2000. Modular projects never converted well to the newer systems, where ‘sub-repos’ are still frowned upon by many. Currently I’m having to manually manage several dozen repos, where in 2000 Eclipse+CVS gave me a clean compartmentalized view in which I could merge or monitor each module easily, or simply update the whole suite. The ‘conversion’ was perhaps done before the tools were actually ready, so the mad rush to ‘modernize’ took over from common sense.

  15. Reminds me a lot of some bank back-office systems that are firmly stuck on S/360 because they’re such a gnarly mess.

  16. > I still today find that what is missing from newer systems is some of the nice facilities we had back in 2000. Modular projects never converted well to the newer systems, where ‘sub-repos’ are still frowned upon by many.

    The answer can be either modern centralized version control systems like Subversion, or modular support in distributed version control systems, like submodules in Git.
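
    For instance (URL and paths invented):

        # Pin a subproject at an exact commit inside the superproject:
        git submodule add https://example.com/libfoo.git modules/libfoo
        git commit -m "Add libfoo as a submodule"

        # And in a fresh clone of the superproject:
        git submodule update --init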

    Atomic commits and (perhaps to a lesser extent) whole-tree changeset support are a must for a sane version control system, as is good support for branching and merging. Sane performance is also needed for pleasant work.

  17. “The answer can be either modern centralized version control systems like Subversion, or modular support in distributed version control systems, like submodules in Git.”
    Yes and no …
    Having lived through many changes, most were carried out with undue haste and with little consideration of all the consequences. Today I’m running Hg locally because its tools provide at least some of the missing facilities, and those tools work transparently with git, svn and what CVS repos I still use. BeyondCompare provides the manual merge tool these days, but ‘atomic commits’ are still not something that works well across ‘submodules’, although that may well these days be due to the original conversions to many dozen separate git repos from the module system CVS managed so nicely :(

  18. And comments by Theodore Ts’o:

    https://plus.google.com/+TheodoreTso/posts/4W6rrMMvhWU

    The problem is not just systemd, but a cluster of things that are coupling tighter and tighter to each other: PulseAudio, Gnome, soon even KDE, Wayland, udev, HAL, NetworkManager, IPv6, etc. It is like X11 all over again. Keith Packard and Jim Gettys know how to reimplement an X server to run fast and small, but their system design skills… sigh. Apart from the “mechanism, not policy” decision that X11 made, I really can’t respect their system architecture skills. For Wayland to depend on systemd is just insane. Creeping dependency hell. Just like the X source code itself, back in the day. Not sure if that’s been fixed. Do they still use imake? Point being, the mentality that created imake has been at work with systemd and its ecosystem of dependents.

    Individually, none of the technologies are that bad. But the tight coupling is rough; it makes introspection and hacking tough.

  19. That Linux seems to have lost its way, the same as VCS systems have, just seems to be the norm these days.
    I’ve had to add dongles to every computer on the grid here simply to get them to work at the right screen resolution, something that used to just work only a few years ago :( Edge cases simply get brushed under the carpet these days in the drive to hide anything the programmer does not ACTUALLY understand themselves …

  20. Another good link:

    https://pappp.net/?p=969

    Lester, I hear you. From the FreeDesktop crowd (Kay, Lennart, Daniel Stone, and others) I’ve had a bad vibe for more than a decade. I had such high hopes when X.org split from XFree86, but instead freedesktop.org has been a disease vector bringing the X11 architectural horrors to Linux itself.

    Seriously, using XML and JavaScript together as configuration files? And then making a lot of the code be C++? Might as well have written it in PL/I. Linux was on an upswing when Eric wrote The Art of Unix Programming. Around 2007, the freedesktop rot really started to set in, with PolicyKit, NetworkManager, PulseAudio, HAL, etc, all being tightly coupled. The things those pieces are meant to do are useful. The way they have gone about doing them is horrific. They are not discoverable.

    The Unix Philosophy was all about discoverability. Ken Thompson said the three key ideas were:

    1) hierarchical filesystem (became namespaces in plan9)
    2) everything is a file (plan9 got it right, no ioctl)
    3) easy for programs to talk to each other.

    Points 1 and 2 make the system very discoverable and manipulable for humans. Point 3 makes it easy to take that knowledge you have discovered, embody it in a program, and automate things. The freedesktop crowd don’t seem to get this. X11 had a lot of code bloat; all the stuff coming out of the freedesktop.org world retains that problem.

  21. I can contribute absolutely no competency to this discussion, but how about the descriptive phrase, “Legacy System Detritus”?

    It at least offers the potential for a snappy acronym.

  22. “Lennart has explicitly and publicly denounced the principles that you outlined in The Art of Unix Programming.”

    I’d love to read how he justifies this.

    I have no real opinion on systemd, aside from thinking that that area is one where standardization would be nice (one of the biggest areas of difference between Linux distributions has always been the init system), and am somewhat bemused by the jihad that has sprung up over it.

  23. @esr: How did you manage with an O(n^3) sort? Bubble sort is only O(n^2).

    I’d point out that a lot of what you are seeing is a side-effect of individual contributors who are trying to get stuff done rather than to maintain purity of a system. In effect, it involves a change rate faster than a single person can keep track of everything which is going on. Individual people want to make a change – perhaps to add a new feature. The way they do this is likely to be the easiest way for them to do so because it is their “itch to scratch”. This frequently results in messy code. It’s also a work-minimization approach for the individual developers.
    The sum total is that the work required to do a number of the features individually is greater than the work required to do the features with a structural refactoring to keep things “clean”. The down-side is that nobody individually has the incentive to do so. The work for the package maintainer is almost certainly greater than the work difference between the features-individually and features-as-a-group workload, too.

    1. >@esr: How did you manage with an O(n^3) sort? Bubble sort is only O(n^2).

      I didn’t write it. It was an O(n^2) bubblesort with a quirk that caused frequent restarts. Inherited from Keith Packard’s original.

      To be fair to Keith, it was only used on small datasets; going with a naive algorithm was defensible. The rest of the code does not shy from complex optimizations.
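
      Schematically, the quirk looked something like this (a reconstruction for illustration, not the code as it actually stood):

          #include <stddef.h>

          /* Plain bubble sort is O(n^2); restarting the scan from the
           * beginning after every swap pushes the worst case to O(n^3). */
          static void restarting_bubble_sort(int a[], size_t n)
          {
              size_t i = 0;
              while (i + 1 < n) {
                  if (a[i] > a[i + 1]) {
                      int tmp = a[i];
                      a[i] = a[i + 1];
                      a[i + 1] = tmp;
                      i = 0;        /* the restart quirk */
                  } else {
                      i++;
                  }
              }
          }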

  24. Yes, Eric, you should address systemd. There is much fail there. I blame X. X has *never* been a Unixy program. Is there even a single X program which cooperates with pipes??

  25. > Is there even a single X program which cooperates with pipes??

    Most of KDE and many other programs play nicely with DBUS, and it’s reasonably easy to interact with the KDE programs from the shell. Generally, most graphical programs don’t fit the pipes-and-filters model: They’re long-lived and interactive. The best solution there isn’t usually pipe compliance but rather a decoupled message-passing API like DBUS.

  26. @esr:

    > I think you can see this kind of adverse selection effect in survivals of a lot of obsolete technology. Naval architecture is one non-computing field where it’s particularly obvious.

    What on earth makes you think this effect is limited to technology?

    “Fail” is just a specific case of “change”, and “too big to fail” is merely a specific instance of “too big to change”…

  27. Seems to be a corollary to the economic concepts of switching costs and path dependence; once you have those it’s almost tautological (Those who don’t switch early are the ones with high switching costs), though the dynamics and details are interesting.

    One sees a lot of this in database systems, too. I particularly recall one huge critical application that went through multiple failed attempts to migrate away from an increasingly creaky proprietary database.

  28. And of course, that application only became more difficult to maintain as other users of that database did migrate away, leaving the vendor to spread costs over a userbase that was shrinking and losing leverage (because the only ones left were demonstrably those who couldn’t migrate) by the quarter…

  29. I think the “too painful to convert/replace/update” point is a bit like the event horizon of a black hole (and in fact the large legacy systems that tend to suffer from this problem frequently resemble black holes in many other ways such as absorbing endless resources with no apparent effect, failing to provide external information …) .

    Hence I think the term you are looking for is “complexity event horizon” or (for CVS projects) cruft event horizon.

    The one difference between a complexity event horizon and a true black hole event horizon is that complexity event horizons can vary with the amount of resources available to the project. For example, at $dayjob we’ve seen some database performance issues that nearly became complexity event horizons because, as a small startup, we struggled to get the resources (time, servers, manpower) to do the replacement. A better funded organization could have easily funded the changeover, however.

    OTOH, said experience confirms the theory that resource limitation can be a great way to inspire code/process optimization. Our solutions to the performance issues we’ve hit have been way better and will scale to far larger user counts than what a better funded operation might have done because the better funded group could just have thrown a few more servers at the problem and solved it that way.

    1. >Hence I think the term you are looking for is “complexity event horizon” or (for CVS projects) cruft event horizon.

      Ah. “Cruft event horizon”. I like that. Less unwieldy than Ken’s attempt.

  30. It is the opposite of low hanging fruit, so “high hanging fruit” perhaps. Or perhaps since only tall animals can reach the high hanging fruit, we should invoke the new law of reverse engineering: “Giraffes never go hungry.”

    I think someone pointed out that this phenomenon is pretty prevalent in evolution, and it is actually a fairly good argument against “intelligent design”. Evolution gets itself dug into holes that are hard to get out of.

    My favorite example is the mammalian eye, primarily because the ID people like to use it to “disprove” evolution. The mammalian eye, while an amazing piece of technology, has a major flaw in that the optic nerve is connected backward, and so it has a blind spot.

    The position of the optic nerve is kind of fixed as part of the core infrastructure, and so extremely hard for evolution to fix. All the other peripheral stuff is optimized, and niche optimized too. But that one big goof up at the beginning can’t be optimized away without starting from scratch.

    It is also interesting how the brain hacks around the problem so that we aren’t even aware of the blind spot, but uses interpolation and our binocular vision to compensate for it. Which to me is a lot like the hacks we programmers put in place to compensate for architectural flaws that are too expensive to fix properly.

    For those interested in the flaw in the mammalian eye see this very interesting article:

    http://theness.com/neurologicablog/index.php/the-not-so-intelligent-design-of-the-human-eye/

  31. I had to solve various Poettering problems to get Bluetooth audio working…and it does, using a bunch of bash scripts that interact with bluetoothd and pulseaudio over dbus. I wrote them years ago and mostly never looked back once pulseaudio was capable of supporting these devices at all. Once you wrap up dbus in a simple CLI utility, all the dbus interactions can be made nice and Unix-y, although it really bears more resemblance to a REST API than to Unix.
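
    (For a feel of the raw layer being wrapped, here is the kind of invocation involved; this one just asks the bus itself what names are registered on it:)

        dbus-send --system --print-reply \
            --dest=org.freedesktop.DBus /org/freedesktop/DBus \
            org.freedesktop.DBus.ListNames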

    That’s not to say there aren’t still problems. One of the things the shell scripts do is babysit the bluetoothd and pulseaudio processes, and restart them when they crash or become otherwise unable to transport audio (taking every application that depends on them down with them). bluetoothd and pulseaudio run under different uids, so each one gets a separate babysitting script running under the matching uid…which have to coordinate with each other. Now I have two problems.

    After the first two or three years of trying to use the GUI applications provided by upstream for bluetoothd and pulseaudio, I gave up trying to solve their ever-shifting dependencies. The dbus API underneath is stable and working, so I just use that.

    I also gave up on trying to have a static audio configuration with the upstream tools. This does not mean I accepted a dynamic audio configuration–it means the babysitting daemon literally polls pulseaudio to see if its configuration has diverged from what I specified (dbus notifications from pulseaudio, crashes, and hangs are part of the possible configuration divergence, so these checks can only be done by polling from a separate process with a timeout), and changes it back when it has. If the changes don’t stick, my daemon applies SIGKILL in the hopes that a new pulseaudio process will be more cooperative than the old one–or at least more silent, since the only options I can tolerate are “exactly what I want” and “total silence from all output devices.”

    The one problem I haven’t been able to solve–even as badly as the solutions above–is that the upstream tools provide no way to configure a device that has not been discovered yet–a glaring omission for a world where new devices pop up on system buses all the time, and are automatically added to the audio stack by pulseaudio. It would be nice to specify a policy like “all bluetooth devices should have 1:1 output gain” or “all unknown non-bluetooth devices should be muted by default” or “all unknown devices with both digital and analog output ports should use the analog ones by default” without race conditions and without putting custom C code inside the pulseaudio process.

  32. A friend suggested something that I don’t think is exactly applicable, but it may be a degree or two of separation from something that is, so for that sake:

    Army chocolate. The more you chew it, the bigger it gets.

    On that note, riffing off Jessica: giraffe code.

    (If the barrier between too mangled to migrate and migrated is called a “cruft event horizon”, would using reposurgeon to get it done anyway be Raymond radiation?)

  33. “My favorite example is the mammalian eye, primarily because the ID people like to use it to “disprove” evolution. The mammalian eye, while an amazing piece of technology has a major flaw in that the optic nerve is connected backward, and so it has a blind spot.”

    @Jessica Boxer: And when the ID people try to make excuses for this, be sure to point out that the octopus has eyes without this flaw. Obviously, there are two Intelligent Designers, one a bit more intelligent than the other.

  34. Ah, the great systemd debate. I actually got unwittingly involved in the Debian init wars: someone on the mailing list had cited a rant of mine about systemd’s apocalyptic anti-Unix-ness.

    But the writing on the wall is this: in the very near future your choices will be to adopt systemd, assume the responsibility of maintaining and supporting non-systemd workarounds in major software, or watch major software break. And systemd solves major problems. For example:

    > I had to solve various Poettering problems to get Bluetooth audio working…and it does, using a bunch of bash scripts that interact with bluetoothd and pulseaudio over dbus. I wrote them years ago and mostly never looked back once pulseaudio was capable of supporting these devices at all. Once you wrap up dbus in a simple CLI utility, all the dbus interactions can be made nice and Unix-y, although it really bears more resemblance to a REST API than to Unix.

    > That’s not to say there aren’t still problems. One of the things the shell scripts do is babysit the bluetoothd and pulseaudio processes, and restart them when they crash or become otherwise unable to transport audio (taking every application that depends on them down with them). bluetoothd and pulseaudio run under different uids, so each one gets a separate babysitting script running under the matching uid…which have to coordinate with each other. Now I have two problems.

    systemd can babysit system service daemons, restarting them as necessary and even tracking their dependencies, all without ugly shell script hacks. You just specify in the unit file what the service depends on, the way it should be monitored, and the restart policy and systemd takes care of the rest.
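
    A minimal unit file (service name and paths invented for illustration) is all it takes:

        # /etc/systemd/system/exampled.service -- a hypothetical service
        [Unit]
        Description=Example daemon
        Requires=dbus.service
        After=dbus.service

        [Service]
        ExecStart=/usr/bin/exampled --no-fork
        Restart=on-failure
        RestartSec=2

        [Install]
        WantedBy=multi-user.target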

  35. > OTOH, said experience confirms the theory that resource limitation can be a great way to inspire code/process optimization. Our solutions to the performance issues we’ve hit have been way better and will scale to far larger user counts than what a better funded operation might have done because the better funded group could just have thrown a few more servers at the problem and solved it that way.

    Dick Hamming has observed in “You and Your Research” that great research tends to come out of poorly funded, spartanly equipped groups. It works the other way too: if a company or division builds a large, impressive-looking downtown hi-rise and calls it their “Center for Excellence”, RUN.

  36. When an application gets too big to grow or change, it is like a tree that has become rootbound. How about “root-bound”? Trees benefit from taking out some of their root mass every so often so they can continue to grow.

    “Muscle-bound”, the phenomenon where body-builders trash their bodies and are all stiff and sore and can’t move.

    We already have bit-rot, how about bit-bound? Held back by pre-existing bits.

  37. systemd has lots of features that would make sense as stand-alone Unix utilities (indeed, many of its functional components were stand-alone Unix utilities). Some of the new ones, e.g. the ability to spawn and manage daemons within cgroups, are also quite useful (and have been implemented independently of systemd). What’s not useful is building all of these into a single process that doesn’t work if it’s not PID 1, and doesn’t quite emulate the behavior of its predecessor in legacy mode.

    It makes sense to hardcode in init’s C code the relatively uncontroversial kill/sync/umount sequence during a reboot…except even that tiny code fragment is controversial. I never reboot a machine when I’m not in a serious hurry, so when I say “reboot” I want bootloader code executing before the “t” is echoed back from the console. Calling kill or sync first is incorrect given that requirement. This is easy to fix in traditional init scripts, but requires a successful build and install of a new systemd binary (possibly along with all its dependencies)–and systemd contains hundreds of such “non-controversial” decisions.
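
    (The behavior I actually want is a one-liner against the kernel interface; a sketch, dangerous by design:)

        #include <sys/reboot.h>

        int main(void)
        {
            /* Straight to the bootloader: no SIGTERM sweep, no sync(),
             * no umount.  Needs CAP_SYS_BOOT; unsynced data is lost. */
            reboot(RB_AUTOBOOT);
            return 1;   /* reached only if the call failed */
        }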

    Based on the history with the other FDO projects, I assume that the whole thing will collapse under its own weight in five years, and everyone with sanity left over will exit the scenario. That will leave the insane to run the asylum^Wproject, and try again with a “this time for sure” reattempt to implement the same broken model with incompatible persistent storage (and therefore no upgrade path that is less effort than reinstalling from scratch). That means in 2029 or so we should see systemd 3 running GNOME 6, and it won’t be compatible with anything written today.

    If there are going to be init wars, the only way to win is not to play. I pulled all the system-level stuff my applications care about into a monitor daemon that is part of the applications themselves. This means I no longer assume that server daemons, storage devices and networking are set up for me when my application starts–instead, I tell the init-system du jour to run one daemon which does everything I care about, starting with setting up a root directory and populating it with writable filesystems, and providing not just the application but also all the infrastructure like remote access and backups and cgroup resource management.

    Under my scheme the init daemon–any init daemon–is blissfully unaware of anything I care about, and I’m mostly unaware of whatever it wants to control. Or at least it will be until I have to set up a shell script to pepper it with dbus queries and send it SIGKILL every now and then…

  38. > systemd can babysit system service daemons, restarting them as necessary and even tracking their dependencies, all without ugly shell script hacks. You just specify in the unit file what the service depends on, the way it should be monitored, and the restart policy and systemd takes care of the rest.

    This is true, but irrelevant. We can babysit system service daemons, restart them as necessary, and even track their dependencies with a simple stand-alone CLI utility written in any language you consider beautiful. The unit files are not expressive enough to detect all the feature interactions involved between these two relatively simple dbus services, so we’ll still have to code failure detection separately.

    systemd is not required for this, and can do no better. This particular systemd feature is just an incomplete reimplementation of the ugly shell scripts in C code that FDO has peed on^H^H^H^H^H^H^Hblessed as a standard.

    I’d rather have a high-reliability audio stack than a high-availability one. Anyone can make a HA system by putting a LA system in an explosion-resistant box with a watchdog timer on it. We don’t need to rewrite init to build those.

  39. > This is true, but irrelevant. We can babysit system service daemons, restart them as necessary, and even track their dependencies with a simple stand-alone CLI utility written in any language you consider beautiful.

    Why not just have the init script run the daemon in the foreground and restart it whenever it dies? Or have a daemon consisting of a small part which can restart the larger part whenever it dies until told to stop.
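
    Even something this dumb covers a lot of ground (daemon name invented):

        #!/bin/sh
        # Run the daemon in the foreground; restart it whenever it dies.
        while :; do
            /usr/sbin/exampled --no-fork
            logger "exampled exited with status $?; restarting"
            sleep 1
        done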

    If an init script running something in the foreground prevents the init process from continuing, maybe that is the problem to be solved. (For what it’s worth, Ubuntu’s init script for lightdm seems to run in the foreground based on my experience with it. Ubuntu uses upstart. I haven’t done any analysis beyond that.)

    Dependency management is another thing (though maybe by the same philosophy, each service’s init scripts should start the services they depend on), but IMO an init system should be able to assume that any service it starts will stay started without it having to do anything or monitor any processes. If anything, a distinction should be made between one-off tasks (e.g. ifup) and running services.

  40. > I think you can see this kind of adverse selection effect in survivals of a lot of obsolete technology.

    Parts of downtown Boston are still scarred by 1960s architecture, losing much of their character in a way that time has done little to repair. It would be one thing if the offending buildings were small business buildings or even medium-size corporate hi-rises. But what they are is massive, sprawling government complexes…

  41. On the systemd topic: Is there an actual citation for Lennart dismissing the principles of Unix outlined in TAOUP? All that comes to my mind presently is his post on debunking systemd “myths”, where he is quite clearly in favor of regarding it as upholding the principles of Unix rather than the opposite. (And I highly recommend people read it, among his other blog posts. Even if you end up disagreeing with them, having some informed disagreement over systemd is much better than the usual ill-informed revulsion to it.)

  42. > This is true, but irrelevant. We can babysit system service daemons, restart them as necessary, and even track their dependencies with a simple stand-alone CLI utility written in any language you consider beautiful.

    No, we can’t. Until systemd came along, there was no general, correct way under Linux to monitor and control a system service (restarting it if it dies, shutting it down, or restarting it on user command). There are many reasons for this. One is that querying the process table and then sending a signal to a given PID is inherently racy. In the interval between when you queried and when you signalled, the process you were trying to signal could have died and its PID been reused by an unrelated process. It’s not common in practice, but that doesn’t mean it can’t happen.
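
    The shape of the race, with an invented daemon name:

        # Query, then signal: the gap between the two lines is the race.
        pid=$(cat /var/run/exampled.pid)
        # ...exampled can die here, and the kernel can hand its PID to
        # an unrelated process before the next line runs...
        kill "$pid"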

    Another reason is that if you have two normal processes, a daemon and a monitor, and you send SIGKILL to the monitor, the daemon is now unmonitored and the system could be in an inconsistent state. Special code would be required in the daemon to watch for its monitor’s death and take action accordingly. (Quis custodiet ipsos custodes?) If your whole system is booted with this arrangement under a single process monitor, killing the monitor process could render it unusable, but not shut down.

    Systemd resolves these issues in a couple of ways. First of all it is PID 1, so it can’t be forcibly killed by user action. This makes it uniquely suited as a monitor process. It also means that it will receive SIGCHLD even for daemons that do the double-forking trick.

    Secondly it uses cgroups to identify services. Once a process is put into a cgroup, it can’t get out nor can any of its children. So by killing entire cgroups, systemd can ensure all processes associated with a service are shut down. You don’t *need* systemd for this (yet), but having this functionality in PID 1 has an enormous upside as mentioned above. In the future, managing the cgroup hierarchy will be unified under a single controlling process; the intent is for systemd to be this process.
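
    That is what makes killing a service reliable in a way that signalling a remembered PID never was (unit name invented; the cgroup mount point varies by distribution and cgroup version):

        # Signal every process in the service's cgroup, double-forked or not:
        systemctl kill --signal=SIGTERM exampled.service

        # The kernel's view of the same thing: the unit's full process list.
        cat /sys/fs/cgroup/systemd/system.slice/exampled.service/cgroup.procs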

    So yes, systemd solves problems with processes and daemons that many people didn’t realize they had. Linux was in desperate need of a modern, enterprise-grade init system, and now it has one. As for being anti-Unix, systemd was explicitly modelled after SMF in Solaris.

  43. > My favorite example is the mammalian eye, primarily because the ID people like to use it to “disprove” evolution. The mammalian eye, while an amazing piece of technology, has a major flaw in that the optic nerve is connected backward, and so it has a blind spot.

    What is important is that octopus eyes are without the blind spot, because theirs was convergent but independent evolution. Though I am not sure this design doesn’t have its own drawbacks…

  44. “So yes, systemd solves problems with processes and daemons that many people didn’t realize they had. Linux was in desperate need of a modern, enterprise-grade init system, and now it has one. As for being anti-Unix, systemd was explicitly modelled after SMF in Solaris.”
    Eric – what IS your view on systemd?
    While there is a learning curve, and the way it has evolved was perhaps not as openly discussed as it might have been, it DOES actually do a particularly good job! What remains a problem is people fixing other areas of the infrastructure without bothering about compatibility, and with an arrogance of knowing best. Many of my pet hates do now seem to have solutions which sidestep that arrogance and restore a desktop experience I can live with …

    1. >Eric – what IS your view on systemd?

      I’ve already said in public that I’m suspicious of it. Some of the signs of code and architecture bloat seem to be present, and the people attacking it are making a case that seems plausible. But I’m reserving judgment until I have time to study it more closely.

  45. “I’m reserving judgment until I have time to study it more closely.”
    That was what I thought I’d seen, Eric.
    I’m certainly not driven to condemn it, and I do think there is room for improvement, but on its own it does not get in the way enough to cause me to worry about it. Systems now do seem to boot a lot faster. Other things irritate a lot more :)

  46. I used the word “change” earlier, but that doesn’t really have the right connotations, so let me try again.

    If you want a term that describes decisions in such a way that everybody can understand that they are driven by (possibly questionable) economic thinking, you could do worse than “too big to fix.”

  47. “Systems now do seem to boot a lot faster. Other things irritate a lot more :)”

    That could be down to any number of factors. Ubuntu boots plenty fast, and it uses Upstart, which is, as I understand it, significantly closer to the SysV-init model than systemd (though I believe it does run init scripts in the background, allowing them to run simultaneously on multiple cores). It’s also possible that desktop systems install less stuff that runs at startup in the first place, or that each individual package has improved startup time, or that hardware performance improvements are a driving factor.

  48. I switched my desktop to systemd a couple weeks ago. Didn’t notice any difference in boot speed. So, if some people notice a difference in boot speed, know that there are others who don’t.

    The boot process is suddenly more opaque.

    Learning to set systemd up was quick and easy, but also made me think wtf. In the Unit files, there are “sections”, but no apparent reason for these sections. And the documentation has not been keeping up with development.

    For doing the things Jeff Read just described, I’ll grant, systemd is ok. And if it stuck to being init, and doing those things, then it could be fixed, or forked and fixed.

    But it keeps dragging in dependencies.

    Now, this may not be systemd’s fault all by itself. There is a cluster of people around systemd: systems integrators, Gnome, KDE, the whole freedesktop *Kit crowd. By installing systemd, now suddenly I have to do some special firewall magic to run my own servers? Yuck! I mean, WHAT? I can’t just open a port and run a server? That is why ports 1024 and below are reserved to root; anything above that can be treated as untrusted. It is just complexity on complexity. Tight coupling. Failure to follow the Law of Demeter. Just that one thing alone blows my mind and gives me a headache.

    So, if Lennart says that of the 60 daemons in systemd, they are all independent of each other, I want to know this: who is it that is coupling them so tightly together that I can’t install GNOME3 without also having a firewall running and IPv6, and a few other goodies? Who is coupling everything together so tightly?

    I’ve been doing some web searches. I remember lots of quotes from Lennart that were quite arrogant. Most of them seem to be scrubbed from the web now. I suspect he is doing damage control.

    Most of what PulseAudio does should have been done in the kernel; in fact FreeBSD did just that and audio works sweet.

    Compiling systemd so it DOESN’T pull in a bunch of dependencies requires knowledge and effort. The default does pull you into the “whole enchilada”. And with people taking the path of least resistance, guess what happens… suddenly my desktop has a firewall just like my Windows boxes. WTH!

  49. When Lennart said that BSD was irrelevant to “desktop developers”, this was in the context of his interview about systemd and pulseaudio. One can be forgiven for thinking that Lennart is opposed to portability, not just for systemd (where that actually makes sense) but for the entire desktop ecosystem. He was definitely describing embrace and extend in his 2011 interview:

    http://linuxfr.org/nodes/86687/comments/1249943

    Also, his comments about the OSS and ALSA audio stacks showed such profound ignorance of those technologies, it is… wow. Just wow. FreeBSD uses OSS, and they did a very minimal extension to the interface, kept the API, and it allows all sorts of things to play simultaneously, very sweetly.

    Lennart can sound very reasonable, except to people who know that many of his facts aren’t.

  50. > Also, his comments about the OSS and ALSA audio stacks showed such profound ignorance of those technologies, it is… wow. Just wow. FreeBSD uses OSS, and they did a very minimal extension to the interface, kept the API, and it allows all sorts of things to play simultaneously, very sweetly.

    But not with timer-based scheduling. Did you actually read Lennart’s interview and his relevant blog posts to understand why this is important? In something like OSS or ALSA, latency requirements in the form of buffer and fragment sizes are per sound card. Getting this wrong has costs: too little latency and the CPU will be nailed with interrupts, draining CPU time and battery life. Too much and you introduce audio lag, which isn’t terrible in a music player app but is a killer for professional audio, or even Guitar Hero style rhythm games. With Pulse, you can specify latency constraints per application, and Pulse will compute the optimum buffer/fragment sizes for all its clients.
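
    Concretely, each client hands Pulse its own latency target through the buffer attributes. A sketch using the simple API (stream names invented, error handling elided):

        #include <pulse/simple.h>

        pa_simple *open_playback(void)
        {
            /* Ask for ~50 ms of playback latency for this client only;
             * other clients can request tighter or looser figures. */
            pa_sample_spec spec = { .format = PA_SAMPLE_S16LE,
                                    .rate = 44100, .channels = 2 };
            pa_buffer_attr attr = {
                .maxlength = (uint32_t) -1,   /* let the server choose */
                .tlength   = pa_usec_to_bytes(50 * 1000, &spec), /* 50 ms */
                .prebuf    = (uint32_t) -1,
                .minreq    = (uint32_t) -1,
            };
            int error;
            return pa_simple_new(NULL, "demo", PA_STREAM_PLAYBACK, NULL,
                                 "playback", &spec, NULL, &attr, &error);
        }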

    If you don’t like these design choices, well, there’s always JACK :) Unlike systemd, Pulse is designed with a rather limited scope and goal: consumer grade audio.

    Also, OSS does way too much stuff in the kernel. For example you do not want to do mixing or resampling in the kernel, especially when floats are involved; there’s a hard “no floating point code” rule for Linux, which is why OSSv4 is distributed separately. Pulse moves this into user space where it belongs. That’s kind of a theme with Lennart’s code: systemd recently gained terminal parsing and rendering capabilities. The idea is to remove the hacky VT code from the kernel and have a user-space virtual terminal implementation inside systemd. Since systemd now handles login and user sessions, this makes all kinds of sense.

    One thing I’ve noticed: Lennart seems to have thought through the technical issues more than most of his haters.

  51. All I can say about PulseAudio is that it works less well than ALSA, and is significantly more opaque. Few non-developers I know have an opinion of it that’s not more than a little negative.

    My roommate has been fighting it; he says that it’s like sitting in the left seat of a 737 looking for the altimeter, and all you need is a Cessna…and heaven help you if you learned on a Cub.

  52. A few thoughts:
    My professional job involves enterprise software. (In this context I mean, among other things, that when our stuff breaks, our customers can be losing $1M/h or more due to business outages, and $10k/h is common). I’ve been a Linux proponent for over 10 years. Even though almost all of my machines are GNU/Linux, it certainly isn’t enterprise-grade. Here are some examples of systemic or architectural issues which I deal with just on my desktop on a regular basis:

    1) Strange statefulness of core system management software, such as DHCP. On occasion, I’ll have some physical configuration problem with my setup. Perhaps a power outage has taken down the network switch. Perhaps I didn’t quite plug the Cat5e jack into the socket far enough. In any case, once I’ve fixed the problem, the system doesn’t attempt to grab an IP address. It simply sits there like a lump. It’s even worse when you’re trying to diagnose the issue and nothing you change fixes it. This has happened to me and my coworkers enough that I have a workaround: I kill the dhcp process and start it again.

    2) Inability to query internal state of many system daemons. It seems like either I need to set the verbosity level to “audible disk IO”, or I have no idea what’s going on with a system process. In the above example of dhcp, printing out the logs wouldn’t tell me anything. And, I don’t want to have a huge amount of logging going on routinely. Some daemons will provide either a client utility or some magic signal you can send to the process to get information (perhaps a current state dump to the log file), but nothing which is actually standardized. Running “service dhcp status” would only tell me “Running”. GUI design has the concept of Model-View-Controller. Here we are lacking any standardized way of having a view into what’s going on.

    3) Lack of a comprehensive toolset to deal with non-tabular data. The classic Unix command set of grep/awk/sed/cat/head/tail/etc is wonderful for dealing with single-table text-file data. However, this isn’t the 1970s any more. We frequently have to deal with more complex data structures: at the very least, hierarchical and multi-table data layouts. I’ve tried to parse large quantities of XML-formatted data representing a hierarchical relationship, in some cases tens or hundreds of megabytes worth. Almost nothing allows me to work with that data in a useful fashion in a similar command-line mode (yes, I eventually managed to use some xquery/xpath utilities to break stuff down into a useful format; see the sketch after this list) other than running stuff through commands to temporary files and manually processing the data in several independent steps. SQL-style databases can’t easily be piped between multiple processes. SQLite will let you do some stuff to a temporary file, but not as a part of a data stream. This needs more architecture and design behind it.

    4) Lack of testing of much of anything on an unreliable system. Seriously, remote-mount the root file system via NFS on a Linux machine and send all of the network traffic through a device which adds a random amount of jitter to the packets (say average of 100ms), possibly with 5% packet loss as well. Then fire up a KDE desktop and start checking your email. Every layer in the system seems to pretty much rely on everything underneath working perfectly. Finding the cases where there is actual graceful error handling is more a surprise than the reverse.

    5) Configuration management. I usually run the LTS versions of Ubuntu, because I want extra stability. I usually have to manually tweak the configurations a little in order to get things to work right. For example, I might have 2 subnets attached to 1 computer with 2 NICs with 3 network ports. Making sure that the right firewall rules get applied correctly matters, so I tweak network config files. Likewise for NFS mounts and a handful of other things. So when I perform a distribution upgrade, why does the distribution upgrader note that I’ve modified the default config file and ask me which version I want to keep? Why doesn’t the configuration manager as a part of the distribution have enough knowledge to be able to figure out semantically what I’ve changed and carry those changes forward? As long as the configuration file is “correct” for the application, all appropriate settings should be handled without user intervention.
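
    (A sketch of the kind of xpath pipeline point 3 alludes to, here using xmlstarlet; file, element and field names are invented:)

        # Flatten one field per record out of a big XML dump, pipe-friendly:
        xmlstarlet sel -t -m '//record' -v 'host' -n big-dump.xml |
            sort | uniq -c | sort -rn | head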

    In conclusion, I don’t know if systemd is a good idea, a bad idea, or something in-between. I do know that I spend way too much time on stuff which should Just Work and look forward to anything which enables that to happen.

  53. “In conclusion, I don’t know if systemd is a good idea, a bad idea, or something in-between. I do know that I spend way too much time on stuff which should Just Work and look forward to anything which enables that to happen.”
    That just about sums it up nicely. KDE and Gnome used to be nice stable platforms until the ‘it’s too old fashioned we must fix it’ brigade got at them. Do I want my windows to bounce around the screen? NO. Even on Windows the first change was to kill the fading in. I just want a window to open as fast as possible.
    I have another fix on audio – parallel processing ;) I have a TV as the third screen and the desktop has no audio enabled. THAT really upsets SUSE 13.1, but I am sure half the complaints about systemd are probably more to do with the bloat the distributions add than with the software itself. It would be nice to have a simple vanilla distribution, and I have tried a few others, but at the end of the day I just need to get work out, and fighting everybody else trying to improve things does not help productivity at all! Getting my desktop back to working the way I have used it for 20 years is more important than any of the crap that has been added! And the fact that ‘classic’ modes do get added back eventually probably indicates I’m not alone?

  54. > All I can say about PulseAudio is that it works less well than ALSA, and is significantly more opaque. Few non-developers I know have an opinion of it that’s not more than a little negative.

    While it has its rough edges, from a user’s perspective, it’s a massive improvement on the status quo and, while the configuration API isn’t as clear as it could be, the most controllable of the audio systems I’ve used. In particular, network transparency (such as multicasting an RTP stream) and easily rerouting streams in progress are bits that ALSA can’t handle at all, period. WebOS actually used Pulseaudio as its core audio system, moving streams between ear speaker, loudspeaker, jack, and Bluetooth by simply issuing sink commands, and it worked perfectly.

  55. Yeah, it’s controllable. Hence the 737 instrument panel effect. If you can’t grasp the control mechanisms, though, you can’t control it effectively.

  56. > Yeah, it’s controllable. Hence the 737 instrument panel effect. If you can’t grasp the control mechanisms, though, you can’t control it effectively.

    I know virtually nothing about the C API, but the CLI and GUI tools are very easy to use. I’m quite willing to believe that this is an unusual but not unprecedented case of writing good wrappers around a lousy API.

  57. Speaking of PulseAudio, I’ve moved a SoundBlaster Live! card from machine to machine through upgrades because it was a great card, almost certainly better than on-board sound. Recently I yanked it out because PulseAudio/KDE decided to make my on-board sound the primary sound device instead of my audio card, and I hated having to hit the thing over the head with a hammer every time I rebooted in order to get it to play sound again. Reconfiguring the hardware was easier than trying to get the software configuration right.

  58. “PulseAudio/KDE decided to make my on-board sound the primary sound device instead of my audio card,”
    You just disable the on board hardware and the secondary card is all it finds. Or at least that is all I’ve had to do.

  59. You assume that you *can* disable the onboard sound card, Lester.

    As for PulseAudio (and more than a few other things I’ve been forced to look at), it looks like configuration is more and more becoming a case of writing a section of code that the main program exec’s.

  60. >WebOS actually used Pulseaudio as its core audio system, moving streams between ear speaker, loudspeaker, jack, and Bluetooth by simply issuing sink commands, and it worked perfectly.

    That’s not as much of an endorsement as you might think. I have a pair of Touchpads – they’re both Android now, but they spent a lot of time running WebOS. ISTR audio on WebOS being screwy, as in it would regularly get all garbled requiring you to run a (community developed) utility to restart audio. Except that regularly didn’t work either, requiring a reboot in order to get your sound working properly again. (Feel free to correct me if my memory is unreliable, of course.)

  61. > ISTR audio on WebOS being screwy, as in it would regularly get all garbled requiring you to run a (community developed) utility to restart audio. Except that regularly didn’t work either, requiring a reboot in order to get your sound working properly again.

    I only used WebOS on my original Pre, and I never had any issues with audio.

  62. esr> At the limit, there may be some repositories that never get converted because the concentrated pain associated with doing that overwhelms any time-discounted estimate of the costs of using obsolescent tools – or even the best tools may not be good enough to handle their sheer bulk.

    A test for this just occurred to me: Your theory predicts that if there are any SCCS repositories remaining, their average size should be truly gigantosaurian. In particular, it should exceed the average size of remaining CVS repositories because SCCS is even older. (At most, because of Moore’s law, the relevant measure of size should be indexed to the amount of core and disk space typically available when the projects were started. SCCS should have hit the project-size wall before CVS did.)

    Do you happen to know of any remaining SCCS repositories and their respective sizes?

    1. >Do you happen to know of any remaining SCCS repositories and their respective sizes?

      I do not. It’s been a very long time since I heard of one. I believe SCCS is genuinely extinct.

      One reason this is plausible is because most SCCS repos moved to RCS more than 20 years ago. I maintain code descended from the converter they used; it was a csh script at the time, but is now in Python.

      http://www.catb.org/esr/sccs2rcs/

  63. On a slight tangent to the topic of this thread, Eric: Over in the G+ thread about cvs-fast-export, I overheard you say that “there aren’t any canned solutions in C for sorting linked lists”. As it happens, I’m currently playing around with glib, which does provide a canned sort routine for its singly-linked lists. Have you run into any bad experiences with Glib that are keeping you from using it now?
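
    For instance, something like this works (a toy example with integer payloads; real code would sort structs):

        /* build: gcc demo.c $(pkg-config --cflags --libs glib-2.0) */
        #include <glib.h>
        #include <stdio.h>

        /* GCompareFunc: negative, zero, or positive, qsort-style. */
        static gint cmp_int(gconstpointer a, gconstpointer b)
        {
            return GPOINTER_TO_INT(a) - GPOINTER_TO_INT(b);
        }

        int main(void)
        {
            GSList *list = NULL;
            list = g_slist_prepend(list, GINT_TO_POINTER(3));
            list = g_slist_prepend(list, GINT_TO_POINTER(1));
            list = g_slist_prepend(list, GINT_TO_POINTER(2));

            list = g_slist_sort(list, cmp_int);   /* the canned sort */

            for (GSList *p = list; p != NULL; p = p->next)
                printf("%d\n", GPOINTER_TO_INT(p->data));

            g_slist_free(list);
            return 0;
        }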

    1. >Have you run into any bad experiences with Glib that are keeping you from using it now?

      Not bad experience, but lack of experience. I’ll read the documentation.

  64. Since Mycroft Jones mentioned Wayland, I’ll ask this question here: you’ve said a couple of things against it. Do you see Canonical’s Mir as a superior X replacement?

    1. >Do you see Canonical’s Mir as a superior X replacement?

      Insufficient information.

      I will say that I would be doubtful of any effort to replace X that doesn’t have Keith Packard and Jim Gettys in it. I think they understand the mistakes and false turns of the design better than anyone else.

  65. > I will say that I would be doubtful of any effort to replace X that doesn’t have Keith Packard and Jim Gettys in it. I think they understand the mistakes and false turns of the design better than anyone else.

    That means that X has a bus factor of 2 — and they’re not young at that. That’s quite unacceptable for such an essential piece of software. All the more reason for X to be replaced with or without their involvement.

    Anyway, the case that X was designed for (big iron servicing one or more external graphics terminals) doesn’t really exist anymore. Most Unix installations have a fast local display including a GPU, and most users, spoiled by exposure to Apple hardware, demand perfect flicker-free tear-free UIs on said display. That’s a case that X11 can’t even touch; Wayland is optimized for that case. So we don’t have to totally replace X while avoiding its design mistakes. We simply have to design a display server that meets today’s needs. And Wayland is the closest thing to that.

  66. Nb. in the case of switching (changing) version control systems there is always the nuclear option of starting the new SCM from the current version and dropping history… or, in the case of Git, leaving the history in a historical repository, which can be “attached” to the present-work repository by way of grafts or replaces.

  67. History is everything in SCM. Abandoning it is HIGHLY undesirable, managing it with two different VCSes only slightly less so, and avoiding both is itself what motivates most of these conversion tools.

  68. @Jeff Read: in Git, grafts (or their modern instantiation, replaces) can seamlessly join historical and current-work repositories, as if they were a single repository.
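
    Roughly like so (paths and refs invented; this assumes the current repository was started from a snapshot of the old one’s tip):

        # Make the old history reachable from the current-work repo:
        git remote add old-history /srv/git/project-historical.git
        git fetch old-history

        # Pretend the old tip is the parent of our current root commit;
        # log, blame and bisect now see one continuous history:
        git replace --graft $CURRENT_ROOT $OLD_TIP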

  69. One of the other benefits that systemd brings to the table is standardized D-Bus APIs for basic system tasks, and even logging in to the system. There is even a project called systembsd — not a reimplementation of systemd for BSD, but a collection of daemons that expose the same APIs as the systemd suite, making the systemd APIs a universal standard.

    It’s a hard-learned lesson that developers want APIs to do common tasks — not the usual Unix ad-hockery of configuration files, shell scripts, command line utilities, pipes, and ad-hoc protocols. APIs can be audited and verified and are uniform across distributions and kernels.

  70. In some ways I think the real precursor for systemd is not PulseAudio but BitKeeper. It provided such a compelling rethink of a long-running problem that adoption was almost a no-brainer, but social/political/structural issues made it impossible to commit to it long-term.

    Perhaps Linus will tire of the systemd squabbling and rewrite 80% of its functionality into a much more adoptable system…

  71. (That said, the barriers to systemd are obviously not as large. As Debian has adopted it as default it may well be Good Enough, or at least be able to evolve into same.)

  72. By the hypothesis presented here, the very last System/360 machines to go dark — long after all of us are dead — will be the ones running Lovecraftian horrors we’ve all heard the stories about: the ones that have mutated into something quite different from any semantics suggested by their original COBOL source.

    Of course some of us may well believe that with strange aeons even death may die, but even then there will be mission-critical S/360 apps…
