Unix != open source

Yesterday a well-meaning hacker sent me a newly-recovered koan of Master Foo in which an angry antagonist berated Master Foo for promoting an ethic of open-source software at the expense of programmers’ livelihoods.

Alas, I knew at once that he had been misled by a forgery, or perhaps some dreadful chain of copying errors, at whatever venerable monastic library had been the site of his research. Not because the economics was wrong – Master Foo persuades the antagonist that his assumption is in error – but because the koan conflates two things that were not the same. Actually, at least three things that are not the same.

Eighteen years into the third millennium, long after the formative events of Master Foo’s time, many people fail to understand how complex and contingent the relationship between the Unix tradition and the open-source ethos actually was in the old days. Too readily we project today’s conditions backwards in a way that impedes understanding of history.

Here’s how it was…

There are at least three different things one can mean when one speaks of the practice of open source.

The first, and oldest, is an unreflective folk practice of code-sharing. At this stage people just…share code. They don’t worry about licenses because they think of the activity as one that only involves an informal peer network of consenting sharers; there’s no concept in anybody’s mind of having to defend code-sharing, or of any collision with third-party interests. Nor is there ideology about the practice, nor any name for it – there’s just custom and utility. I’ll call this “Level One” or “unreflective” open source.

Another stage is reached when people begin to reflect on open source as a practice and articulate the pragmatic advantages of it. At this point one begins to get folk theory about it – claims like “sharing code is good because it reduces wasteful duplication of effort” or “code sharing is good because other people can notice problems that the author doesn’t.” But these claims are not connected by any generative or prescriptive theory; other than technical conventions about how to pass around code, there aren’t any norms. We’re still at folk-practice here, but folk practice becoming conscious. I’ll call this “Level Two” or “emergent” open source – you have an ethos, but not yet an ethic.

A third stage is reached when prophets try to systematize a theory of open source. What’s characteristic of this stage as distinct from the others is the assertion of strong normative claims attached to an explicit theory of the consequences: “you should share code, and here is why”. Only at this point can one really speak of an open-source “ethic”. I’ll call this “Level Three” or “ideological” open source, because when you get here practice starts to change in response to the developing theory. There are manifestos, and the manifestos matter.

Historically there have been at least two competing theories of open source, one associated with Richard Stallman and the other with me. But for the purposes of this post that distinction is almost completely unimportant; only the time of arrival of Stallman’s theory (1985) and to a lesser extent mine (1998) actually matters much.

The koans of Master Foo address the Unix design tradition, which began around 1969 and reached a recognizably modern form by the early ’80s when it incorporated TCP/IP networking. Right away we can see a question here; the early formative period of Unix long predates public Level Three thinking about open source.

This is why I knew that koan had been forged or corrupted. The Level Three language in which someone could berate Master Foo for promoting an ethic of open source did not yet exist in the legendary age of the early Unix patriarchs. It is true that you can find some evidence of Level Two thinking as far back as Fernando Corbató’s reflections on Multics in 1963, and there are quotes in the early Unix papers out of Bell Labs that suggest it. But we don’t see actual normative claims – full Level Three – until the GNU manifesto arrives in 1985.

Until then there is no concept that code-sharing could be in conflict with programmers making a living, because nobody has proposed that it be done systematically as a replacement for closed source. Master Foo’s antagonist in that supposed koan is anachronistic for early Unix, a back-projection of later concerns. Master Foo would have understood the proposition that source code has a longer expected survival time than object blobs, but not the ethical claim.

Now, I’m not arguing that the development of the Unix tradition and the open-source ethos were completely disconnected. It is a historical fact that the Unix tradition incubated open source, and worth looking at why. I’ve written about some of this before, so the following is a reminder more than a full exposition.

You can’t have open source if you can’t port software between machines – and not just between machines of the same make, but across architectures. It was Unix that made this possibility normal by systematizing the idea that APIs and retargetable compilers could decouple source code from the machine.
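
To make that concrete, here is a minimal illustrative sketch (using today’s POSIX names, which postdate the period described): a tool written against the portable C and POSIX interfaces rather than any one machine’s, so the same source compiles on any Unix and the retargetable compiler, not the programmer, worries about the architecture.

    /* Illustrative sketch only: this program talks solely to the portable
       C/POSIX interfaces, so the same source builds on any Unix,
       regardless of CPU or vendor. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char host[256];

        /* gethostname() is specified by POSIX, not by any one vendor's machine. */
        if (gethostname(host, sizeof(host)) != 0) {
            perror("gethostname");
            return 1;
        }
        printf("hello from %s\n", host);
        return 0;
    }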

The other direction of entailment doesn’t work, though. You can have Unix without open source – and until Linux most of us did, most of the time. Bootleg source tapes and things like the Lions book deepened our understanding but didn’t free us from dependence on closed-source kernels.

It seems highly unlikely that there will ever be another closed-source Unix implementation in the future; the coupling is pretty tight, now. But remember that it was not always thus.

69 comments

  1. For a younger Linux generation, the closed-source Unix wars of the ’80s could be studied like the Punic Wars – interesting to scholars and certainly of historic significance, but the lessons are mostly lost and have little impact on day-to-day modern life.

    The code sharing (Stage I and Stage II) among the practitioners of BSD is one of the antecedents of today’s Open Source world including the almost pre-historic distribution mechanisms of Usenet.

    …!phillabs!sdcsvax!lance

    1. I think we’re at the point where the same can be said of the open-source enlightenment/wars in the ’90s.

      The lore is weak with most modern professional programmers that I meet.

      I was talking to someone recently about how the events of early Twitter and Rails really shaped how most people think about programming today, and they asked me if I realised that “most senior programmers weren’t even programmers when that happened”. That’s when it dawned on me how much of what I take for granted had been lost from the common zeitgeist.

      When you add that to the events of 2007, things really have taken a step backwards. Before 2007 we were genuinely at the point where you could use the standard platforms of the day with entirely Open Source software and have an experience with features at least on par with those using commercial software.

      Then the iPhone happened, quickly followed by the appstore, and now there’s no realistic chance of getting a fully open source system running on today’s standard platforms (handhelds).

      For a while I thought we had a window of opportunity. Most handhelds were marketed such that graphics performance was one of the chief technical barriers for competition. Open Source didn’t have any options. However, I saw a window for a handheld device that used a *colour* e-ink screen. That would lower the barrier for an Open Source graphics pipe to the point where it wouldn’t need an ASIC and produce a platform that had value in terms of battery life.

      I thought that window would close far quicker than it has. Even now there are hardly any major uses of colour e-ink technology in consumer electronics and the battery life situation is still appalling.
      However, in the intervening time another barrier has risen: all the common activities that people want to use their devices for now require access to proprietary services: maps, taxis, chat, food, files, etc. These are almost entirely vertically integrated.

      I still wish there was a way to have a UI that was “me” oriented rather than “app” oriented but the business model of the major platforms is basically about selling you (or taking rent for) a spot on the home-screen for your logo. You can’t build something that competes because, even if you can build the platform, you can’t get the services.

      I’m not sure how we get ourselves out of this low part of the curve but I’d hate to wait another 10 or 20 years to see it improve. It’s the combination of these things that makes it particularly difficult to fix (in no particular order):

      + Programmers don’t understand the non-technical and economic aspects as well as they used to.
      + There are no role models like we had in the ’90s (Eric, Linus, RMS, Tridge, Perens, etc) actively leading a particular campaign to teach programmers, vendors and users about the non-technical aspects.
      + We need more than just competitive software this time: we also need access to the services and (social) networks.
      + We don’t have an open architecture platform like the PC as a base for our effort.
      + There are “better” options: even programmers don’t want to take a hit on the features and immediate utility that their current handheld gives them in order to use something that’s ideologically pure (or even just “more secure” / “more private”, etc).

      1. What about JavaScript and HTML5? I’ve been playing around and it’s pretty much a full-blown development environment. It even works off-line by opening it up in the browser, provided I’m not trying to connect anywhere from the app.

      2. “now there’s no realistic chance of getting a fully open source system running on today’s standard platforms (handhelds).”
        ****
        “However, in the intervening time another barrier has risen: all the common activities that people want to use their devices for now require access to proprietary services: maps, taxis, chat, food, files, etc. ”

        I have to admit, this direction completely surprised me. My priors were that open standards, particularly w.r.t. the Internet, were unstoppable.

        And yet, here we are, back in a proprietary standards world.

        1. >And yet, here we are, back in a proprietary standards world.

          That’s not exactly it. It’s the ties to big back-end databases that are hard to replicate without lots of cash flow that have us stuck in a service oligopoly, not the licensing status of the software.

          Yeah, some of that stuff is closed-source. But we’d have the same problem if it were all open. The moat around Google/Facebook/Twitter’s business is really the cost of their huge server farms.

          1. > The moat around Google/Facebook/Twitter’s business is really the cost of their huge server farms.

            Well one thing to look at is what the huge server farms do, e.g., what does twitter have that usenet didn’t or couldn’t trivially replicate?

            1. The thing that comes to mind is automatic spam filtering, and I suspect that may be the only actually useful thing twitter’s server farms are doing.

  2. And once again I put on my early open source historian hat…

    You’re absolutely correct in that stage III open source didn’t happen till RMS came along. I’m unsure about when stage II happened, though. Obviously, SHARE in 1955 was the genesis of stage I, and by 1959 it had produced what is arguably the first open source operating system in the sense used today, the SHARE Operating System for the IBM 709/7090. But I don’t know enough about the history of the group to know when it reached stage II.

    By the time I got into mainframe systems programming in 1981, stage II was very firmly entrenched, as there weren’t any IBM mainframe sites that didn’t have and use software from the SHARE and (for MVS users) CBT tapes; there was very much a sense of “this is good stuff that will save you reinventing a bunch of wheels, and if you can improve it, please do”. I can’t put my finger on when that evolved into something conscious.

    I doubt Bell Labs or UC Berkeley or MIT were influenced directly by SHARE, but I’m equally certain they were influenced directly by DECUS, and that probably began as a “hey, we need a SHARE for DEC users”. I doubt anyone is still alive who was around during those formative years, though.

    1. Just as a data point: as a university student in the mid ’70s, I was encouraged to contribute to the SHARE tapes, and did make a minor contribution.

    2. And like Tom, in the ’70s there was software for Burroughs users. It came on tape and the first program allowed you to pull any program off the library tape without dumping the entire tape. But the best part was that program helped you add your updated version to the back of the tape, so other users could get the updated code. Copy the entire tape and send it on.

      So much cool, free, and amazingly useful software.

      1. >But the best part was that program helped you add your updated version to the back of the tape, so other users could get the updated code. Copy the entire tape and send it on.

        Fascinating. Did cycles form in the transmission graph?

        1. I’m not 100% sure what you are asking, but let me give this answer.

          About once a year places in Canada would send their latest tape to Queens, places in the US would send them to me. I lived on the US/Canada border. We’d do a merge and take a fresh tape and send it out. Because the tapes had the prior versions you could get a little history and try to decide which version had all the upgrades.

          If it wasn’t easy to figure out, we’d contact the “authors” and see if we could get a consolidated version. We could mail a tape and get a merged version back in a week or so.

          In other cases, we would punt and put the versions out and make a note of what had been done. Sometimes a merge would get done, sometimes not. That meant that multiple versions would float around until someone got time to pull things together, or it became an orphan.

          1. This sounds very much like the CBT Tape, with lots of contributors, maintained as a labor of love for many years by Arnold Casinghino at Connecticut Bank and Trust and now by the retired Sam Golob. It lives on at the cbttape.org site.

  3. I think this is far more a mindset than anything else. I know a dude who made a utility used by exactly one gaming clan. He will not make any money with it. It does not make the clan more competitive in gaming; it mostly just automates some boring housekeeping. It is in Python. And carefully compiled into .pyc, .pyd and .EXE files. Why? Because users are users. Users don’t belong in the code, much like how, for a ship captain, passengers don’t belong in the engine room. In his mind you have no legitimate business reading his code unless you are participating in the project, in which case, if you are good enough, he will share it with you. It is a mindset similar to hiring or job certifications. He does not want other people to send random patches. Maybe they would be in a different coding style than his, or at any rate maybe he would spend more time figuring out whether they are good than writing his own. He also doesn’t really want other people to debug his code. He would maybe spend more time explaining what the code intends to do than doing it himself.

    This is a very common mentality outside programmers. Users are users, don’t unscrew the lid of the device or you lose your warranty, and if you are actually knowledgeable we will hire you when we want to, or train you to work on our device making sure you understand it properly.

    From this angle what you guys had was not only a situation where many users were programmers but also it seems it was programmers who trusted each other to read each other’s code properly, contribute patches in the same style, not ask 143 questions during debugging like what this variable is meant to do, probably use one commonly accepted coding style, and stuff like that. Correct?

    I understand you, and obviously open source is a huge success, I mean, who is even using anything but VLC VideoLan to watch movies on a PC these days? But I also understand that dude. In my field, if I even just blog about a piece of technology, I get seriously stupid questions in email, not beginner-level questions but “low IQ and doesn’t even attempt to understand what is going on” level. What would happen if I posted the code on the blog? Dumbf*cks would try to copy-paste it into their programs and ask even more stupid questions. There are literally untrained 85 IQ imbeciles out there in huge numbers who have no idea about programming but pretend to, and are actually doing contract work building e.g. websites by copy-pasting example and tutorial PHP code together.

    Seems this is the difference. That dude gets enough stupid questions even just from users. He will give me the source code when he is convinced I won’t waste his time.

    1. I mean, who is even using anything but VLC VideoLan to watch movies on a PC these days?

      Me :P I think mpv is a lot better; I will concede to using VLC only for its Blu-ray menu support, but I find the interface painful and not as good as mpv’s.

      VLC is probably fine in the Windows world, but on Linux I find it to be quite lacking.

    2. > This is a very common mentality outside programmers. Users are users, don’t unscrew the lid of the device or you lose your warranty.

      Just as a side note, did you know those warranty conditionals are illegal? (at least in the US) https://www.ftc.gov/news-events/press-releases/2018/04/ftc-staff-warns-companies-it-illegal-condition-warranty-coverage

      > a situation where many users were programmers but also it seems it was programmers who trusted each other

      Well, by reading about that period, it seemed it was more like a common informal convention of just passing the source code to whoever asked for it, regardless of how well they knew each other. (like if someone from a different school asked for it)

      That is only my impression, however. Maybe ESR or someone else with more knowledge about that period could explain how it actually was.

      1. >Well, by reading about that period, it seemed it was more like a common informal convention of just passing the source code to whoever asked for it, regardless of how well they knew each other. (like if someone from a different school asked for it)

        That’s basically correct. We pretty much assumed that anybody asking had a good use for access…because in those days nobody else would have thought of asking.

      2. I had heard that they had ruled that those sorts of conditionals were illegal, and in general I’m glad, but it seems really unreasonable in the case of something like a hard drive, where if you open it, you have borked it.

        Can they set any conditions? i.e.: “You can service your own hard drive, but if you open it outside of a clean room environment, you own all the dust inside yourself afterwards.”

    3. Another contingent reason for the success of Open Source is that the legal liability laws in the United States protect people who share code. GPLv2 is very explicit about the code coming with ABSOLUTELY NO WARRANTY. Without that clause being as enforceable as it was, people would be much less likely to share code. (Why take the risk of appending my code to the tape if someone down the line could lose $BIG by depending on my code, and seek redress in the courts?)

  4. @TheIndividualist: I mean, who is even using anything but VLC VideoLan to watch movies on a PC these days?

    Any Windows user who hasn’t discovered VLC, and simply uses Windows Media Player. In other words, most Windows users.

    I’ve been using VLC for years, because it’s cross platform, and available for platforms besides Windows. I was delighted to discover a port to Android that I happily run on my tablets. Where possible, I prefer to use the same program on everything I use. The fact that hardware gets steadily smaller, faster, and cheaper helps – it means that inherently cross-platform languages like Java and Python can be used to write applications, and the application will run wherever there’s a current language runtime.

    (And that’s increasingly the case for Windows applications built on .NET. MS has open sourced .NET, and MS engineers are major contributors to the Mono project that implements .NET for Linux. I expect to see applications written in C# start appearing in other places beside Windows as a consequence)

    >Dennis

    1. > Any Windows user who hasn’t discovered VLC, and simply uses Windows Media Player. In other words, most Windows users.

      Or the occasional Windows user like me who tried it and found that he really did not like it.

      1. @Deep Lurker: Fair enough. VLC has a quirk or two, and if I only ever used Windows, Media Player might meet my needs. But I do use other environments, and the fact that VLC runs in them gives it the nod.

        Out of curiosity, what don’t you like about VLC?

        >Dennis

        1. I don’t use Media Player either. I use PotPlayer. And while I do use Android, I don’t play video in Android.

          But the big initial thing that gave me a negative impression of VLC that it subsequently failed to overcome was the way it rudely borged video file types unto itself, turning them all into “VLC media files.”

          1. @Deep Lurker: I haven’t tried PotPlayer. But turning supported file types into VLC media files sounds like standard Windows behavior, not specific to VLC. It has the notion that different file types are handled by different applications, and associates the file types with the default application, or with a different one if you prefer another app.

            VLC can handle video files, audio files, and image files. I spent some time getting Win10 configured so that VLC was the default player for video files, but not for audio or images. If Windows sees them as VLC Media Files, fine by me. I have “show file extensions for supported files” turned on, and know if the file is AVI, MPG, MP4, MOV or whatever. VLC is what I use to play them, so being identified as VLC media files isn’t a problem.

            The issue with Win10 is that it really wants you to use the default apps provided with Windows, and you may have to specify more than once that the default app isn’t what you use, thank you.

            >Dennis

            1. No, it wasn’t a case of standard Windows behavior. VLC was rude about taking over the various media file types in ways that went beyond the usual behavior of Windows programs.

              1. @Deep Lurker: Okay. That hasn’t been my experience, but I don’t blame you for being annoyed. It might be a side effect of “cross platform”. The code may be portable to other architectures, but the behavior will be another matter, and UI and installer issues can be deadly. VLC Works For Me, but I understand it doesn’t work for everybody.

                >Dennis

      2. Or the vast majority of reasonably savvy Windows users who’ve installed CCCP and use Media Player Classic.

        VLC is not well loved in the Windows world for a combination of poor installer behaviour (associating a gazillion file types with one app by default is installer behaviour and considered rude at best) and the fact that it lags better projects like CCCP and MPC in terms of codec support.

        1. VLC – or at least the version I tried – didn’t just associate a gazillion file types with itself as the default program to run them. It changed those gazillion types to the “VLC Media File” type, overwriting their previous identities and crushing the distinctions between the different extensions. This is significantly worse behavior than merely associating a file type without asking and without giving you a chance to opt out.

            1. Once bitten, twice shy.

              Also, having VLC change file types to “VLC Media File (foo)”, is still sufficiently rude to put me off the program, even if it isn’t as horribly bad as smashing all file types into a single “VLC Media File” type.

    2. Counterpoint – the company I was working at three to eight years ago did video documentation for public works construction. *Public agencies* (Caltrans, City and County of San Francisco, etc.) were switching from requiring DVDs which could play in a DVD player to media where files could be downloaded and be playable on VLC Player.

      Running further off topic – those agencies aren’t all that smart. At the same time, they were still sometimes requiring photos to be submitted on paper in protective sleeves, marked on the back. At the time, we ran scripts which put the information on the *front* of the photos, then printed them two to a page on glossy paper, and submitted the image files in case someone needed to recreate or enlarge a print.

      I sort of work for them again, and they still do that, though most clients don’t want the paper anymore.

  5. I would call early Unix development Stage One. The programmers who contributed tools to Unix worked at Bell Labs, had access to the Unix source, and (more important) access to Thompson, Kernighan and Ritchie for clarifications on the Unix API when they were scratching the itch that led them to write a particular tool to do something they needed to do.

    Stage two might apply to the period when BSD arose after AT&T provided Unix in source form to educational institutions because back then, they were still Ma Bell, the regulated national telecom monopoly, and not allowed to sell software for money. Folks like Bill Joy might be seen as in an open source environment, because they had access to the source and could modify it and build on top of it. Of course, you had to have an AT&T source license to do that, but the institutions they were at had one.

    Stage three properly arose when licenses were created that explicitly allowed and encouraged sharing of source, with the GPL as the archetypal example. You didn’t have to already have a source license for the code, as the code came with the permitting license.

    Alas, the direction of progress is not always forward. We now have a number of open source licenses, and my irony meter pegs off scale when one open source project cannot use code from another, because the licenses the two projects use are not compatible.

    And it still begs the underlying question of how you make a living writing open source code. The folks I’m aware of who might be seen as doing so mostly work for outfits like Facebook and Google that use open source and pay engineers to hack on what they use. Other open source contributors do it as a spare time activity, and do other things to pay the bills, like writing proprietary code for folks who sell software.

    One thing I’ve been thinking about is the nature of production. Sooner or later, every product becomes a commodity, with commodity pricing. The PC is an example. The original PC defined a standard. Others adopted the standard and made competing products. The ones who are left are the survivors. But PCs are standardized commodities, and if the one you buy has the specs you want, it largely doesn’t matter whose name is on the box, and you buy the least expensive one.

    Similar statements apply to software. The open source programs I tend to use are examples of programs everyone uses, which have become commodities and spawned capable open source versions. The stuff still closed source tends to be things that haven’t spawned open source alternatives that are as good.

    The poster child for that is probably Adobe Photoshop. Yes, open source has the Gimp, and it’s a worthy product, but it’s not Photoshop. Nothing else is. If you make your living in the graphic arts, you use Photoshop. There is no realistic other choice.

    >Dennis

    1. > And it still begs the underlying question of how you make a _living_ writing open source code.

      See The Magic Cauldron (2000), the third paper in ESR’s _The Cathedral and the Bazaar_ series. It is still quite up to date today; the main thing that has happened since then is that crowd-funding has also been used quite successfully to fund quite a bit of open source work. And IME, the reasons GIMP might not be “as good” as Photoshop by some standards have nothing to do with a lack of available funding models for GIMP development. Mostly, there’s quite a bit of annoying lock-in in the “graphic arts” sector that also makes it hard for any _proprietary_ competitor to PS to gain market share anyway – GIMP, Krita, Blender etc. are actually doing rather well by comparison.

      1. I’ve been actively looking to replace Photoshop. The GIMP still isn’t there, mainly because the workflow doesn’t port over seamlessly. (In fairness, I haven’t tried GIMP 2.10.) It got to be so frustrating the last time I tried I removed it from my system with extreme prejudice, before I was led to remove my system from my house with extreme prejudice.

        If you’re looking to replace Photoshop, you have to act like Photoshop.

        1. @Jay Maynard: I don’t see any way the workflow could be ported over seamlessly. I very much doubt the folks who created The GIMP were intending to compete with Photoshop. They simply wanted a powerful graphics editor that ran on *nix.

          I have an older version of Photoshop here. I use it once in a blue moon. The last time, it was to play with an enormous TIFF file that was raw output from the Hubble space telescope. PS was the only graphics editor on the system that would open it. Everything else (including The GIMP for Windows, IIRC) died horribly with out of memory errors.

          My modest graphics editing needs are handled by other things and don’t require PS, but my needs are modest. If I were making a living as an art director or an illustrator, I’d have to use Photoshop. Photoshop now isn’t just a program, it’s a complete ecosystem with more plugins than I can keep track of, and people I will be dealing with expect to get Photoshop files. Even if I could do the work in The GIMP, no one else could use what I produced.

          There are various other applications like that. Discussion here a while back talked about Libre Office. Yes, it can read and write MS Office files. But if you are doing serious work in Excel, you use Excel, because Libre Office Calc is likely to choke on the programming serious Excel users do for sophisticated mathematical modelling. Calc might do it, but you’ll grow old and grey waiting.

          There are probably other examples I’m not thinking of that open source offerings cannot compete with because the proprietary solutions are too deeply embedded. You might be able to do the same things with open source products, but you won’t get the existing user base to throw out the baby with the bathwater to make the switch.

          >Dennis

          1. For me, it’s not that…it’s that I’ve got a pile of stuff I’ve done for Second Life that uses Photoshop file features such as clipping masks and blending options and the like, and workflows for those files that use Photoshop features. I’m going to find myself having to relearn how to deal with that kind of thing, since I’m not going to pay Adobe $50 a month for a program I use infrequently (I’m still running CS6, but that’s going to quit working at some point because while the program is 64-bit, the license manager is 32!).

            Right now, I’m leaning toward Pixelmator Pro on OS X. I’d like to use The GIMP, but not at the cost of my sanity.

            1. >because while the program is 64-bit, the license manager is 32!

              Reminds me of a lot of games I played as a kid, for which the games themselves were Win32 applications that very often still run, but the installers were old Win16 installer packages that no longer run on 64-bit Windows.

      2. Yes, you can use crowdfunding, and people do. But for it to work, you have to have something you are trying to fund that people will find worth supporting, and you must successfully reach the audience that might contribute. Good luck. I estimate your chances are only marginally better than having an international bestseller book that you self-published.

        Eric is getting some crowdfunding support via Patreon, but that works because of who he is and what he has already done. Effectively, he has created a market over the years that knows who he is and wants to support what he does. If you aren’t ESR or someone equivalent, you have an uphill battle.

        (And on those lines, most of the writers I can think of that have attained success self-published had already built an audience through traditional publishing, so their effort was essentially reaching their existing audience to let them know they were publishing themselves now, and how to get their work. If you haven’t already built an audience, success in self publishing requires $DEITY to work a miracle to order and grant you a giant economy sized amount of luck. Don’t give up your day job while you wait for that to happen.)

        >Dennis

  6. > It seems highly unlikely that there will ever be another closed-source Unix implementation in the future; the coupling is pretty tight, now. But remember that it was not always thus.

    Well there’s OS X. Also Google’s really pushing the open/closed boundary with android.

    1. >Well there’s OS X. Also Google’s really pushing the open/closed boundary with android.

      I know. That’s why I used future tense.

      1. Why do you expect the trends that motivated Apple and Google to behave this way to reverse in the future?

        1. >Why do you expect the trends that motivated Apple and Google to behave this way to reverse in the future?

          I don’t; the desire to build that kind of moat won’t go away. Rather, I think we’ve run out of problems that a new proprietary Unix can monetize.

    2. Darwin is open source, no? As well as the BSD user land I recall it using.

      The OSX GUI user land and above-kernel APIs aren’t open source, but … they’re not the Unix part, really.

      (You can run Darwin with X11, after all, without using any of that.

      You wouldn’t bother to, of course, but you could, and it’d be just as unix as Linux – if you wanted to be ridiculous, you could say it was *more* Unix because OS X is TOG branded Unix(tm)…)

  7. Until the 1970s, even as late as 1980, the copyright status of source code was ambiguous. I think that only after source code copyrights were well established could level 3 be sensibly argued about.

    In 1974, the Commission on New Technological Uses of Copyrighted Works (CONTU) was established. CONTU decided that “computer programs, to the extent that they embody an author’s original creation, are proper subject matter of copyright.”[7][6] In 1980, the United States Congress added the definition of “computer program” to 17 U.S.C. § 101 and amended 17 U.S.C. § 117 to allow the owner of the program to make another copy or adaptation for use on a computer.[8]

    https://en.m.wikipedia.org/wiki/Software_copyright

    1. >I think that only after source code copyrights were well established, could level 3 be sensibly argued about.

      Not entailed. Pre-1974 source code could be and was protected as a proprietary trade secret. What copyright added was the ability to protect derivative works.

      1. Definitely not entailed, but without software copyrights it would have unfolded quite differently, as there would be no copyleft and thus the FSF as we know it today wouldn’t exist. Possibly a rather better outcome in the long run.

  8. It seems highly unlikely that there will ever be another closed-source Unix implementation in the future

    *cough*macOS*cough*

      1. Future versions of Android or Fuchsia (!) or Amazon Fire or MacOS? Where do you draw the line?

        1. >Future versions of Android or Fuchsia (!) or Amazon Fire or MacOS? Where do you draw the line?

          What I don’t think we’ll see again, ever, is a launch of a new proprietary Unix brand. Part of the reason is that Google has shown that when you want to play lock-in, the smart way to do it is tie users to your network services, not your OS. So, um, why bother launching your own OS?

          1. @esr: I suspect it’s unlikely we’ll see a major new proprietary OS, period. What would be the point?

            Yes, Windows is proprietary, but more and more of it is being open sourced, like .NET. Windows is no longer a critical part of MS’s financials – growth is in cloud services. I don’t see MS making all of Windows open source, but the reasons for not doing it wouldn’t be protecting a major revenue stream.

            OS/X is proprietary in the sense that I don’t believe you can get source for the modified BSD kernel it uses, but it comes with the full set of GNU/Linux utilities, and a fair bit of open source code intended to run on OS/X exists. I know a fair number of folks who use OS/X because it provides a good Unix development environment, and Just Works. The fact that it’s not open source is not a concern. The fact that it Just Works is.

            Linux is open source, and that carries through to things like Android that use a Linux kernel. You can download Android source and build an Android image, though most mere mortals probably shouldn’t try. It’s enormous and highly complex, and why would you need to unless you were a hardware vendor needing a version of Android customized for your device?

            (And I note with amusement you can get source for the customized Android versions used by things like the Amazon Kindle, because they are based on Linux and Linux is open source.)

            I am fascinated by Google’s Fuchsia effort. It’s nowhere near release, but it looks like an attempt to build a new mobile device OS based on a microkernel à la Carnegie Mellon’s Mach, instead of a monolithic kernel like Linux. And I don’t see a reason it can’t implement existing Android system calls and be a drop-in replacement, because Android apps will see what they expect and run, and be blissfully unaware Linux isn’t under the hood.

            I think OSes are now at the commodity level I mentioned upthread. And with increasing use of cross-platform languages like Java and Python, we are at the point where it mostly doesn’t matter what the OS is. The question is whether your programs will run under it, and can you use the same program on any device. More and more, you can.

            >Dennis

              1. I think open source, as we knew it, is kind of dying. Perhaps we are in a stage IV that was inevitable. Are android, IOS, OSX a step forward, or backward? Is it a cycle? Me, formerly a major linux advocate, I run OSX mostly nowadays, because it has the low latency that the DC-driven development of linux lacks. I miss the days when a hobbyist OS was what I used for everything.

                I never imagined I’d end up having to call my personal AI “Alexa” or “OK, google”, I thought I’d get to name my own, “Athena”.

                Earlier today I was trying to sort out how I’ve been presenting my politics (see here if you like: https://plus.google.com/u/0/107942175615993706558/posts/G2Mk4Dk4p1D ) and I’d also written in a private note the following about licensing:

                The most corporate-accepted license is now apache. That one basically lets corporations rip off anything you do without any fear of reprisal from the developer, and also disavows the developer of responsibility, and takes care of nagging issues surrounding patents on all sides.

                2 or 3 clause BSD has some minor legal issues that are resolved by apache. Still good IMHO, but I’d rather apache than BSD if I have to choose between the two.

                GPLv2 (and its companion, LGPL) is good for making sure you have some control over who messes with your stuff and redistributes it. GPL builds communities for people that want credit and egoboo and not see their stuff disappear into a corporate maw entirely. GPL makes people co-operate better out of collective guilt about the benefits of sharing.

                LGPL is nice for many things if you care about how your library evolves but not how it is used, and almost always is a good choice.

                GPLv3, among other things… exists to terrify lawyers into accepting the GPLv2. :) It was created to stop “Tivoization” and make web software more shared.

                I sometimes argue that software as a service and the web itself exists as it does today because everybody built tiny proprietary tweaks on GPL’d code, and if they distributed that code they’d go out of business. So we got all the MITM services like uber, airbnb, google, etc, instead of apps we could use on our own machines, running client/server over the internet via a cpu-intensive graphic terminal application, because of all the great GPLv2’d code they stripmined.

                One example: years ago I wrote “gnugol”, which attempted to turn internet search into a command line utility like whois. It was fantastically useful for me… and then google deleted the API for it. (It has APIs for bing and a few others now.) I GPLv3’d it (because I’d like search to be open and not ad driven and trustworthy and not orwellian) and never got any contributors. https://github.com/dtaht/Gnugol – *I* still love it. If I ever get more pissed at the great firewall or google or governments than I already am, I’d go back to working on a distributed search engine for it… I’d leverage QUIC too… which would be ironic, considering its source.

                We specifically used GPLv3 for flent and irtt because A) as benchmarking tools we didn’t want anyone to game their results, and we wanted a standardized, shared output format (outputs are not covered by the license), and B) pete and toke are far more religious about “free software” than I, and for all I know are “long term right” about the disaster that code we don’t have in our hands will incur as skynet arises.

                1. >I think open source, as we knew it, is kind of dying.

                  Huh? Not if the trend in number of projects on major forges is an indication. The community is huge and it’s growing – looks like even the second derivative is positive.

                  What we’re seeing isn’t death, it’s successful re-incursions in a few areas by oligopolists, mainly based on tied services with capital-intensive back ends. Highly visible areas because that’s where the consumer bucks are. But I think you’re taking our actual victories so much for granted that it’s become difficult for you to see the scale of them.

                  Relax, man, it’s going to be OK. What they can’t do anymore is keep us from exiting the monopolies. The barriers to doing that were insuperable once upon a time; by comparison they’re only mildly annoying today. Facebook and Twitter can go piss up a rope, and I’ve never moved to Gmail from hosting my own SMTP server.

                  1. There’s a lot of jargon here:

                    “What we’re seeing isn’t death, it’s successful re-incursions in a few areas by oligopolists, mainly based on tied services with capital-intensive back ends”

                    I don’t understand any of this.

                    1. >I don’t understand any of this.

                      oligopolists — oligopoly is literally “rule by a few”, used by economists to describe a market situation that is not quite a monopoly but where a handful of large firms control most of the action.

                      “tied services” – things like Gmail and Twitter and Facebook where the business proposition isn’t the software but control of a network service that has a positive feedback between perceived value and the size of the userbase.

                      “capital-intensive back ends” – the server farms to run one of these are billion-dollar babies that have to be located near powerplants because they eat so much electricity. This creates a large barrier to competing with them – you have to start small and scale up, but with few users it’s hard to generate enough perceived value to compete; each individual gets more win by defecting to an incumbent.

          2. Well, what we’re really seeing is a trend towards open kernels and greater and greater lockdown in firmware and middleware (the latter of which is often tightly integrated with the developer’s network services).

            Microsoft is possibly an exception, they seem to be moving towards open middleware while keeping the kernel closed (notably including that the closed kernel now implements the Linux ABI well enough that a good chunk of the standard GNU-and-friends middleware stack runs unmodified on Windows).

            So what we’re tending to see is “open” Unices with Tivoized, unmodifiable system images that are often not self-hosting. I’ve tended to move away from the FSF’s position towards yours over the years, but I’m beginning to wonder again if the FSF is right.

  9. FYI Eric, I went to read the Wisdom of Master Foo, and strayed into the footnotes. The link to “The Tales of Zen Master Greg” doesn’t work.

    1. >One wonders about the future of Open Source when we see license-changing behavior like that at the Lerna project.

      Their amended license is no longer OSD-compliant, as it violates the nondiscrimination clause.
