Is closed source worth it for performance?

The following question appeared in my mailbox today:

If a certain program (that you need) were proprietary, and its open-source counterpart were (currently) 40% slower, which would you use: the open-source one or the proprietary one?

The answer is: it depends. I’m going to answer this one in public because it’s a useful exercise in thinking about larger tradeoffs.

The first subsidiary question I’d have to ask myself is, how much does that 40% speed increase matter? If the answer is “not much” (say, because even the ‘slower’ program runs pretty fast, or because even though it’s noticeably slower I won’t use it often) then I’ll use the program I can read the source code for. Because what if I trip over a bug, or need to extend what the program does to get my task finished?

The more general point here is that by using the closed-source program I’m giving up some significant options, including (a) asking the open-source program’s maintainers for help, and (b) fixing or enhancing it myself. There is a tradeoff between the value of those options and the value of the additional performance which I have to evaluate on the facts in each case. (And yes, valuing these open-source options highly is based on the assumption that help is effectively unavailable from the maintainers of the closed-source program. Sadly, this is usually the case.)

From now on, then, we’re only talking about the case where that 40% is really important. Let’s consider a couple of easy subcases.

It might be the case that what these programs do is very simple and well-defined, so it’s easy to verify that they’re functionally equivalent except for speed. If that’s the case, my risk of getting locked in by the closed-source program is low – so I’ll go right ahead and use the closed-source one, knowing I can fall back to the open-source program at any time.

It might also be the case that buying faster hardware will make the open-source program fast enough for my purposes. Hardware is cheap and the benefits from improving it extend across a broad range of tasks, so I’d probably rather upgrade my hardware than accept the risks of using the closed-source program.

Another easy case is when the closed-source program jails my data, so I cannot examine or modify it with other tools. It is hard for me to imagine any scenario in which I would swallow that for a mere 40% performance boost. Forget it; no sale.

Could I make the open-source program go faster? I would at least spend an hour or two looking at the possibility.

Beyond these, the decision process starts to get more difficult. I don’t think I can utter a completely general rule for when I will use closed-source software, but I hope I have illuminated some of the tradeoffs.

Comments

  1. “the assumption that help is effectively unavailable from the maintainers of the closed-source program”

    Often true of open source as well. Of course, then you have the option of fixing it yourself. But you have that skill; I don’t, unless it’s very easy to fix. So in borderline cases I might go closed source (or open source with a maintenance contract) as opposed to a publicly available program where the maintainer might lose interest at any time. But yes, tradeoffs – it’s worth noting that different people might have different priorities and make different decisions on the same facts.

    There are some halfway houses in this decision like third party escrow of the source code so you can get at it if the company is non-responsive.

    On the other side, I’ve been involved with a couple of cases where we couldn’t get faults fixed in products we had bought, not because the company didn’t want to help, but because the one person who did all the firmware embedded programming had disappeared and taken his laptop (the one copy of the current source, patched on-site) with him. Would have been nice if we’d demanded access to the source upfront.

  2. I’m wondering if the author of that question had GPU drivers in mind. If you’ve got an AMD or NVidia card, the available open-source drivers yield much poorer performance, to the point of being highly unsuitable for intensive gaming (Intel GPUs are unsuitable for this purpose regardless of driver).

  3. I always prefer a pragmatic look, but the advantages open source has definitely fall into being part of the decision, so I suppose I might have a bias towards it. I have an NVIDIA graphics card despite needing a binary blob driver for it; though from what I’ve heard on the AMD front, my next graphics card might not be from NVIDIA (assuming they stay on course with being proprietary and unhelpful to efforts like nouveau). I even have Windows 7 installed as a dual boot, which I barely use, to play a handful of games… it’s a situation I’m not particularly happy with, but I don’t mind it enough to change it, usually.

    “On the other side, I’ve been involved with a couple of cases where we couldn’t get faults fixed in products we had bought, not because the company didn’t want to help, but because the one person who did all the firmware embedded programming had disappeared and taken his laptop (the one copy of the current source, patched on-site) with him. Would have been nice if we’d demanded access to the source upfront.”

    This is depressingly common with proprietary software. A fairly infamous case is that Microsoft nearly lost their only (!) copy of MS-DOS (version 5.0, I think it was); an OEM had licensed the source code from them for their own custom modifications, and Microsoft sent off their own copy of the source code floppies, and it took a long time to get them back. I’m sure Microsoft doesn’t make those mistakes so easily anymore, but in smaller development houses there’s plenty of cases of one guy having the only copy of the source, not backed up.

    Open source of course almost automatically gets backed up once it’s on the Internet. DVCSes improve the situation greatly. I can imagine there’s hundreds of thousands of backups of the entire Linux git repository ;) I know some extreme cases in which Debian has ended up being the best backup of software, where original web sites have long been defunct.

    Well, maybe not just Debian, but its repositories are massive and unmatched by any other distribution (disregarding derivatives).

  4. “This is depressingly common with proprietary software.”

    It’s even worse in the embedded/industrial control field. A lot of this stuff is binary blobs to start with, and the hardware architecture is proprietary too, so forget about even decompiling it. And there is almost no culture of change control or backups. A lot of changes happen during commissioning of plant, and often the only current copy is the one on the live system. I remember once implementing a crude version of synchronisation for a 4 member team working on 6 sites using Windows Briefcases, which sounds primitive but was a hell of a lot better than what we had, which was error-prone manual copy of the files. We’d all update our laptops in the morning, go do our stuff, then come back and resynchronise in the evening. So at least if you were ill, someone could carry on.

    Plus most of the embedded controllers support a ‘no-upload’ or ‘no-clone’ option which is usually set by default (to protect OEM’s IP). Which was the problem I had most recently – we didn’t want to change the firmware, just provide some more spares. But although the hardware was readily available, we couldn’t even copy the code from the working system.

  5. I once led a project (only once) where we had a brilliant coder and build guru who was irreplaceable. I told him that if this had not been a “voluntary” FLOSS project, I would have sacked him.

    He really never understood. Obviously, after the end of the project no one else ever succeeded in building the binary. In the end I rewrote his part of the system from scratch.

    Beware of the build Guru.

  6. dtsund: Not just intensive gaming, but most gaming. I know that, for Second Life, a 40% drop in framerate is a significant usability hit. I’ve been told that only about 1-2% of SL users run Linux, but I’d be astounded if more than a tiny minority of those ran the open source drivers. (And Intel graphics is purely sucktasic on SL.)

    Of course, graphics drivers are about as far from data jailing as you can get. From where I sit, the only reason to prefer an open source graphics driver is political.

  7. Spreadsheets have a performance delta far in excess of 40% between Excel and Libre Office.

    I know this because I maintain an Excel spreadsheet and help a fellow in Finland maintain a Libre Office version of the same spreadsheet. Neither spreadsheet uses macros. It’s nothing but formulas.

    The Libre Office version performs so poorly – in spite of several feature deletions to remove performance bottlenecks – that I have customers who buy netbooks and “discounted” versions of Excel 2007 to get something usable.

    My user population is:

    People who use the Excel version and think that it’s good the Libre Office version exists.
    People who’ve used the Libre Office version and switched to Excel.
    People who use the Libre Office version and want to know why making any change locks their machine up for a minute, and “is this normal?”

  8. My spreadsheet needs are small and simple, so using Libre Office (or Google Apps) is not a noticeable performance problem for me. My graphics needs are just as simple, and the GIMP has far more capability than I need. Bluefish is my favorite text editor, open source or proprietary.

    And so on — except for video editing. That’s the one case where I use proprietary software. Sadly, there is not yet a single open source video editor that can match top-end proprietary video software in features or ease of use. Note that I say “top-end proprietary video software,” not “consumer level.” For simple home productions, you can use open source tools, no problem. When you’re billing a commercial client? No. Unless they specify open source tools (and none do) it’s way better to use proprietary software at this point in time.

  9. When dealing with pre-press, there’s Adobe Illustrator/InDesign/Photoshop and, well, having to hope the print shop doesn’t mess up your file if you send it in some other format. In particular, CMYK support for The GIMP was a complete fustercluck for more than a decade. PANTONE support still is.

    I think the fundamental disconnect in mindset is that open source software advocates never consider “How do I hand this file to someone who doesn’t use this software to get something done?” while also remembering that the person you’re handing it to is not a programmer, has no desire to be a programmer, and will regard the file coming in in some format that they’re not familiar with as “Oh god, not THIS customer again…”

    Usability testing and UI design aren’t places where you can show off cleverness. Thus, the ego economy of Open Source programming doesn’t touch on them, save for very very limited (and rare) participants. Indeed, when Open Source attempts to be clever with UI design, there’s a backlash (Ubuntu Unity interface, anyone?)

    This results in a “Designed by programmers, for other programmers” comfort zone.

  10. PANTONE is proprietary… and its licensing fees are substantial, but not crippling. At various times, there’ve been attempts to get PANTONE into GIMP.

    However, there’s nothing remotely close to PANTONE’s ubiquity in print fields for color-matching. (And CMYK in GIMP still can’t do rich blacks very well.)

    GIMP is wonderful if you’re doing web work. If you have to put something on the press, it falls down, and falls down hard.

  11. And let’s not forget who does the procurement of the software. While a competent software engineer might be able to, if not fix it, then at least find someone who can, even with no original author present, the typical IT department is often a bunch of guys who just happened to be there in need of a job. They are sometimes aware they are buying a program, but almost never realize they are locking their company into proprietary software (not to mention the “advanced thinking” of proprietary data protocols). They definitely *never* think about open-source alternatives.
    Funny story: I was once threatened with firing for suggesting an open source alternative (Git) to the IT-chosen piece of junk (Perforce). Should have said the “piece of junk” words out loud: it wouldn’t have done the matter any good, but at least I would have had my fun. Didn’t, sadly.

  12. Despite the fact that much of the fear-mongering about the GPL is unmitigated FUD, it remains true that the open source licenses matter, and it remains true that many, if not most, programmers who prefer open source work for enterprises that are, at best, indifferent to open source, and at worst, worried about inadvertently having to disclose parts of their proprietary systems.

    As an end-user, I am open-source license agnostic, happily using any open source program that appears to be healthy. As a programmer, I follow Sherlock Holmes’s analogy of the brain to an attic, which you should stock with useful items. To me, a library that is permissively licensed will more easily find its way into my attic than one with the GPL attached, for the simple reason that, if I commit time and energy to learning and using the library, I want to be able to reuse that knowledge, if possible, in any conceivable future scenario without having to worry about it.

  13. “On the other side, I’ve been involved with a couple of cases where we couldn’t get faults fixed in products we had bought, not because the company didn’t want to help, but because the one person who did all the firmware embedded programming had disappeared and taken his laptop (the one copy of the current source, patched on-site) with him. Would have been nice if we’d demanded access to the source upfront.”

    It’s incredible that employers put up with behavior like this. One copy of source, patched on-site with no repository? This is how we lost CIA.

  14. “I’m wondering if the author of that question had GPU drivers in mind. If you’ve got an AMD or NVidia card, the available open-source drivers yield much poorer performance, to the point of being highly unsuitable for intensive gaming.”

    Yes, this annoys me also, but at least this is a case where there is no data to be jailed. You can fall back to an Open Source driver at any time.

    I do wonder when we will finally have open source drivers for a high-end, readily available video card that fully uses the optimized capabilities of the card. A manufacturer who comes out with one would have a 100% chance of getting my next video card purchase.

  15. “Spreadsheets have a performance delta far in excess of 40% between Excel and Libre Office.”

    But many spreadsheet users in the business world don’t have large enough spreadsheets for performance issues to even be noticed. For the subset who do, yes, they may need a product with good performance.

  16. It’s pretty much a hard and fast rule that if you’re doing serious non-software-development related work, you always buy the proprietary product because compared to it, the open-source alternative is amateur hour.

    And even in software development — has anyone ever seen an open source tool stack that lets programmers be as productive for most software development tasks as Visual Studio? Anybody? Anybody? Bueller? Bueller?

  17. “Not just intensive gaming, but most gaming. I know that, for Second Life, a 40% drop in framerate is a significant usability hit.”

    Second Life would fall in my “intensive gaming” category. For other stuff, I was thinking more along the lines of NetHack (which is intensive but not in the relevant way).

  18. @Jeff: I will trade Visual Studio for a Linux system from a respectable distribution any time. The VS-thing is just an anchor for serious development: inflexible, feature-poor, resource-hungry, unstable and slow (well, that last might be by virtue of its host operating system, but anyway).

  19. > I think the fundamental disconnect in mindset is that open source software advocates never consider “How do I hand this file to someone who doesn’t use this software to get something done?” while also remembering that the person you’re handing it to is not a programmer, has no desire to be a programmer, and will regard the file coming in in some format that they’re not familiar with as “Oh god, not THIS customer again…”

    It’s worth getting a couple of things straight here.

    First of all, that “fundamental disconnect” was and is part of Adobe’s mindset also; when they set about designing the PSD format they didn’t consider “how do I hand this file to someone who doesn’t use our software to get something done?”. Just the opposite, in fact; it’s a pile of crock upon crock done solely for expediency and perhaps code-efficiency purposes back when Macs had megabytes of memory at best and ran in the tens of megahertz. They still don’t consider this; if someone brought it up the response would be: “A print shop? That doesn’t use Adobe? *snicker* *snort*”

    So this is not a mental disconnect that is unique to GIMP, or the open source community. In fact, I’d say proprietary vendors are even more disconnected; as for decades it has been perfectly okay to barf out a pointer-unswizzled memory dump of the document’s data structure in memory and call that your native file format. The mindset is “If our customers are doing serious work they’d be using our software anyway, so who gives a shit about the file format?”

    Secondly, GIMP — like any sane program — has export filters for a variety of standard formats. If the format is an actual standard rather than a proprietary format everyone just happens to be vendor-locked into, the filter might even work. Most printshops accept TIFF, which GIMP can export.

  20. One downside to proprietary software not mentioned yet, at least for corporate users: licensing hassle. Installing an OSS product usually takes a few seconds in a terminal with apt-get (or yum, if that’s your thing). Installing an equivalent closed product often means internal company politics, budgeting, approvals, lots of time spent trying to convince muggles to let you do the Right Thing instead of the Most Generous Vendor Thing, and everything else contingent on actually spending money.

    Then to set up the software you have to first spend time tracking down licensing information, getting install media from the vendor, making sure you’re not doing anything you’re not licensed to do, figuring out why the program won’t let you configure option X “because you’re not licensed for that,” and generally dicking with unnecessary bureaucratic roadblocks that have to be dealt with but aren’t actually related to anything you want done.

    In the end you realize you’ve spent six months trying to set up a solution for which the actual technical hurdles took six days to figure out.

    Even if the closed option is flat out *better* (and, honestly, sometimes it is), the OSS route can be much less of a strain on the admin’s sanity. Yeah, I’ve done this recently. It sucks. :-(

  21. “It’s incredible that employers put up with behavior like this. One copy of source, patched on-site with no repository?”

    Hehe, it is awful. But you have to think about the situation. For industrial control, there’s only so much you can test in simulation. A lot of your final tweaks come from observing the behaviour of a big, dirty, smelly machine and modifying the code on the fly. So it’s easy for those changes to get lost. And bear in mind that sometimes you’re standing out in the middle of nowhere, so forget about a network to plug into.

    Admittedly, it’s getting better. A bit of rigour has been creeping into the control systems engineering industry over the last ten years or so. A few places I work at have instituted change control boards so you have to get approval just to get access, then someone is tasked with supervising your changes on-site and making sure backups are done prior, the new version is recorded and backed up, and so on. It can be a pain, but overall I’m glad to see it.

  22. @Ken:
    >I think the fundamental disconnect in mindset is that open source software advocates never consider “How do I hand this file to someone who doesn’t use this software to get something done?” while also remembering that the person you’re handing it to is not a programmer, has no desire to be a programmer, and will regard the file coming in in some format that they’re not familiar with as “Oh god, not THIS customer again…”

    I wouldn’t say it’s that open source advocates never consider it so much as they don’t know how else to fight vendor lock-in. If you give in and buy proprietary software in a field that’s locked-in to one vendor, you’re just another guaranteed customer for the vendor. If you hand a file to (say) a print shop in the industry-standard vendor-locked format, you’re not doing anything to show that there’s demand for anything else.

    >Usability testing and UI design aren’t places where you can show off cleverness. Thus, the ego economy of Open Source programming doesn’t touch on them, save for very very limited (and rare) participants. Indeed, when Open Source attempts to be clever with UI design, there’s a backlash (Ubuntu Unity interface, anyone?)

    I don’t think it’s so much that Open Source can’t be clever with UI design so much as that the current user base for Open Source software tends to have off-average usage habits, so there’s a tension between cleverly designing things for your user base and cleverly designing things for new users. GNOME 2 is an example of the former, Unity is an example of the latter. (And then you throw the whole fiasco with GNOME 3, which happened at about the same time as the Unity transition, into the mix. A fair amount of the grief people give Unity actually deals with stuff that isn’t Unity specific but comes from the fact that it runs on top of GNOME 3).

    I’ll note that Unity probably isn’t so bad for a user that uses a laptop with a trackpad instead of a real mouse, and doesn’t want to have multiple instances of a single program open a lot. The problem for me is that I can’t stand trackpads (my mouse always goes wherever my laptop does) and do fairly often have multiple instances of a program open (calculators, text editors, terminals, file browsers, etc).

    I will say, though, that although I perceive a general decline in GUI quality on both Windows and Linux (though it’s probably as much a shift towards average users with trackpads as an actual decline), both Win7 and Unity share a great feature in the integration of search features into their menus.

  23. “has anyone ever seen an open source tool stack that lets programmers be as productive for most software development tasks as Visual Studio? Anybody? Anybody? Bueller? Bueller?”

    What features do you actually use in Visual Studio? For my usage, Eclipse loses slightly on solutions vs. workbench (solutions seem marginally more explicitly configurable), loses on C# but gains on every other language (which, considering I’m an avid polyglot programmer, is important). And the rest is pretty much same same.

  24. “has anyone ever seen an open source tool stack that lets programmers be as productive for most software development tasks as Visual Studio?”

    You’ve got to be kidding. Visual Studio spits out a bunch of automatically-generated code that hardly any Windows programmers understand. It’s a crutch that lets them hide from understanding code that they supposedly own. And trying to port it to Unix is a nightmare (I’m speaking from experience here).

    If you told me that I had to develop software that would run on both Windows and Unix, I would insist on writing the Unix version first (being careful to keep the GUI code separate from the functional code), and then port it to Windows afterward.

  25. Jay Maynard
    > From where I sit, the only reason to prefer an open source graphics driver is political.

    One sound reason would be that administration of a larger number of heterogeneous Linux desktops is a lot easier with free drivers that come with the distro. That is, if the users of said desktops do not depend on the performance of the proprietary drivers for their work. The proprietary drivers can be broken by kernel and X server updates, even on distros such as Ubuntu, which handle them relatively well. And when the drivers do break, it’s usually difficult to fix the installation remotely, since you need to reboot and verify that the updated drivers are actually working in a hardware-accelerated state. If you stick with the open source drivers, it’s usually safe to let the updates be applied.

    Ken Burnside:
    > Usability testing and UI design aren’t places where you can show off cleverness.

    I don’t think this is true at all. Great UIs take a lot of cleverness and work, like anything else. The recent troubles with Gnome Shell and Unity seem to stem from – as was noted in this thread – the fact that they’re designed by developers for users who are not developers, and from the fact that they’re trying to cover both conventional desktops and touchscreen devices with the same UI. The two are arguably different enough that using the same UI on both may be an inherently bad idea. It’s not as if this mistake is unique to open source. Microsoft seems to have run into it rather hard with Windows 8.

  26. Jon Brase: “If you hand a file to (say) a print shop in the industry-standard vendor-locked format, you’re not doing anything to show that there’s demand for anything else.”

    This is a political argument, and will not resonate with anyone who uses a computer to get real world work done that isn’t computing. Like, say, Ken Burnside.

  27. Mikko: Anyone who’s doing things that need OpenGL drivers is almost certainly not doing them in an environment where remote IT support of the type you describe actually is done. They’re gamers or the like. The folks doing this in the real world are using Macs.

    dtsund: Without getting into the whole off-topic argument about whether or not SL is a game, I do have to disagree that it constitutes hard-core gaming from the technical viewpoint. SL pushes a system hard, but there are lots of games that push it harder.

  28. I was thinking of Excel. But that shows that it is rarely, if ever, only one parameter.

    I use gnu-calc. I have a visceral hatred of all the complexity since Multiplan.

    Excel might be faster in execution, but I can never find things in the menus, and my attempts to organize data usually destroy it.

    I had a spreadsheet program on the Palm – 8 MHz 68000 architecture – and that was fast! Excel is tolerable on my multicore desktop, but it has 3 orders of magnitude more processing power.

    If I need speed, I can export-import.

  29. > Anyone who’s doing things that need OpenGL drivers are almost certainly not doing them in an environment where remote IT support of the type you describe actually is done.

    I used to do support in a place like that, namely a research group doing molecular modelling. They used SGI hardware before moving to x86 and Linux. An added complication was stereoscopic viewing with LCD shutter glasses, which most of the users were accustomed to. Every now and again an NVidia driver update would break stereo support, even though accelerated OpenGL was fine otherwise. I learned to check every update by trying the shutter glasses and making sure they’re actually in sync and not just looking like it.

  30. > The folks doing this in the real world are using Macs.

    In the sciences, it depends on the field. Macs have become common, but there are various pieces of (proprietary) software that are not available for OS X.

  31. “This is a political argument, and will not resonate with anyone who uses a computer to get real world work done that isn’t computing. Like, say, Ken Burnside.”

    It is a political argument but there’s an interesting side to this argument.

    Where I work we regularly get drops of ad-hoc files that are so poor that they sometimes don’t even conform to the file format that was agreed upon at the beginning of the contract. When we point out that they’d make our lives much easier if they stuck to the file format (that they gave us, mind you), the overwhelmingly common response is “do you want the work or not?”.

    It’s that kind of political argument that ultimately renders into an economic argument. If a printer wants to charge me more for not receiving in the format they prefer, no problem. I can then balance the extra cost of effort against the cost of changing how I drop, as well as the cost of getting a different printer to do it.

  32. Cathy wrote:

    “But many spreadsheet users in the business world don’t have large enough spreadsheets for performance issues to even be noticed. For the subset who do, yes, they may need a product with good performance.”

    Actually, most people who use spreadsheets in the business world use them as a strange hybrid declarative programming environment.

    I teach Excel stuff (and do Excel consulting). Once you’re using Excel for medium-sized data sets (a few thousand rows of data), or using Booleans (IF/AND/OR), or using named ranges to refer to a data table, you are, in effect, programming and manipulating lists. You can’t really do a stack operation with Excel formulas… but I have actually made a functioning Dijkstra algorithm that pulls coordinates off of a 3-D hex map and works in NOTHING but Excel formulas.

    The vast majority of my consults turn into the following process:

    1) What data are you actually using?
    2) How do I organize this data?
    3) What operations need to be performed once the data is organized?
    4) User-proofing the end result.

    And even on something like THIS, Libre Office Calc is sub-par. (Using a named range for a 14 column by 3000 row table, rather than just using the cell references, should not cause all operations to slow down by a factor of 3.)

  33. Ken Burnside on Saturday, February 2 2013 at 11:36 am said:
    Spreadsheets have a performance delta far in excess of 40% between Excel and Libre Office…

    I wonder how gnumeric compares, as all they are concerned about is the spreadsheet.

  34. “I’m wondering if the author of that question had GPU drivers in mind. If you’ve got an AMD or NVidia card, the available open-source drivers yield much poorer performance, to the point of being highly unsuitable for intensive gaming (Intel GPUs are unsuitable for this purpose regardless of driver).”

    No, he didn’t. I’m the author, and I was asking the question to fully understand Eric’s FOSS ideology. The fact that he mentioned it publicly on his site is cool.

  35. Intel compilers produce code for certain mathematical functions that is sufficiently faster than GCC that in a place I used to work there was *no* other option. We also had licenses for older versions of the Portland compiler for the same reason. C, C++ and Fortran.

    The percentage difference in the speed doesn’t matter. Either your code runs fast enough, or it does not.

    If you need to do something in real time (not as a “real time OS”, but as in “as it’s really happening”) and your code from compiler X does that, and compiler Y does not, then you pay for compiler X, or you do the cost comparison between buying faster hardware and paying for a compiler license.

    I don’t know why GCC sucks compared to Intel, I don’t care, it’s not my job to sell one or the other, it’s my job to provide my developers with the infrastructure and tools they need. If that means I have to run an ass sucking license server like FlexLM so they can do what they do, then I’ll run it.

    If you’re running a power grid or major constituent parts, if you’re building rockets, if you’re turning terrorists into smoking holes in the ground, or rescuing lost or injured people, sometimes the job is a lot more important than Free v.s. Closed source.

    I’d rather the folks from GCC clean their shit up[1]. It would simplify the lives of some folks who sure do need it. But everyone has their ego and a need to stroke it.

    [1] for the record I have no idea what the problem really is. I’m a System Admin, not a PhD in EE with exotic and narrow specialties that has to write code. GCC is fine for what I need, but *they* had some benchmarks (and no, I can’t show them to you) that demonstrated that the same code built on GCC was dog slow comparatively.

    1. >I’d rather the folks from GCC clean their shit up[1].

      I’d say this is unlikely. GCC’s architecture is ancient and crufty – in some ways deliberately obfuscated, as RMS doesn’t want proprietary compiler vendors co-opting the code.

      What is far more likely is that the clang/LLVM group will produce a compiler good enough for realtime. It’s already good enough for production and compiles significantly faster than GCC does. Not the speed metric you want, but I think it’s indicative of quality. I don’t know what the performance of their generated binaries is, but everyone who has been paying attention thinks they’re going to get to where they blow the doors off of GCC.

  36. There are reasons to use open source 3D video drivers in Linux that have nothing to do with politics.

    For example, there was recently a critical root exploit found in the proprietary NVIDIA driver. These kinds of things can lurk hidden for a long time. Also, the Nouveau and open-source Radeon drivers play better with the open source software stack. True story: a bug of some sort (or possibly a heating issue) in an NVIDIA part in my work machine causes the proprietary driver to freeze hard; but Nouveau just halts X and dumps me back to the console. If you are doing light graphics work and desktop bling only, your system will be more stable with the open source software stack.

    If you are doing heavy 3D, what the fuck are you on Linux for? Get a Windows or Mac box; the software you will probably need is going to be on at least one of these two platforms.

  37. Jeff, I’m on Linux for SL as a proof of concept that I can actually get done what I need to on Linux. Eric’s been calling me an enemy of freedom for buying Apple, so Linux gets a good honest try. And if it’s a choice between running OS X and Windows, then that’s no choice at all; I refuse to run Microsoft on the Internet outside of a sandbox VM.

  38. “Actually, most people who use spreadsheets in the business world use them as a strange hybrid declarative programming environment.”

    This is true. I occasionally still use spreadsheets for pure number crunching. But most of my use (and those around me) is as a poor man’s database. Which it works really well at, provided one person is maintaining it – complicated Excel spreadsheets tend to be kinda fragile if too many people are screwing around with them. Forms, tables, lists, you name it – the combination of a semi-programming environment with easy calculation facilities on hand is fantastic, plus VBA if you want to get really tricky. About the last thing anyone does with spreadsheets these days is budgets or forecasts (that’s all on SAP or similar).

    Anything to avoid resorting to Access. It’s always bemused me that probably the best ever (Excel) and worst (Access) commercial programs for general users are packaged together.

  39. @Jay Maynard:

    Of course, graphics drivers are about as far from data jailing as you can get. From where I sit, the only reason to prefer an open source graphics driver is political.

    Not entirely so. I’m running Mint Nadia with an AMD Caicos card running multiple monitors. One of the first things I did when I installed Nadia was download the AMD proprietary driver. I found that the fglrx (proprietary) driver doesn’t support multiple monitors with Cinnamon nearly as well as the open source drivers, especially if those monitors are running different resolutions. Unfortunately, the open source drivers make the GPU run hotter, so I’ve got to keep an eye on the GPU temperature.

    However, the performance of the open source driver is nearly as good as the fglrx driver at this point.

  40. In the era of Moore’s Law, a 40% increase in speed, or reduction in memory or storage requirements, is trivial.

    I would be much more influenced by the presence or absence of essential features.

    What if the CS product does things the OS product doesn’t? And those things are important or critical to the job I’m doing?

    Another question is whether the OS product is effectively OS. Is the source code well maintained, or a mess? Is it well-made, or a mess?

    Is it supported by a goodly number of effective developers, or by a handful of oddballs, newbies, and wannabes?

    Is it supported at all, or have the developers all dropped away?

    Is all the code accessible, or is it buried in a corrupted repository?

    Was the project coded with a widely-used language and libraries, or something obscure?

    Was the product developed by and for non-English speakers? Even if there is an English front end: suppose all text in the code, documentation, and metadata is in Spanish? Or Russian? Or (other than digits) Chinese ideograms? (I don’t know that there are any Chinese-script programming languages, but there might be, and given the sheer size of China, I expect there will be soon.)

    Suppose the work involves developing a lot of scripting and formulas, and the OS product’s format for these is unique – so that moving the project to another platform would require a complete rebuild?

    Which, if any, of these issues could tip the balance in favor of a CS product?

  41. “””
    I’d say this is unlikely. GCC’s architecture is ancient and crufty – in some ways deliberately obfuscated, as RMS doesn’t want proprietary compiler vendors co-opting the code.

    What is far more likely is that the clang/LLVM group will produce a compiler good enough for realtime. It’s already good enough for production and compiles significantly faster than GCC does.
    “””

    So the functional difference between the “freedom” given by RMS’s GCC and Intel’s compilers is?

    I mean in either case you can’t fix it, you can’t fight it, you can only switch to something else.

    The folks I was working with really did need as fast as possible. Moore’s law just let them process MORE data in a given time. At some point Moore’s law will catch up with their demands, but that’s not for another generation or two. (Moore’s law generations, not human.)

    1. >So the functional difference between the “freedom” given by RMS’s GCC and Intel’s compilers are?

      One important difference is that GCC can be forked if the maintenance group turns incompetent or evil. In fact this has already happened once; the present GCC gang derives from a bunch of dissidents who created a fork called EGCS around 1997 because the official maintenance group at the time was moving too slowly. Eventually RMS blessed the fork.

  42. @Jay Maynard

    >Jeff, I’m on Linux for SL as a proof of concept that I can actually get done what I need to on Linux. Eric’s been calling me an enemy of freedom for buying Apple, so Linux gets a good honest try.

    I’ve been on the Mac my entire life, but for the last 6 months or so I have been using Linux almost full-time (at home and at work). I have been extremely impressed, and there’s absolutely no question that I can get work done on the platform (because I have been!). It’s *almost* an It Just Works experience on par with OS X at this point. In fact, in general it is *more* stable than OS X. The quality control on OS X has been declining over the last couple of years in my view, so Ubuntu’s leaps-and-bounds improvements in usability couldn’t have come at a better time.

    There are just a few slight niggles for me remaining. For example, I just cannot seem to get my trackpad to behave correctly. I keep getting stray touches picked up by the system that result in weird behaviour, which can be *really* annoying. Also, the system doesn’t quite seem to know how to handle battery management correctly for my laptop. I have had the machine just shut down on me without warning due to a dead battery.

    And, sadly, I have to bring up a Windows virtualbox now and then for spreadsheets, which I still haven’t managed to completely eliminate from my life. Libreoffice just doesn’t cut it, which is really annoying.

    Still, it’s 90% a great experience.

  43. @Jeff Read:
    >If you are doing heavy 3D, what the fuck are you on Linux for?

    Some people are on Linux on principle, others might primarily use it for something else and want to do heavy 3D every once in a while.

    >Get a Windows or Mac box; the software you will probably need is going to be on at least one of these two platforms.

    Weirdly enough, about the heaviest 3D application I have on my system (X-Plane) runs on my laptop (Ubuntu), but not on the family desktop (Windows). It’s a hardware issue: the Windows box has an ATI card with bug-ridden OpenGL support (DirectX is fine), and X-Plane uses OpenGL. My laptop doesn’t have that problem. Interestingly enough, for a certain OGL app that actually does run on the Windows box, the framerate on my laptop is actually higher, Wine overhead and all, than on the Windows box.

  44. @ ESR

    GCC’s architecture is ancient and crufty – in some ways deliberately obfuscated, as RMS doesn’t want proprietary compiler vendors co-opting the code.

    There is something about this that strikes me as borderline insane.

    But this… for some reason it makes me think of a long snake spotting the end of its tail and thinking “Ah! Free food!”.

  45. @BRM:

    Mate, take care of that neck. “Pinched Nerves” means it’s likely you’ve got seriously degraded disks, or passthrough holes are getting crudded up. I had my C6/C7 fused, and that is no fun at all. Especially since it f*s with my archery.

    @ESR:
    “”””
    One important difference is that GCC can be forked if the maintenance group turns incompetent or evil. In fact this has already happened once; the present GCC gang derives from a bunch of dissidents who created a fork called EGCS around 1997 because the official maintenance group at the time was moving too slowly. Eventually RMS blessed the fork.
    “””

    And the difference between this and me switching from Sybase to Oracle or from MS Orifice to Corel Office/WordPerfect is?

    No, seriously.

    Between what the stupid f*s mangling GNOME are doing and RMS being a right bastard with his religious sh* (obfuscation in a bloody compiler? WTF mate?), I gotta confess that at my elderly age my vision is blurry and I’m having trouble telling which are the pigs and which are the men.

    Now if you’ll excuse me, Mr. Alfred Simmonds has shown up to give me a ride.

    1. >And the difference between this and me switching from Sybase to Oracle or from MS Orifice to Corel Office/WordPerfect is?

      Huge. When GCC was forked, you didn’t have to perform a painful file or database migration to move your stuff. Absence of lock-in matters.

  46. “it depends”

    made my day. When I was studying engineering, our materials (read: polymers) teacher always answered every question that way: “the usual engineer’s answer: it depends”. And he always gave a long list of parameters. That post & most comments made me feel like a student again.

    I’d add to what others said that it depends on your ecosystem. When you’re working for a bank that relies on heavy process to compensate for lack of programming talent (that’s nearly official), going for open source is obviously a bad move (unless the commercial product is really inferior, and the question seems to assume it is not). Yes, there really is a French bank (the one that sponsors the French Open tennis tournament, thanks for not guessing) that asks such dumbed-down things of its programmers that even the worst programmer can achieve his objectives and deliver productive work. Of course, people with at least one ounce of talent quickly despair. But it works. With huge costs for making simple things, of course, but it works.

    Open source really gives you more power when you know you have at least some programming talent on your side. Most employers have no clue whether their computer programmers have any talent, and will recruit based on weak – or even irrelevant – predictors, like skin color, age, sex, diploma, graphology, shared interests, astrology… (communicating ability I do not include, as while it’s orthogonal to programming capacity, it’s still useful on the job).

    Most people reading or writing here are at least somewhat talented hackers, & have enough motivation to read a hacker’s blog. That’s not what the standard firm is made of. The standard firm is made of average people who don’t care about programming, & just need to show that they are making something. In that context, usually, open source is not a good choice (even if technically superior): it will not help the decider keep his own place, even less climb the corporate ladder.

  47. I’m not sure those cases are comparable. There’s a big difference between a compiler, which pretty much by definition has to process data in formats defined by someone outside the compiler writer, and an application program which has no such standards to follow.

    But then again, gcc *is* full of all kinds of nonstandard extensions that make a program written for it somewhat nonportable. I’m not sure that’s a feature.

    But William does have a point. When RMS tries hard to prevent a fork by making things obscure, he’s acting in ways contrary to his stated principles.

    1. >There’s a big difference between a compiler, which pretty much by definition has to process data in formats defined by someone outside the compiler writer, and an application program which has no such standards to follow.

      Not as large as you’d think. Often, application programs do have standards to follow – but extend them incompatibly as a form of lock-in. In other cases, vendors resist tooth and nail any effort to standardize formats and thus reduce lock-in. The comedy around OOXML is a case in point; see also various similar kinds of grief around CAD formats. One of the benefits of open source is that fights like this never happen – when you can’t keep your source code secret, trying to data-jail your customers is impossible from the get-go.

      >But then again, gcc *is* full of all kinds of nonstandard extensions that make a program written for it somewhat nonportable.

      Speaking as a person who has had to plumb those depths in connection with gpsd (atomic-locking and memory-fence instructions, yay!), this is at worst an extremely minor problem. And decreasing, as recent C and C++ standards have (quite appropriately) ratified gcc extensions.
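
      (To make that concrete, here is a minimal, hypothetical sketch – not gpsd’s actual code – of publishing a flag after a data write, first with the old GCC builtin and then with the C11 facility that standardized the idea:)

          /* Writer publishes data, then raises a flag; a full memory fence keeps
             the two stores from being reordered past each other. */
          #include <stdatomic.h>

          int data;
          int ready_legacy;        /* pre-C11 style: plain flag + GCC builtin */
          _Atomic int ready_c11;   /* C11 style: atomic flag + standard fence */

          void publish_legacy(int v)
          {
              data = v;
              __sync_synchronize();                        /* GCC builtin: full barrier */
              ready_legacy = 1;
          }

          void publish_c11(int v)
          {
              data = v;
              atomic_thread_fence(memory_order_seq_cst);   /* standard C11 fence */
              atomic_store_explicit(&ready_c11, 1, memory_order_relaxed);
          }

      (A reader needs a matching fence on its side, of course; the point is only that what once required a compiler-specific builtin is now spelled out in the standard.)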

      >When RMS tries hard to prevent a fork by making things obscure, he’s acting in ways contrary to his stated principles.

      OK, I’m going to bend over backward to be fair here by pointing out that this is wrong. RMS is conforming to his stated principles perfectly; your mistake is to think that his principles require him to make code re-use and engineering excellence priority #1. If you remember that his actual goal is to stamp out “software hoarding” by any means necessary, never giving an inch of ground to proprietary evil, you will realize that his choice to obfuscate the gcc stage interfaces makes sense in those terms.

      Mind you, I don’t agree with RMS’s priorities. For me good practice really is #1, which was the not-so-concealed point of my post. But I do recognize that his behavior is coherent.

  48. “When you’re working for a bank that relies on heavy process to compensate for lack of programming talent (that’s nearly official), going for open source is obviously a bad move (unless the commercial product is really inferior, and the question seems to assume it is not).”

    Oh boy can I vouch for this. The lack of IT talent at most of the banks I deal with makes me scared for my money. The only thing I’d disagree with is that their process ain’t worth a damn either.

  49. “But then again, gcc *is* full of all kinds of nonstandard extensions that make a program written for it somewhat nonportable. I’m not sure that’s a feature.”

    Again, it depends. :)

    Gcc’s computed gotos — one such extension — are a feature that many projects (including one of my favorites, Gambit, one of the world’s fastest scheme compilers) make use of.

    But you don’t want to use them unless you really, really have to.

    However, it should be noted that llvm is striving for compatibility with gcc including all the extensions.
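
    For anyone who hasn’t run into the extension, here is a minimal sketch of a computed-goto dispatch loop (a made-up toy interpreter, not Gambit’s code; it needs GCC or Clang, since labels-as-values are not ISO C):

        /* Toy bytecode interpreter using GCC's labels-as-values extension.
           Each handler ends in its own indirect jump instead of looping back
           through a switch, which is why threaded interpreters like it. */
        #include <stdio.h>

        enum { OP_INC, OP_PRINT, OP_HALT };

        int main(void)
        {
            static const int program[] = { OP_INC, OP_INC, OP_PRINT, OP_HALT };
            /* &&label (a GCC extension) yields the address of a label. */
            static void *dispatch[] = { &&op_inc, &&op_print, &&op_halt };
            const int *pc = program;
            int acc = 0;

            goto *dispatch[*pc++];      /* jump straight to the first handler */

        op_inc:
            acc++;
            goto *dispatch[*pc++];
        op_print:
            printf("acc = %d\n", acc);
            goto *dispatch[*pc++];
        op_halt:
            return 0;
        }

    On a hot interpreter loop the separate indirect jumps tend to predict better than one big switch, which is the usual reason projects reach for this.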

    1. >However, it should be noted that llvm is striving for compatibility with gcc including all the extensions.

      And doing a very creditable job. Those guys are good – damn fine hackers on every level.

  50. “”Oh boy can i vouch for this. The lack of IT talent at most of the banks i deal with make me scared for my money. The only thing i’d disagree with is that their process ain’t worth a damn either.””

    There IS some talent in banks. I tend to think I’m not awfully bad(not very good either, but still muuuuch better than average, at least average in banks). Once in a while, I meet someone very good. Our new kid here is damned good. The trick is : managers have no clue who they are. So things work better when they assume everyone is crap.

    Most banks I’ve been working for(that’s 4 out of 5) fit your description : crap process with crap people leading to maintenance nightmare. That one, though, is suprisingly succeeding in making things work through process, & only through it. Yes, it does exist. No, it’s not very impressive in terms of productivity. Or in terms of friendly interface. Or in terms of overall functionalities given to bankers for their daily tasks. But it works, roughly on time. Horribly costly.

    I know what I write here is heretic to most hacker’s belief. Not only talent can do the job. I’ve seen process working for 3 years. Yet, I’m just impressed by the feat that it works(I couldn’t believe it at first), not by the overall result, a few well-chosen hackers(not even good ones, just average ones like me) would have done much better for a fraction of the cost.

  51. @William O’Blivion
    > Between what the stupid f*s mangling GNOME are doing and RMS being a right bastard with his
    > religious sh* (obfuscation in a bloody compiler? WTF mate?), I gotta confess that at my elderly
    > age my vision is blurry and I’m having trouble telling which are the pigs and which are the men.

    Since the example of GCC has already been addressed, I should point out that there is already a viable fork of GNOME 2 (MATE), and a GNOME 2-ish desktop built on top of GNOME 3 (Cinnamon). Neither would have been possible had GNOME been proprietary.

  52. “I’d add to what others said that it depends on your ecosystem. When you’re working for a bank that relies on heavy process to compensate for lack of programming talent (that’s nearly official), going for open source is obviously a bad move (unless the commercial product is really inferior, and the question seems to assume it is not).”

    Banks don’t have to lack for IT talent. All the money they make from ruining economies and putting the rest of us in the poor house can — and does! — pay for top coders.

    Recently there was a peaceful revolution in Iceland under which the banking system was nationalized, the corrupt private bankers kicked out and prosecuted for their deeds. One of the interesting results of this, mentioned by the Icelandic president, is that the IT sector got an influx of talent. Now that the financial industry has collapsed, the smart folk that had been wooed away from productive work and into finance with huge salaries are out looking for jobs in other sectors.

    But yeah, when it comes to customer-facing things like online banking, the bank just doesn’t give a shit and will hire almost any warm body that comes through the door and meets their arbitrary criteria. The actual transaction code is rock-solid and has been for decades — it still runs on COBOL you see, on IBM mainframes…

  53. @ William O. B’Livion

    Mate, take care of that neck.

    Thanks for the thought. Necks have way too many moving parts.

    @ everyone

    I realize my comments about RMS and gcc were off-topic, but obfuscation of “free” gcc just blew me away. However ESR has already made his stance re: RMS clear. My apologies.

    1. >http://www.coboloncogs.org/HOME.HTM

      There are not words in any human language to express how…wrong…that is.

    1. >http://www.reagencydesign.com/products/241597-political-party-monsters-s-xl

      Much, much more wrong. Sorry, you don’t even compete with COBOL ON COGS. It spurns your petty satire.

  54. Generally my decisions are not based on performance.

    1) Is it compatible with all the other people I am in contact with? Can a recruiter edit my CV (remove my contact details, add his) as easily as with a .doc file? Can this spreadsheet open and run Excel files chock-full of macros? Does this VoIP client work well with all the people who use Skype? Can I easily load a Visual Studio project into this IDE and save it too?

    2) Is it compatible with all the other software I use? Can this OS run Total War or Civ 4 games without many problems? Can this open source spreadsheet easily accept data exports from my ERP software?

    3) Does it feel mature, stable, polished, like something where the user experience is already nailed down?

    Currently I find that both in business and gaming I stick to Windows for these reasons. I use Linux boxes for two purposes:

    1) a simple web terminal, running basically just a browser, plus a little entertainment like music and films, and basic stuff like picture editing and uploading, etc. – casual-user stuff. I actually installed Lubuntu for my dad on an aging laptop for this.

    2) I currently do not, but I can easily imagine using a Linux box for a mainly web-based development machine. The LAMP stack or something similar. Currently my development mostly goes into one closed-source ERP, and generally there are web extensions, but usually in a Microsofty, ASP.NET environment, so it will not happen anytime soon; but, for example, if I worked for a company that used a combined Microsofty and LAMP environment, I would probably learn EMACS. I have actually tried its nxml-mode when I was desperate for a half-usable XML editor and found it good. I kind of envy people who work with Python in a Python-based IDE like Spyder.

  55. COBOL ON COGS: I did some work a while back with RPG/400-powered AS/400 machines. I don’t really remember the details, but it was well optimized for high-speed data entry. I remember what I liked was that, for example, if you went to the second menu, third submenu, fifth subsubmenu 200 times a day, you did not have to do today’s slow, inefficient click-click-click; you simply typed 235, pressed ENTER and you were there. If your job is to make 200 invoices a day, they rock.

    A friend of mine liked the productivity of CA-Clipper’s similar keyboard-oriented, “browse and mask” user interface, so he and some other folks created a Clipper to C++ translator, and developed stuff like petty-cash software in it where you could do everything with one hand on the numeric keypad – navigate menus, search from lists (press dot, enter, turn num lock off, search, enter, turn it back on) – while you have your left hand free to stuff your face with donuts or take calls. The users loved him for it!

    This stuff is outdated. Yet it teaches an important lesson. People do not just use software for pleasure; often they are “human data entry devices”, so we must focus on productivity just as much as ease of learning!

  56. Shenpen, I’ve said it before: if you communicate with the outside world, you MUST have a copy of Microsoft Office. OpenOffice and LibreOffice won’t cut it.

    And COBOL ON COGS is funny in part because while we like to pretend that we are cooler than COBOL, COBOL is so far ahead of the competition for real business needs that it will be around long after the fad languages have died out. Again, you use whatever lets you get shit done.

  57. @Jeff Read
    I manage very well without touching MS Office. And I communicate a lot with others who do.

    On the other hand, I use PostgreSQL and R for work. That is, real tools for grown ups that have to work with real data. Excel is a toy.

  58. @Jeff Read
    I forgot, LaTeX for publishing and presentations. Word is for shopping lists, Ppt for high school presentations.

  59. @Winter I studied 2 semesters of statistics, stuff like regression calculations, yet in 10 years of ERP consulting no business manager ever wanted more complicated stats than, say, the % difference between actual and budget. I think they are not even aware that advanced stats exist. Or they do not trust the data enough for such deeper analysis, which is understandable, as it indeed tends to be inaccurate.

  60. @Shenpen
    I know. But then, why talk about the horrible inadequacy of LibreOffice?
    Whenever I read someone who claims he needs Excel with 10,000+ cells, I see a person using the wrong tool.

  61. I know. But then, why talk about the horrible inadequacy of LibreOffice?
    Whenever I read someone who claims he needs Excel with 10,000+ cells, I see a person using the wrong tool.

    Excel lets you get the job done without being a programmer. The same goes for Word and document formatting. That’s EVERYTHING in the business world; large chunks of the American economy run on Excel sheets. (Yet another reason to be scared of Murka…)

  62. @Jeff Read
    There is a story going round that says there were funds employing economists as quants and funds employing physicists. The former made the heavy losses during the 2008 meltdown.

    I bet the physicists did not use Excel.

  63. “I remember what I liked was that, for example, if you went to the second menu, third submenu, fifth subsubmenu 200 times a day, you did not have to do today’s slow, inefficient click-click-click; you simply typed 235, pressed ENTER, and you were there. If your job is to make 200 invoices a day, they rock.”

    You’re reminding me of why vi is a much more productive environment for me than Word for editing flat text files. I have yet to see a word processor (as opposed to a text editor) where I almost never lift my fingers off the home row.

    Word does not have nearly enough keyboard shortcuts to cover all tasks I do routinely.

  64. “@Winter I studied 2 semesters of statistics, stuff like regression calculations, yet in 10 years of ERP consulting no business manager ever wanted more complicated stats than, say, the % difference between actual and budget. I think they are not even aware that advanced stats exist. Or they do not trust the data enough for such deeper analysis, which is understandable, as it indeed tends to be inaccurate.”

    I work in consumer research, and we do much more complex stats than % difference. I routinely work with regressions, null hypothesis testing, key driver analyses, cluster analysis, etc. It really all depends on which functional area you work in.

  65. I will trade Visual Studio for a Linux system from a respectable distribution any time. The VS thing is just an anchor for serious development: inflexible, feature-poor, resource-hungry, unstable and slow (well, that last might be by virtue of its host operating system, but anyway).

    With a world-class debugger, a C++ compiler that still (afaik) produces more efficient code than gcc, and an editor that actually knows the language you’re working in, and is able to do useful transforms (hello, refactoring tools!), which is an absolute must for large projects. And then there are just the darned cool things like edit-and-continue…

    Visual Studio makes the average developer (as opposed to the l33t hax0r who has been on Unix since his weaning) more productive than if he had had a command line, editor, and GNU toolchain; and many of its components (like the debugger) are just flat-out better than the corresponding OSS tools.

  66. @Jeff Read:

    > With a world-class debugger, …

    All that is true. And increasingly irrelevant. I’ve never been one to live in the debugger, but these days I only use a debugger maybe once every two years or so.

    Python is the cement that holds my world together, and quite often I can make really awesome concrete without any aggregate. When I do need aggregate, I use really fine-grained Cython or C functions. Debugging the occasional function which is not memory managed, and which is an exact analogue of my memory-managed prototype, is usually a piece of cake.

    Obviously I’m not alone in switching to this paradigm.

    Even back when I had a really shitty development environment for V.92 modem code, I would prototype my DSP assembly language in Python. It was pretty awesome, because I could start with stuff that worked that was fairly abstract, then iterate on the Python and make less abstract Python versions that still passed my Python regression testbench but looked more and more like the final assembly language, until it was a very simple almost-impossible-to-screw-up transliteration from the Python to the assembly language.
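
    For what it’s worth, here is a rough sketch of that workflow with made-up function names and a toy FIR filter standing in for the real DSP code: an abstract prototype, an “assembly-shaped” rewrite, and the little regression testbench that both versions (and later the hand-written assembly) have to pass.

        # Toy example only; not the actual modem code.
        # fir_abstract is the readable prototype; fir_asm_shaped is the same
        # algorithm reshaped to mirror the eventual assembly (explicit indices,
        # explicit accumulator), so the final transliteration is nearly mechanical.

        def fir_abstract(coeffs, samples):
            """High-level prototype: idiomatic Python."""
            out = []
            for n in range(len(samples)):
                window = samples[max(0, n - len(coeffs) + 1):n + 1][::-1]
                out.append(sum(c * s for c, s in zip(coeffs, window)))
            return out

        def fir_asm_shaped(coeffs, samples):
            """Same algorithm, shaped like the target assembly."""
            out = [0] * len(samples)
            for n in range(len(samples)):
                acc = 0                      # the accumulator "register"
                for k in range(len(coeffs)):
                    idx = n - k
                    if idx >= 0:
                        acc += coeffs[k] * samples[idx]
                out[n] = acc
            return out

        def regression_test():
            """The testbench every version, including the assembly, must pass."""
            coeffs = [1, 2, 3]
            samples = [5, 0, -2, 7, 1]
            assert fir_abstract(coeffs, samples) == fir_asm_shaped(coeffs, samples)

        regression_test()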

  67. In other news, ComScore reports iOS went from 34.3% share in September to 36.3% share in December while Android went from 52.5% share to 53.4% share in the US…

  68. Visual Studio makes the average developer (as opposed to the l33t hax0r who has been on Unix since his weaning) more productive than if he had had a command line, editor, and GNU toolchain; and many of its components (like the debugger) are just flat-out better than the corresponding OSS tools.

    The l33t hax0r on Unix is overrated. I know many really good coders who do Windows or, heaven forbid, Java work (you know… Android devs…). I would also say the same about some of the ObjC coders I know on OS X, but they fall under the Unix banner, albeit via Xcode.

    Open or closed source, “average” corporate devs do most of the heavy lifting.

  69. @Patrick
    “Python is the cement that holds my world together, and quite often I can make really awesome concrete without any aggregate. ”

    If you compare the differences in total time of development and debugging to total aggregate run time of the resulting program, using C often becomes hilariously inefficient.

    I understand using C for OS kernels and programs used by many every day.

    But building a personal document parser with a web crawler interface in C versus Perl/Python/Ruby? Not.

  70. Excel is not a toy. (The same is true for other spreadsheet calculators.)
    Toys are designed to be safe for their users.
    When spreadsheets grow, the possibility of cleverly buried errors grows exponentially. Spreadsheets should not be allowed to grow much beyond the screen size.

  71. Winter,

    In the Beginning Was the Command Line was declared obsolete by Stephenson when he installed Mac OS X.

    The Windows Philosophy (binary file formats, large programs that do many things, ease of use uber alles) is winning, even in open source land — look at systemd.

    1. >In the Beginning Was the Command Line was declared obsolete by Stephenson when he installed Mac OS X

      You’ve claimed this before. My own conversations with the man do not so indicate. Cite, please.

  72. http://slashdot.org/story/04/10/20/1518217/neal-stephenson-responds-with-wit-and-humor

    Neal:

    You guessed right: I embraced OS X as soon as it was available and have never looked back. So a lot of “In the beginning was the command line” is now obsolete. I keep meaning to update it, but if I’m honest with myself, I have to say this is unlikely.

    To be fair, he did say “a lot of” and not the whole thing, but one of its central theses — that Linux is worth the time investment because you trade comfort and “bling” for an OS that’s designed for big jobs and built to last — was severely challenged by OS X.

    As to my point, I’m a traditionalist — a member of perhaps the last generation of people who can call themselves traditionalists when it comes to Unix — so it pains me to say this, but there is actually precious little of the Unix tradition that’s really relevant anymore, and it’s fading fast as a new hacker community that was brought up with Macs, Windows, and the Web becomes ascendant.

    1. > precious little of the Unix tradition that’s really relevant anymore

      What a willful misinterpretation of his remark this is. From Stephenson’s POV, half or more of the lure of OS X is that it is Unix underneath. What’s obsolete in that essay is, mostly, the bright hopes for BeOS.

      Unlike you, I actually know Stephenson slightly. We’re not best buddies or anything, but we’ve met FTF and remained friendly by email since. I don’t think his admiration for the Unix tradition has decreased at all – he used to be a programmer, close to and perhaps actually being a hacker even by my strictest definitions. He certainly identifies with us culturally.

      1. >So I want to know if something can be done. Thanks.

        Not from reposurgeon; this is an internal git problem, at a level reposurgeon very deliberately knows nothing about. Try “git prune”.

  73. I don’t think his admiration for the Unix tradition has decreased at all – he used to be a programmer, close to and perhaps actually being a hacker even by my strictest definitions. He certainly identifies with us culturally.

    I’m not doubting Stephenson’s admiration for the Unix tradition. But Mac OS X breaks from tradition in many important ways — most notably by hiding it from anyone who doesn’t actively go looking for it — and still confers the benefits of that tradition in the form of a robust platform that can handle serious computing tasks. Android breaks from Unix tradition even more radically.

    And if you can get the benefits of Unix without having been steeped in the memetics of Unix, as Mac OS X users do, you don’t grow the respect for the reasons why Unix is the way it is. Which isn’t the case for Stephenson, but it’s increasingly the case for a generation of programmers about my age and younger, who feel — not that Linux is Unix and all the great things that follow on from that, but that Linux needs to “keep up with” Windows, Mac, iOS and Android and needs all the cruft that inheres in those. Hence the increasing favor for large, kitchen-sink programs, binary protocols, and tight coupling.

    A good early example of this attitude is Miguel de Icaza’s essay “How to Make Unix Not Suck”.

    1. >Which isn’t the case for Stephenson, but it’s increasingly the case for a generation of programmers about my age and younger

      I don’t consider this belief consistent with the sales of The Art Of Unix Programming, nor how often I’m asked to autograph it. It’s not so much the absolute volume as the fact that the book has long legs, still selling well seven years after issue.

  74. > look at systemd

    Lennart Poettering’s refutation of some of the frequent claims about systemd:

    http://0pointer.de/blog/projects/the-biggest-myths.html

    On a related note, I just watched Daniel Stone’s talk at LCA 2013 about Wayland, and he expressed frustration at the “internet peanut gallery” writing uninformed comments on various sites about how X11 is unixy and Wayland is not. At one point he countered with “what one thing does X do and what does it do well?” Video here:

    http://mirror.linux.org.au/linux.conf.au/2013/mp4/The_real_story_behind_Wayland_and_X.mp4

    (The rest of LCA at http://mirror.linux.org.au/linux.conf.au/2013/mp4/)

  75. Winter, I actually used MS BASIC to write a real program once when it was the best tool for the job: it needed not much in the way of performance but did need rapid development, and I could knock out something in it fast.

    I doubt you’ll find people here who will argue with the idea of using Python for most tasks, certainly as opposed to, say, C. (Or C++, though I’m beginning to think that’s not so much a programming language as it is a cruel joke perpetrated by Bjarne Stroustrup. There are days when I am sorely tempted to get in my car, drive to College Station, drag him bodily out of his office, and skin him alive an inch at a time on the 50-yard line at Kyle Field.)

  76. And if you can get the benefits of Unix without having been steeped in the memetics of Unix, as Mac OS X users do, you don’t grow the respect for the reasons why Unix is the way it is.

    How many of those reasons are really relevant today?

    Which isn’t the case for Stephenson, but it’s increasingly the case for a generation of programmers about my age and younger, who feel — not that Linux is Unix and all the great things that follow on from that, but that Linux needs to “keep up with” Windows, Mac, iOS and Android and needs all the cruft that inheres in those. Hence the increasing favor for large, kitchen-sink programs, binary protocols, and tight coupling.

    What “cruft”? You mean “cruft” like a GUI? Or the Core Foundation libraries that OSX and iOS developers use? Or Quartz? Or Xcode? Or Eclipse? Or Java? Or VMs like Dalvik? Or a stable ABI?

  77. @Jeff Read
    “In the Beginning Was the Command Line was declared obsolete by Stephenson when he installed Mac OS X”

    Why do you think this is relevant? I can think for myself, thank you. If a text or a piece of music or anything else is worthy, I do not care whether the original author has found some god and denounces his earlier work.

    What is it with people that they think the current opinion of an author actually makes any difference?

    I see that all the time: Darwin/Einstein/Schrödinger was deceived/lying/atheist. None of that matters to the value of their works.

    What Neal wrote about the power of the piped command line versus the (tree-based) menu system underlying GUIs is true for reasons beyond his simple story.

    For one thing, CLIs like Bash are Turing complete. GUIs never are (they are at most slightly above the level of a stack-based machine / context-free). It is obvious that a Turing-complete interface will baffle J. Average Consumer. The Hole Hawg story made that point explicitly.

    It is really great that simplifying GUIs make complex computing tasks available to people who do not care for the real power. I have advocated that my whole life (at least the part which overlapped with the existence of personal computers).

    But the fact that people do not care about the foundations of their buildings nor the plumbing or wiring does not mean foundations, plumbing, and wiring are unimportant and “going away”.
    Nor is this true of Unix, as is obvious from the market-share numbers.

  78. I don’t consider this belief consistent with the sales of The Art Of Unix Programming, nor how often I’m asked to autograph it. It’s not so much the absolute volume as the fact that the book has long legs, still selling well seven years after issue.

    Well, you would know. Glad to see someone’s out there keeping the spirit alive.

    I also recall Rob Pike, who said in another Slashdot interview that the days of Unix programs doing one thing well “are dead and gone and the eulogy was delivered by Perl”. But his choice of editor betrays that he doesn’t totally believe that himself. Maybe he just despairs for the future of our craft, like me.

    Lennart Poettering’s refutation of some of the frequent claims about systemd:

    Lennart’s refutations are bullshit. It may take the wisdom of a Master Foo to fully and succinctly elucidate why they’re bullshit, but I can feel it in my bones and I’m not the only one.

    Strike one against systemd is the idea that pid 1 should do a lot. Strike 2 is the idea — repeated loudly and often by systemd fans — that we really are better off with binary daemon logs that can only be examined with specialized tools.

    That said, the sysvinit system — as implemented on most distributions — is needlessly complex and a crock. BSD init rocks.

    “what one thing does X do and what does it do well?”

    X is a remote graphical object and event server. That’s what it does. X is big because doing graphics in a device-, platform-, and network-agnostic way is HARD. PulseVideo–er, Wayland, doesn’t do nearly the same stuff that X does. It’s almost orthogonal to X and could have easily been pitched as a refactoring of the lower bits of the X server rather than The Thing That Will Finally Bury X. With that and network transparency, I might be more inclined to embrace it.

  79. What Neal wrote about the power of the piped command line versus the (tree-based) menu system underlying GUIs is true for reasons beyond his simple story.

    I’m on your side on this but it’s deeper than that. The Unix command line is so powerful and flexible because of the idea of a constellation of small, loosely coupled programs that each fulfill a single abstract role. But the trend has been toward bigger and/or more tightly coupled programs that optimize for the case the developers think is important and ignore the others because fuck you, that’s why. Hence PulseAudio, systemd, NetworkManager, Wayland, and the rest of the crap.

    But the fact that people do not care about the foundations of their buildings nor the plumbing or wiring does not mean foundations, plumbing, and wiring are unimportant and “going away”.

    In case you haven’t noticed, software is not quite the mature discipline with a sense of history and respect for designs that have withstood the test of time that architecture is — and even architecture manages to pull some real boners. (Modernism. Ugh.)

    Nor is this true of Unix, as is obvious from the market-share numbers.

    Android is pretty much the poster child for radically rethinking Unix to the point of near-unrecognizability. (That said, props to the Android devs for cramming a quite sophisticated mobile touch environment into a few hundred MiB — something Microsoft couldn’t hope to achieve as the headlines about actual usable Surface capacity indicate.)

  80. @Jeff
    “In case you haven’t noticed, software is not quite the mature discipline with a sense of history and respect for designs that have withstood the test of time…”

    I thought the Unix pedigree was all of that and more. Actually, this is exactly the reason Unix is so popular and successful.

  81. That said, the sysvinit system — as implemented on most distributions — is needlessly complex and a crock. BSD init rocks.

    Essentially this translates into SYSV sucks and BSD rocks. A sentiment I’ve had since Solaris 2.

  82. I’m on your side on this but it’s deeper than that. The Unix command line is so powerful and flexible because of the idea of a constellation of small, loosely coupled programs that each fulfill a single abstract role.

    Yes, but it’s done so poorly. The VMS command line was as powerful, didn’t use arcane commands, and its parameters were consistent across commands. I can’t recall if the commands were built into the shell or were individual executables, but either way a bunch of small programs with a bunch of inconsistent switches sucked in the 70s in comparison to other systems and sucks harder today.


    But the trend has been toward bigger and/or more tightly coupled programs that optimize for the case the developers think is important and ignore the others because fuck you, that’s why. Hence PulseAudio, systemd, NetworkManager, Wayland, and the rest of the crap.

    The implementation has been crap. On OSX the implementation of the equivalent subsystems (say CoreAudio, etc) have been top notch and hugely beneficial to both users and developers.

    In case you haven’t noticed, software is not quite the mature discipline with a sense of history and respect for designs that have withstood the test of time that architecture is — and even architecture manages to pull some real boners. (Modernism. Ugh.)

    Software folks are as hidebound as any other, and the rate of adoption of new ideas is about the same as in every other discipline. Besides, if you state that software is not a mature discipline, then no designs have withstood the test of time, because relative to architecture there has been no time to withstand.

    I counter that software as an engineering discipline has been around since the 1960s, and 50 years makes for a pretty reasonable level of maturity given the rate of advance. I would also counter the notion that, as a discipline, we don’t respect designs that are proven to work.

    However, these designs are now encapsulated in the abstractions we use to maintain parity with or exceed the rate of hardware advances.


    Android is pretty much the poster child for radically rethinking Unix to the point of near-unrecognizability. (That said, props to the Android devs for cramming a quite sophisticated mobile touch environment into a few hundred MiB — something Microsoft couldn’t hope to achieve as the headlines about actual usable Surface capacity indicate.)

    Android isn’t Unix. It has a linux kernel but if Google swapped in mach I doubt many app devs would notice. Or a WinNT kernel.

    And I don’t recall WinPhone 8 being all that large. The 32GB Nokia 920 has about as much usable space as an iPhone. I don’t recall the numbers, but it certainly wasn’t Surface Pro-like. Surface Pro has a lot of stuff in order to support legacy apps.

    1. >The VMS command line was as powerful, didn’t use arcane commands, and its parameters were consistent across commands. I can’t recall if the commands were built into the shell or were individual executables, but either way a bunch of small programs with a bunch of inconsistent switches sucked in the 70s in comparison to other systems and sucks harder today.

      Spoken like somebody who never had to use VMS for actual production. There was no piping; everything had to be done through tempfiles (OpenVMS eventually fixed this by adding a PIPE constructor). But worse than that, the process-spawn overhead of VMS was immense. Yes, VMS was so wonderful that, by around 1983, about 25% of VAX 11/750s were getting VMS stripped off of them in favor of Unix the second they came in the door. Later, the percentage increased. Many of the installations that stayed with VMS were running EUNICE, a replacement CLI that – even though it emulated the Unix command line only rather crudely – was generally felt to be far superior to the native CLI.
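
      To make the contrast concrete for readers who never used either system: in Python’s subprocess terms, the two styles look roughly like this (a toy ls-into-grep example; nothing here is VMS-specific, it just illustrates piping versus staging through a tempfile).

          import os
          import subprocess
          import tempfile

          # The pipe way: both programs run concurrently, no intermediate file.
          p1 = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
          p2 = subprocess.Popen(["grep", "rwx"], stdin=p1.stdout, stdout=subprocess.PIPE)
          p1.stdout.close()          # so p1 gets SIGPIPE if p2 exits early
          piped = p2.communicate()[0]

          # The tempfile way: run the first program to completion, park its
          # output in a file, then feed that file to the second program.
          with tempfile.NamedTemporaryFile(delete=False) as tmp:
              subprocess.run(["ls", "-l"], stdout=tmp, check=True)
          with open(tmp.name, "rb") as staged:
              filed = subprocess.run(["grep", "rwx"], stdin=staged,
                                     stdout=subprocess.PIPE).stdout
          os.unlink(tmp.name)
          # piped and filed now hold essentially the same bytes; the second
          # version just did it serially, with a file in the middle.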

  83. I’ll second Eric’s comments about VMS, and point out two things: a few commands were badly, horribly overloaded (SET being the canonical example), and the command syntax itself led to severe cases of COBOL fingers.

    I have no clear memories of BSD init aside from the fact that it doesn’t provide a cleanly extensible architecture; whatever else you may think of SysV init, it does do that. Ubuntu’s startup stuff does that even better, though.

    To me, X is a solution to a problem that no longer exists: a graphical server/terminal environment. As a result, it’s crufty and needlessly heavyweight, while ignoring things that a GUI needs to do. As such, what’s happened is not one, but several, GUI environments, and that’s needlessly fragmented. GNOME’s insistence on doing away with customizability hasn’t helped.

  84. > X is a remote graphical object and event server. That’s what it does.

    Yeah. It’s just that it doesn’t do it very well with current clients. E.g. in the aforementioned molecular modelling lab, there was frequently a need to run 3D graphics applications remotely, because some proprietary application or another was licensed only on some machines. Direct rendering doesn’t work remotely, so a lot of that was just not possible on X. Some of those applications could switch to an Xlib rendering mode, which was generally so slow that you would have been better off with a dumb VNC-style terminal instead. Nowadays direct rendering is used for all sorts of things besides actual 3D graphics applications, of course, so the situation is getting worse. Then there are various sorts of breakage apart from direct rendering, e.g. older Unix programs breaking when trying to connect to a current X.org server, current Linux programs breaking when trying to connect to a proprietary X server, etc. etc.

    I’m sure that all of this can be blamed on the clients and implementations, and a paper written about the virtues of the X11 protocol, but it doesn’t change the fact that network transparency is a crapshoot with current X11 clients.

  85. I thought the Unix pedigree was all of that and more. Actually, this is exactly the reason Unix is so popular and successful.

    You’re right. But the designers and establishers of the Unix pedigree are not the current OSS community. There is a strong push to have Linux stop being Unix and start being GNOME-OS. Case in point.

    Alan Kay ranted about this when he said “computing is a pop culture”. Back in the late 90s, a large number of people working on Linux were using it as a substrate for Java. These days it’s Ruby on Rails, with JavaScript looking likely as an upcoming replacement. When I first started out on Linux, most of its users “grokked Unix”. My university had a Linux users group in which we discussed the low-level details of how Linux, and other Unix systems, did things. Soon after, this was no longer the case.

    To me, X is a solution to a problem that no longer exists: a graphical server/terminal environment.

    You may not ever deal with the problem that X solves. You may not want to and assiduously avoid it. But that doesn’t mean it doesn’t exist.

    If I copped that attitude I could say Word and Excel solve problems that don’t exist, and get a tongue-lashing from Burnside about how out of touch I am.

    As a result, it’s crufty and needlessly heavyweight, while ignoring things that a GUI needs to do. As such, what’s happened is not one, but several, GUI environments, and that’s needlessly fragmented. GNOME’s insistence on doing away with customizability hasn’t helped.

    1) Wayland doesn’t solve those problems either.

    2) What you are describing is the X design guideline of “specify mechanism, not policy”, which is a core Unix principle and why X, despite being large and heavyweight in comparison to other programs (but was actually lighter and faster than Win32’s drawing libraries even in 1995 when I first tried it), is considered “Unixy”. I’ve mentioned this before; what you actually need is a single, soup-to-nuts API that specifies everything from graphics hardware primitives to widget layout, appearance, and UI guidelines — and cattle prods to poke developers with when they don’t follow them. Because, so the wisdom goes, a non-sucky OS specifies mechanism AND policy — for everything.

    The problem is, not even Apple or Microsoft follow all of their own policies. People including myself have lamented that LibreOffice is based on a completely custom widget set that has nothing to do with the standard system widgets. That’s true, and that’s a problem but so is Microsoft Office. (Dragon Naturally Speaking basically has to OCR each button on the screen so it knows what you mean when you say “click OK” inside Office.)

    I decided a long time ago I don’t like an OS from a system vendor that decides upon rules for me to follow that they don’t want to follow themselves. If Linux becomes such an OS, I will move to BSD.

    Also, the last time someone pointed to a Macintosh and said “This is what we need to build”, the result was Windows. GNOME is now doing the same, and achieving similar levels of suck.

    I don’t run GNOME, or KDE, so the problem of which desktop will triumph is about as much a problem for me as it is for you. But the encroachment of Windows/Mac end user desktoppy cruft into pid 1 and the kernel causes me to worry, and worry hard.

  86. I’m sure that all of this can be blamed on the clients and implementations, and a paper written about the virtues of the X11 protocol, but it doesn’t change the fact that network transparency is a crapshoot with current X11 clients.

    That sucks, but Wayland gives off every whiff of addressing the issue in CADT-compliant fashion, rather than — you know — being disciplined about solving it.

    1. >people I knew who actually admin’d VMS VAXen and wanted to get the hell out.

      Heh. I knew several such people in the 1980s. I was never directly a VMS user myself, but their war stories were both lurid and consistent.

      I cut my Unix teeth on a VAX-11/750 that had had its VMS stripped off and replaced with BSD4.1 (not 4.2, 4.1). Lotta that going around back then.

  87. Eric and Jay, your VMS experience jibes with my own limited experience, and with that of people I knew who actually admin’d VMS VAXen and wanted to get the hell out.

    When I started college the university had a VAXcluster and also had just acquired a shiny new SGI Challenge Unix server. If VMS were so much better, you’d think the Unix server would see little use, perhaps being left to collect dust in the EECS department, and they would upgrade to Alphas running OpenVMS. (DEC was still a thing back then.) But no, everybody eventually did stuff on the Unix box and the VAXen, and VMS, were taken offline completely in a couple of years. Good riddance.

  88. @esr:
    > But worse than that, the process-spawn overhead of VMS was immense.

    To be fair, hasn’t process creation traditionally always been a heavyweight operation? Certainly with Multics and VMS, the two systems that I had most of my experience on before Unix. It’s debatable what caused what – whether they had a single-process model because creating processes was expensive, or whether there was no need to make process creation cheap because of their single-process model.

    There was a design constraint for Unix too – limited address space, with the only way to effectively extend it being to have multiple processes communicating via pipes or other means. So Unix process creation had to be made cheap.

    1. >To be fair, hasn’t process creation traditionally always been a heavyweight operation?

      Outside of Unix and its derivatives, yes. What this means is that nobody else got it right. Weirdly, 50 years later non-Unix operating systems are still by and large getting it wrong.

      Your notion that small address spaces drove cheap process spawning is interesting but I don’t think the record really supports it. I’ll ask Doug McIlroy about this sometime; he would know.

  89. > Wayland gives off every whiff of addressing the issue in CADT-compliant fashion, rather than — you know — being disciplined about solving it.

    I don’t actually think so at all, but never mind, this is all pretty far off-topic.

  90. @Jay:
    >As a result, it’s crufty and needlessly heavyweight, while ignoring things that a GUI needs to do. As such, what’s happened is not one, but several, GUI environments, and that’s needlessly fragmented. GNOME’s insistence on doing away with customizability hasn’t helped.

    What does it ignore that a GUI needs to do? I won’t argue with you on GNOME 3’s regressions on the customizability front, though. (GNOME 2 / MATE is the best GUI I’ve ever seen by a long shot, and much of that is because of its customizability).

    @esr:
    >Outside of Unix and its derivatives, yes. What this means is that nobody else got it right. Weirdly, 50 years later non-Unix operating systems are still by and large getting it wrong.

    To be fair, both Unix and most extant non-Unix systems I can find references to had their origins either on hardware that didn’t support copy-on-write or before COW was popular (or both), and I do wonder how lightweight fork/exec really is when COW is not available (then again, I’m a young whippersnapper who has hardly ever touched hardware that didn’t support COW, and only dabbles in programming).

    Then you have the fact that people these days who start new OS projects and like Unix are likely to create Unices, and that even those who are neutral to Unix are likely to use a Unix as their OS development environment (and thus to be swayed towards creating Unices). The result is that only people who actively hate Unix are likely to create new non-Unices. Add to this the fact that fork/exec is a bit counterintuitive at first glance, and it’s statistically likely that anyone who’s bothering to create a non-Unix in the first place thinks that fork/exec is an evil unixism.

    So non-Unices with ancient roots lack lightweight process spawning by inertia from their younger days, and young non-Unices lack lightweight process spawning because their developers are convinced that their heavyweight process spawning mechanisms are better engineering than fork/exec.
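
    For concreteness, the fork/exec pattern under discussion is just this (a minimal Python sketch, Unix-only; os.fork() and os.execv() are thin wrappers over the underlying syscalls, and on a COW system the fork itself copies almost nothing):

        import os
        import sys

        pid = os.fork()              # cheap on a COW system: pages are shared until written
        if pid == 0:
            # Child: replace this process image with /bin/ls.
            os.execv("/bin/ls", ["ls", "-l"])
        else:
            # Parent: wait for the child and report its exit status.
            _, status = os.waitpid(pid, 0)
            print("child exited with", os.WEXITSTATUS(status), file=sys.stderr)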

  91. > GNOME is now doing the same, and achieving similar levels of suck.

    The majority of interface developers coding today got their start on their parents’ Windows machine.

    That is the baseline “normal” from which they deviate.

    CADT is the order of the day.

  92. > Your notion that small address spaces drove cheap process spawning is interesting but
    > I don’t think the record really supports it.

    I don’t have any authority that this is true, but it was certainly the case that, with Unix on early 16-bit hardware, large applications did have to be split up. I know someone who worked on an APL system (in the early 1980s) that had to be split up into three processes because of address space constraints, and I can even remember reading something in Byte (about the same time) of some huge application that had to run as eight processes for the same reason.

    Whether this was a driver for Unix’s cheap process model I don’t know – but, with the address space constraint, if Unix had *not* had cheap processes then it would not have been practical for any substantial task.

    1. >with the address space constraint, if Unix had *not* had cheap processes then it would not have been practical for any substantial task.

      I think that was arguably true on PDP-11s, but stopped being true on later hardware. So if you go back far enough, small address spaces could have driven lightweight process-spawning. But as I read the early accounts, it was actually driven by Thompson’s interest in implementing and then generalizing ideas from Multics, where address-space size wasn’t an issue.

  93. Strike one against systemd is the idea that pid 1 should do a lot. Strike 2 is the idea — repeated loudly and often by systemd fans — that we really are better off with binary daemon logs that can only be examined with specialized tools.

    That said, the sysvinit system — as implemented on most distributions — is needlessly complex and a crock. BSD init rocks.

    I like the main idea behind systemd, namely socket-based activation instead of manually specifying the order of operations (sysvinit) or manually specifying dependencies (IIRC upstart). But it seems to be overcome with creeping featurism and (perhaps only perceived) bundling with other projects and ideas (journald, udev).
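
    The daemon’s side of socket activation is also pleasantly small: the activator (systemd here, inetd in an earlier life) opens the listening socket itself and hands it to the service as an already-open file descriptor, so start-up ordering falls out for free. A rough sketch of the receiving end; the LISTEN_FDS/LISTEN_PID convention and the fd numbering from 3 are systemd’s, while the port and everything else is made up:

        import os
        import socket

        SD_LISTEN_FDS_START = 3   # first fd the activator passes to the service

        def listening_socket():
            """Use the socket handed to us by the activator if there is one,
            otherwise bind our own so the service also runs standalone."""
            if (os.environ.get("LISTEN_PID") == str(os.getpid())
                    and int(os.environ.get("LISTEN_FDS", "0")) >= 1):
                return socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                                     fileno=SD_LISTEN_FDS_START)
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.bind(("127.0.0.1", 8080))   # made-up port for the standalone case
            sock.listen(5)
            return sock

        srv = listening_socket()
        conn, addr = srv.accept()
        conn.sendall(b"hello from a (hypothetical) socket-activated service\n")
        conn.close()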

  94. So non-Unices with ancient roots lack lightweight process spawning by inertia from their younger days, and young non-Unices lack lightweight process spawning because their developers are convinced that their heavyweight process spawning mechanisms are better engineering than fork/exec.

    There is at least one non-Unix that had process spawning far more lightweight than almost any Unix: AmigaOS (and of course its open source cousin, AROS). But that was because there was no virtual memory or memory protection of any kind in AmigaOS, it having been designed to run on the original 68000 which lacked an MMU.

    I don’t know about processes, but BeOS is designed to make spawning new threads cheap and easy; multithreading is strongly encouraged in a BeOS environment.

    The only “new” non-Unix that I can think of in widespread use today is the Windows NT family, which I think inherited its heavyweight process spawning from its VMS heritage.

    1. >the Windows NT family … inherited its heavyweight process spawning from its VMS heritage.

      That is correct.

  95. What does it ignore that a GUI needs to do?

    Standard widgets, a standard look and feel, standard usability guidelines. Part of the reason to have a GUI is so all apps look and function exactly the same way. It’s the principle of least surprise.

  96. “I’d say this is unlikely. GCC’s architecture is ancient and crufty –in some ways deliberately obfuscated, as RMS doesn’t want proprietary compiler vendors co-opting the code”

    ” One important difference is that GCC can be forked if the maintainance group turns incompetent or evil.”

    Provided of course that someone understands the code, aka provided the code isn’t too obfuscated. If nobody can understand the code, your fork is just a fancy mirror site, as nobody can change the internals of the program in any meaningful way (short of rewriting everything). Aka, there is no difference between using a freeware program (with a license that obliges the vendor to give users free access to all future versions), and using a copyleft open source program but with the code obfuscated. No, portability to different architectures doesn’t count; there might be architecture-bound code in there.

    Which of course means the whole concept of copyleft is meaningless. Copyleft open source (and open source in general) still depends on the goodwill of the programmer (to provide readable code). You CANNOT assume open source == I can see how it works and change how it works (you can’t if the code is obfuscated).

    Stallman has a neat explanation over at fsf.org saying that obfuscated code is not considered source code, but he failed to define “obfuscated”. You can obfuscate all you want, and then still claim your code “is readable” and open source. In fact, Stallman (or the OSI) cannot possibly come up with a definition of obfuscated without including lots of GNU code.

    1. >Aka, there is no difference between using a freeware program (with a license that obliges the vendor to give users free access to all future versions), and using a copyleft open source program but with the code obfuscated.

      What drivel. There are many differences, beginning with the fact that source code can be ported while binaries in general cannot. Furthermore, de-obfuscation of obfuscated source code is much easier than decompilation. These differences matter a lot in practice.

  97. Spoken like somebody who never had to use VMS for actual production.

    It’s simply amazing how often you fall back on attacking the person when you have no rational rebuttal for the point.

    The fact is I worked on the COBE Science Data Room from around the mid-80s to the early 90s on VMS and I wrote a lot of DCL scripts and a lot of production code on VMS. Then I went on to writing spacecraft ground control software on AIX right after.

    The point remains that unix commands are unnecessarily cryptic and have inconsistent switches/options. Hell, this has been a known deficiency forever and is the second chapter in Unix Haters.

    http://www.mpp.mpg.de/~huber/vmsdoc/VMS-UNIX_CMD-EQUIVALENTS.HTML

    And it’s not even any more verbose since most commands could be shortened to 2-3 characters. Like “dir”, “del”, etc.

    There was no piping; everything had to be done through tempfiles (OpenVMS eventually fixed this by adding a PIPE constructor). But worse than that, the process-spawn overhead of VMS was immense.

    VMS didn’t do process clone + copy-on-write (fork) like Unix did because most of the work was done in long-running processes. It’s a design trade. If you needed subprocesses, typically you spawned them at startup and used IPCs to communicate with them.

    Lack of piping was more of an annoyance than a handicap, given you could make do with temp files and connecting those to SYS$INPUT. Most of these are kludge scenarios anyway, to solve an immediate (and typically small-in-scope) problem. Stringing a bunch of little programs together via a stream of bytes is a great concept for some things, but you’re not going to use it for important processes.

    IMHO the dev toolchain on VMS was often better in terms of compilers, debuggers, CM, build scripts, etc. for large projects.

    And for production systems, the finer-grained user controls via ACLs, and uptimes sometimes as high as a decade, made VMS a reasonably secure and very stable and robust system.


    Yes, VMS was so wonderful that, by around 1983, about 25% of VAX 11/750s were getting VMS stripped off of them in favor of Unix the second they came in the door. Later, the percentage increased. Many of the installations that stayed with VMS were running EUNICE, a replacement CLI that – even though it emulated the Unix command line only rather crudely – was generally felt to be far superior to the native CLI.

    NASA stayed with VMS for a long time… primarily because VAXClusters worked very well on the kinds of problems we had. Our data pipeline was a mid-sized cluster to handle the data rates we needed to process. It wasn’t deterministic but supported near-real-time applications better than the Unixes of that period.

    That Unix was more popular is indisputable. It doesn’t mean there aren’t a lot of crufty things about Unix where other systems were superior.

  98. @Nigel:
    >And it’s not even any more verbose since most commands could be shortened to 2-3 characters. Like “dir”, “del”, etc.

    I don’t know why you give those two as examples (rather than hypothetical abbreviations of “TYPE” (tp?) or “APPEND” (pnd?) to replace the infamous “cat”, or something of the like). The equivalent Unix commands (“LiSt” and “ReMove”) aren’t very cryptic (except for what comes naturally by reducing them to two characters).

    In fact, I’d say that “remove” was a much better command name than “delete” at the time Unix and VMS were being developed: They sound about equally suitable to me now in 2013, but I was raised with computers and the word “delete”. In the ’70s, someone my age (mid-20s) would not have been raised around computers, and, given how little I use “delete” outside of a computing context, the word would probably not have been part of their active vocabulary.

  99. @nigel
    I have coded on VMS for a few years. I was not very fond of it. Seems I was not alone. Whereas Unix has been recreated a few times, NT is the only attempt I know of at reconstructing VMS. And NT misses every feature that made VMS worthwhile.

    @jon I chose dir primarily because directory is long. The short form for a VMS command is however many characters are required to be unique. Type is probably TY, or maybe even just T if 4 characters is too long for you. The short form of any command can be derived from the long form easily. More importantly, the long form works and is English: search vs grep. Also, the switches are consistent across commands and less cryptic.

    Remove may or may not be more intuitive than Delete, but rm is not more intuitive than Delete, and you can’t use “remove”.

    This kind of stuff increases the friction in using the command line, and it could have been fixed but never was, to the detriment of Unix on the desktop for non-hackers.

    OS X fixes Unix by hiding all the stupid complexities, and it is the only successful Unix desktop (as measured by market share).

    Likewise NT clobbered Unix in terms of market share by hiding this cryptic stuff from users.

    All the stuff Jeff rails against in Linux for destroying the purity of Unix is the stuff shown to have been successful in addressing the needs of the desktop market. Even the strengths of Unix (like the CLI) could have been vastly improved, except that hackers like the arcane.

    Me, I like what Ubuntu is doing, but it’s not nearly as refined as OS X from the perspective of the total system. But that’s to be expected, since it’s a lot younger and they’ve had fewer resources than even NeXT had.

  101. @winter VMS had its strengths and weaknesses like every other operating system. Some of the ideas were very worthwhile.

    As far as using NT as a negative for the VMS kernel and architecture, all I can say is that if you count that, then VMS has dominated the desktop computing world for decades since the introduction of WinNT and will dominate the desktop world for at least another decade before mobile computing relegates desktops to the past.

    Not a bad legacy at all.

    I also don’t agree that NT misses every worthwhile feature because it captures and refines the worthwhile kernel features of VMS pretty well.

  102. @nigel
    “I also don’t agree that NT misses every worthwhile feature because it captures and refines the worthwhile kernel features of VMS pretty well.”

    You do not mean uptime statistics and the high-performance, database-friendly filesystem, I assume.

  103. @Jon Brase
    “@Nigel:
    >And it’s not even any more verbose since most commands could be shortened to 2-3 characters. Like “dir”, “del”, etc.

    I don’t know why you give those two as examples (rather than hypothetical abbreviations of “TYPE” (tp?) or “APPEND” (pnd?) to replace the infamous “cat”, or something of the like). The equivalent Unix commands (“LiSt” and “ReMove”) aren’t very cryptic (except for what comes naturally by reducing them to two characters). ”

    The problem is not the abbreviations, but the sheer number of commands, 1200.

    I consider that a strength, not a weakness.

  104. Hmmm… unique abbreviation vs. tab completion…

    Anyway, I wonder if the “unique abbreviation” idea (used not only by the VMS shell, but also e.g. by Mercurial) is a good one, and whether the fact that the uniqueness of an abbreviation may change as new commands are added is a real or imagined problem.
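
    The “real or imagined” part is easy to poke at: prefix resolution is a few lines, and you can see immediately how adding one command silently breaks an abbreviation that used to be unique. A throwaway sketch (command names invented for the example):

        def resolve(prefix, commands):
            """Return the unique command matching prefix; reject unknown or ambiguous."""
            matches = [c for c in commands if c.startswith(prefix)]
            if len(matches) == 1:
                return matches[0]
            if not matches:
                raise ValueError("unknown command: %r" % prefix)
            raise ValueError("ambiguous abbreviation %r: %s" % (prefix, ", ".join(matches)))

        cmds = ["status", "stop", "search"]
        print(resolve("se", cmds))    # -> search
        # resolve("st", cmds) raises: ambiguous (status, stop).
        # Add a new command "server" and "se" stops being unique too,
        # which is exactly the concern about abbreviations changing over time.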

  105. You do not mean uptime statistics and the high performance database friendly filesystem, I assume.

    Uptime for a desktop system is less important than for servers, but Windows Server does pretty well.

    NTFS has reasonable performance, and Windows servers tend to be departmental or workgroup servers (ignoring mail and Active Directory) using MS SQL. Would I prefer Unix for Oracle? Sure. Solaris + ZFS.

    What I meant was that Cutler was able to give Windows a modern kernel architecture that was multi-user and multiprocessor-capable and easily portable to different CPU architectures, based on lessons learned on VMS, and did it using very similar constructs.

  106. The Importance of Excel. Apropos our prior discussion.

    Winter, I think you were right. Where you see Excel being used in finance, there is certainly also a comedy of errors to be observed, brought about by people who want nice-looking numbers but don’t know WTF they’re doing:

    The new model “operated through a series of Excel spreadsheets, which had to be completed manually, by a process of copying and pasting data from one spreadsheet to another.”

    LOL! These guys don’t know how to data.

    Regardless of whether their programs or processes are shite, however, as the article points out the fact still remains that there is NO replacement for Excel in the financial industry, or really in business in general. It is a requirement.

  107. Hmmm… unique abbreviation vs. tab completion…

    There’s no cognitive tab completion for commands you can’t remember or don’t know…

  108. @Nigel:
    >OS X fixes Unix by hiding all the stupid complexities and it is the only successful unix desktop (as measured by market share).

    >Likewise NT clobbered unix in terms of marketshare by hiding this cryptic stuff from users.

    Problem is, OS’s don’t actually tend to clobber each other because of features. They tend to clobber each other because of network effects. Both NT and OS X inherited their market share from previous systems that their vendors had put out. (Said previous systems were crap and clobbered Unix because their vendors had been in the right place at the right time, not because of any actual virtue).

  109. Problem is, OS’s don’t actually tend to clobber each other because of features. They tend to clobber each other because of network effects. Both NT and OS X inherited their market share from previous systems that their vendors had put out. (Said previous systems were crap and clobbered Unix because their vendors had been in the right place at the right time, not because of any actual virtue).

    Those virtues were ease of use (via a GUI) and cost. From a user perspective both MacOS and Windows 3.1+ were far superior to Unix.

    Solaris x86 had a small window for success, but Sun priced it at $700 per copy, killing any desktop or small-server sales and giving WinNT the server entry point.

    Sun also neutered itself by dropping OpenSTEP and instead living with CDE/Motif and not developing a Java desktop, which, given their Metal LAF, probably would have looked like ass but worked better than CDE.

    That folks like you can’t seem to accept ease of use as a virtue is essentially why unix failed on the desktop until Apple.

  110. Mmm… is something still considered Unix without the GNU or BSD userland?

    Granted, the kernel itself has to have a lot of unixisms (process model, POSIX API, Unix IPCs, etc.) to support any Unix OS above it, but say MS built a version of Windows on top of a Unix or Unixy kernel (Linux, BSD, XNU, etc.): does it count?

  111. Mmm… is something still considered Unix without the GNU or BSD userland?

    Granted, the kernel itself has to have a lot of unixisms (process model, POSIX API, Unix IPCs, etc.) to support any Unix OS above it, but say MS built a version of Windows on top of a Unix or Unixy kernel (Linux, BSD, XNU, etc.): does it count?

    The first part is easy, you’ve already answered yourself. Yes. System V heritage… like AIX, or HP-UX or Solaris.

    Second part, ugh. Windows XT? (Xenix Technology)

  112. Second part, ugh. Windows XT? (Xenix Technology)

    It’s a plausible alternate history. Until OS/2 came along, Unix was pretty much considered the way that PC operating systems would evolve. Bill Gates himself was said to have supported Unix as the future of the desktop OS, and Microsoft promoted Xenix to that end.

    Then Microsoft partnered with IBM for OS/2, and being the official IBM solution, OS/2 was considered the future. The NT kernel was written for a hypothetical next-gen OS/2 to run on RISC PCs based on the i860; this never materialized, IBM and Microsoft had a falling out, and Microsoft focused on Windows, implementing it as an API layer on top of the NT kernel they developed for OS/2.

  113. No, I mean: is a Unix kernel sufficient for something to be Unix, or is the userland required as well?

    I wouldn’t think Windows over a Unix kernel would qualify.

  114. No, I mean: is a Unix kernel sufficient for something to be Unix, or is the userland required as well?

    From an end user or developer perspective, a Unix-like kernel is a necessary but not sufficient condition for having a Unix-like system. The POSIX API is itself provided by the libc, not the kernel (though it helps to have corresponding syscalls in the kernel), and of course there is the entire suite of command line tools, which even Apple provides.

  115. @Nigel:
    >Those virtues were ease of use (via a GUI) and cost. From a user perspective both MacOS and Windows 3.1+ were far superior to Unix.

    Except that Win 3.x itself inherited its market share from bare DOS, which sucked from a user perspective. If ease-of-use were the determining factor, Microsoft would not have survived long enough to release Win 3.1 and we’d all be using OS X.

    >That folks like you can’t seem to accept ease of use as a virtue is essentially why unix failed on the desktop until Apple.

    It’s certainly a virtue, and may be the number one virtue for you, but it’s not the number one virtue for most people (again, if it were, Microsoft would not have survived to release Windows 3).

  116. Jon,

    MacOS certainly was easier to use overall, which destroys one half of your assertion.

    DOS was easier from the standpoint of having less stuff in comparison to CP/M, and it was a lot cheaper. Windows took off only after 3.1 because the earlier variants sucked in comparison to just using DOS.

    Ease of use has been a determining factor in iOS and Android smartphones destroying Symbian, BlackBerry, Windows Phone and Linux phone share.

    That should be Unix phone share (excepting iOS), not Linux phone share, since yes, Android is using the Linux kernel.

    Tizen might be as successful as Bada. I don’t know if it has the Unix userland, though.

  118. It really depends on what you mean by “ease of use”. For an average person, Macs might have been easier for just getting what they want done. Average people weren’t getting into computers much at that point. For a hobbyist, MS-DOS was superior in certain ways, at least I found it so. It was easier for me to learn DOS than to learn my way around a Mac, and I could certainly fiddle with its innards more easily than I could on a Mac. Apple is a firm believer in “There is one right way to do things, which is our way.” I might be a little biased because I learned DOS first, but I never learned to feel comfortable in the old MacOS even when I had to use it a lot.

    >MacOS certainly was easier to use overall, which destroys one half of your assertion.

    No it doesn’t. MacOS took second place to DOS/Windows. If ease of use were dominant, Apple would have gained the whole microcomputer market.

    I don’t like the fact that the primary factor in determining the success of an OS is the vendor being at the right place at the right time any more than you do (though which factors I’d want to see be dominant are probably different than those you’d like to see), but it’s true whether we like it or not.

  120. So your contention is that OS success is primarily predicated on dumb luck?

    For the desktop, hell-to-the-yes; in fact, it probably largely hinged on whether the mother of the OS vendor sat on the board of the dominant business computer company of the time.

    DOS was easier from the standpoint of having less stuff in comparison to CP/M, and it was a lot cheaper. Windows took off only after 3.1 because the earlier variants sucked in comparison to just using DOS.

    Now you’ve gone full retard. Or, more likely, you’re just trolling. CP/M was about as minimal as operating systems could get; DOS had by far more “stuff”, especially in the later versions.

    As for Windows, it sucked compared to DOS in one, crucial area: games. Even during Windows 95’s heyday, “restarting in MS-DOS mode” was a frequent occurrence. DOS’s relevance only waned when there was a sufficient library of DirectX games for Windows.

  121. Now you’ve gone full retard. Or, more likely, you’re just trolling. CP/M was about as minimal as operating systems could get; DOS had by far more “stuff”, especially in the later versions.

    This is completely accurate, but given the age of DOS and especially CP/M, I’m willing to attribute his statement to failing memory.

    The comment about being able to fiddle with its “innards” I find to be off, though. DOS and classic Mac OS weren’t entirely different from each other in how they handled being an operating system and how applications were done: they both relegated most of the real operating-system stuff to firmware, leaving as minimal a footprint on disk as possible; being able to have the OS on a floppy disk almost mandated this decision. The programming interfaces were almost non-existent, and you pretty much wrote for the bare metal in your application code. Whichever application was in the forefront (the only application, in the case of DOS… disregarding weird hacks like TSRs) was in complete control; even on Mac OS, multitasking only happened whenever the active program felt like lending its precious CPU time to background processes (for some programs, this would be never).

    They were both fairly opaque to the user and you were pretty much entirely at the mercy of people that knew how to program the computers, if you weren’t inclined or smart enough to program them yourself.

    My recollection is that DOS was less capable than CP/M, aka less stuff. I was never a day-to-day user of CP/M, but my recollection is that DOS was less sophisticated. If you want to state that DOS was superior to CP/M, that’s fine with me, but I doubt it is true.

    As far as whose mom was on what board (United Way, not IBM)… Digital Research dropped the ball and Gates picked up the fumble. Is that luck? Sure, but you don’t get to the Super Bowl without the skills to exploit luck when it breaks in your favor.

    As far as Windows vs DOS, I think it’s fairly well accepted that Windows 1.x and 2.x sucked and it wasn’t until 3.1 that Windows took off. DOS was still a key part of the underpinnings, and it wasn’t until NT that you could say DOS’s relevance waned.

    My point is that Jon’s assertion that Windows and MacOS only beat Unix on the desktop because they inherited share is wrong. Both systems were far more user-friendly than command-line-driven Unix and far cheaper to boot.

    If inheriting share was important for consumer operating systems then Nokia and Blackberry would still be top of the heap.

    Simply providing a different opinion isn’t trolling unless you want to live in an echo chamber. Besides, why would I troll about DOS vs CP/M? Does anyone still care?

    My implication that unix is doomed to failure in consumer markets with the exception of OSX/iOS is probably more contentious except that thus far it appears true. Android isn’t unix using your criteria. Tizen might not be either if it is just TouchWiz on top of a Linux kernel and success might be as elusive as for Maemo/Meego/etc.

  123. As far as whose mom was on what board (United Way and not IBM)…Digital Research dropped the ball and Gates picked up the fumble. Is that luck? Sure, but you don’t get to the Super Bowl without the skills to exploit luck when it breaks in your favor.

    But it has nothing to do with the merits of DOS. DOS was a piece of shit and didn’t win any ease-of-use awards. The whole PC platform of the day was crufty, slow, and frustrating; its success was not really tied to its superior ease of use compared to Unix systems. (Unix was actually easier to use.)

    If we’re really honest with ourselves, ultimately, the PC platform won because it was more open. Its technically superior rivals such as Macintosh, Amiga, and Atari ST faltered or expired completely because their fates were too closely tied to one company and were subject to whatever management bungles happened at that company. But the PC platform allowed second-sourcing from innovators like Dell and Compaq, so even though IBM had had a few management blunders as well, they didn’t hurt the strength of the platform.

    So openness is also a factor in the long-term success of a platform. Which of Linux, Mac OS, or Windows is most open?

    You are right that DOS was cheaper than many of its rivals, which played a role in its success; which of Linux, Mac OS, or Windows is cheapest?

  124. Android isn’t unix using your criteria.

    The Bionic library provides a POSIX API, and there are ports of Busybox and many Unix apps to Android; you don’t even have to run rooted to get them if you install something like Terminal IDE.

    Android is, arguably, closer to a true Unix than unjailbroken iOS.

    unix is doomed to failure in consumer markets […]

    That should be “mobile markets”…aka post-PC or whatever buzz-word you prefer.

    Which is why I think it best that Linux fall back to its Unix roots and leave the “radical rethinking of userspace” stuff to Android. Android has delivered the shiny stuff in the way that a couple of decades of slavishly and poorly imitating Microsoft haven’t.

  125. So openness is also a factor in the long-term success of a platform. Which of Linux, Mac OS, or Windows is most open?

    I think it’s disingenuous to attribute the word open to Windows. Windows is available to OEMs to pre-install on computers, but it is still an opaque mess not to be understood by users (and only barely by developers). “Market availability” may not be as short a term, but it’s probably more accurate; Windows’s market share was won by the ruthless business practices of Microsoft, not any technical merits.

    (The rest of the post is going to be almost all speculation but I feel like ranting a little…)

    The 1990s were probably the darkest years when it comes to the aspirations of a kid trying to program their computer. In the 1980s, almost every computer came with some variant of BASIC. It’s not a pretty language by any stretch of the imagination, but it gave you a way to control the machine you had just bought, and even though BASIC will inflict its own scars, it still allowed people to control their computer. That changed with Windows, though; in the early 1990s, when Microsoft was able to push Windows 3 onto pre-installed PCs (largely through a “MultiMedia PC” campaign), it came with no programming environment at all. MS made Visual BASIC but charged for it; you had to pay extra for the right to control your computer. Windows 95 and 98 would still include QBASIC, but it was only a DOS interpreter, and you weren’t allowed to play with the big boys on Windows unless you paid for the privilege; for novices, Visual BASIC was the option marketed towards them. Windows was not open; it was an elite club, and poorer people need not apply. This is a platform for consumption, not production.

    It’s still largely like this, but at least Microsoft provides a watered-down version of Visual Studio, supporting C# and Visual BASIC, as a free download; that version is heavily restricted, especially in how you can distribute the binaries you compile.

    Linux and Mac OS X both provide a better avenue towards learning how to program: OS X includes Python by default and it’s also installed by default on almost every Linux distribution there is. On the Linux side of things, it’s sometimes required just because many package managers at least start out being written in Python; when aspiring programmers realize that their package manager is written in Python, an easy-to-get-into language, it also demonstrates Python’s strength as a real language that real, important programs are written in (it’s hard to get more important than the program whose fundamental job is installing other programs).

    It may be something that your “average user” would never do, but more clever and curious individuals that want to know how their system works will be digging in and trying to figure out just what makes it tick… maybe they’ll open a bunch of binary files in gedit, which look like garbage, but notice that a few of the files in /usr/bin aren’t just binary garbage, but start out with “#!/usr/bin/python” (or variants of the line) and contain fully readable text, not understandable at first, but it’ll pique their curiosity and show them an opening into how programs are made. Nothing like this exists in Windows-land in the normal installs. Sure, Python runs on Windows as well as it runs on OS X and Linux, but Windows itself isn’t providing any guidance or hints towards it being possible, nor does it provide avenues by itself to any kind of programming at all.
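
    (A toy illustration of that kind of digging, assuming a Unix-like system with a /usr/bin directory and a Python interpreter installed; this is just a sketch, not anything a distro ships:)

        import os

        BIN_DIR = "/usr/bin"  # assumed location; adjust for your distribution

        for name in sorted(os.listdir(BIN_DIR)):
            path = os.path.join(BIN_DIR, name)
            if not os.path.isfile(path):
                continue
            try:
                with open(path, "rb") as f:
                    first_line = f.readline(200)  # read at most the interpreter line
            except OSError:
                continue  # skip entries we aren't allowed to read
            if first_line.startswith(b"#!"):
                # Print the name and the shebang line; the rest of the file may be anything.
                print(name, first_line.decode("ascii", "replace").strip())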

    Windows is elitist through and through.
    Mac OS X is a system that wants to be open, but happens to be run by a control freak.
    Linux is a system that both wants to be open and fully embraces it.

  126. I think it’s disingenuous to attribute the word open to Windows.

    What I said was that the PC hardware platform was open.

    The 1990s were probably the darkest years when it comes to aspirations of a kid trying to program their computers. In the 1980s, almost every computer came with some variant of BASIC.

    In ROM no less, and for most it was the first thing that you saw when you started the machine. This communicated a profound axiom to the young minds of the day: that computers are intended to be programmed, just like pencils are intended to be written with.

    The 90s were not completely dark. It was possible to write a serviceable, but simple, RPG in Qbasic. A lot of kids from the Pokemon generation started off with Qbasic the same way we started off with Apple or Commodore BASIC.

    But Windows still has 90% market share, and these days it ships with bupkus in terms of programming tools. Well, there is still VBScript, but come on. Oh, and remember Palladium, when we were all afraid that Microsoft was going to make it hard to scrape off Windows and run Linux instead? Well, 3 words: UEFI Secure Boot. They’ve largely succeeded.

    All I’ve got to say is thank God for Raspberry Pi and related projects to put the fun back in hobbyist computing.

  127. @Mike Swanson:
    > maybe they’ll open a bunch of binary files in gedit, which look like garbage, but notice that a few of the files in /usr/bin aren’t just binary garbage, but start out in “#!/usr/bin/python”

    More likely to happen with vim. You usually don’t get garbage in gedit (which is actually one of the great *misfeatures* of an otherwise great editor). If gedit can’t detect the encoding of a file, it will refuse to open it and prompt you to manually select an encoding. If it judges that the file doesn’t match the encoding you’ve selected, it will present you with another prompt to select an encoding. If the file isn’t semi-valid* text in one of the encodings gedit knows how to deal with, you don’t have a chance at opening it.

    *It will accept files with a few characters that don’t match the autodetected or user-selected encoding, but I’m not sure what the conditions are for rejecting a file.
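
    (A minimal sketch of the refuse-unless-decodable behavior described above; this is not gedit’s actual logic, and the candidate encodings are assumptions:)

        # Minimal sketch of "refuse to open unless the bytes decode cleanly".
        # Not gedit's code; the candidate encodings are assumptions, and
        # latin-1 is deliberately omitted because it never fails to decode.
        CANDIDATES = ["utf-8", "utf-16"]

        def try_open_text(path, encodings=CANDIDATES):
            with open(path, "rb") as f:
                data = f.read()
            for enc in encodings:
                try:
                    return data.decode(enc)   # opened as text in this encoding
                except UnicodeDecodeError:
                    continue
            return None                       # no candidate fits: prompt the user instead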

  128. But it has nothing to do with the merits of DOS.

    You are the one who brought up someone’s mom being on a board as the more critical factor.

    DOS was a piece of shit and didn’t win any ease-of-use awards. The whole PC platform of the day was crufty, slow, and frustrating; its success was not really tied to its superior ease of use compared to Unix systems. (Unix was actually easier to use.)

    If we’re really honest with ourselves, ultimately, the PC platform won because it was more open.

    Being open did not help Linux on the desktop. Nor did being zero cost.

    If we’re really honest with ourselves, ultimately, the PC platform won because Microsoft played their hand very well and offered USERS a usable, very high capability ecosystem at a very low cost.

    So openness is also a factor in the long-term success of a platform. Which of Linux, Mac OS, or Windows is most open?

    You are right that DOS was cheaper than many of its rivals, which played a role in its success; which of Linux, Mac OS, or Windows is cheapest?

    Windows won the desktop market. If you think the answer is supposed to be Linux then obviously you need to pick different criteria to generate the response you are looking for.

    The Android mindset is very different IMHO from the GNU/Linux one. The primary goal is to offer average users a usable, very high capability system at a very low cost.

    Free or Open is very much secondary to usefulness and commerce. Not even Ubuntu is as unencumbered.

  129. Linux and Mac OS X both provide a better avenue towards learning how to program: OS X includes Python by default and it’s also installed by default on almost every Linux distribution there is.

    From my experience in the STEM world I would argue that more kids have learned to code via Lego Mindstorms than Python. And that more kids have learned to code in Lua for game modding than have ever heard of Python. More kids have learned to code C# for XNA than have learned from Python.

    It may be something that your “average user” would never do, but more clever and curious individuals that want to know how their system works will be digging in and trying to figure out just what makes it tick…

    First, there’s nothing to stop clever and curious kids from coding on Windows.
    Second, the ability to dig in is very much secondary to what most folks want out of their computer. The primary desire is to do the work or play the games that the user wants or needs to. Here Linux fails hard because its ease of use as a solution sucks. Fixing the ecosystem so it doesn’t suck in comparison with Windows or OSX is not something Ubuntu or even Valve is likely to be able to accomplish.

    This is what Android was successful at and Linux wasn’t. They dropped ideology in favor of utility.

    Or, more accurately, their ideology was that whatever was best for Google was the right answer…

  130. @Jeff Read
    (Unix was actually easier to use.)

    Mileage varies. I used VMS, DOS and Unix back in the late 80s and early 90s, and my personal experience was that Unix was much worse from a user-friendliness perspective than either VMS or DOS. Not that any cli is good at user-friendliness, from my non-hacker’s POV, but the usual unix shells were bad even for a cli.

    Now maybe Unix and the various unix shells were/are more hacker-friendly than either DOS or the VMS DCL, but I’m not a hacker. Nor are most users.

  131. Now maybe Unix and the various unix shells were/are more hacker-friendly than either DOS or the VMS DCL, but I’m not a hacker. Nor are most users.

    Most users at my university weren’t hackers, either. But they still migrated to the Unix machine, to the detriment of the VAXen.

    It’s like that whole bit about if life in Cuba really is better, why are all the boats going the other way?

    1. >It’s like that whole bit about if life in Cuba really is better, why are all the boats going the other way?

      And yet you still talk nonsense about socialists winning arguments and how wonderful government-run healthcare is.

      It’s comments like the above that occasionally make me suspect that you’re executing a long-form parody of batshit-crazy leftism as opposed to actually being one.

  132. The question of when a hacker should use closed-source is less interesting to me (as a hacker who wants to earn a profit as an entrepreneur) than when I should write closed-source. I originated in the closed-source world and over my career realized the benefits of open-source. Most of the end-user population (if the decision is not made for them by a corporation) does not even consider the question when they select an application. So my decision is influenced by whether the benefits would outweigh the cost, and by whether I need, and could attain, worthwhile scale (inertia) of third-party contribution to the source code.

    Thus I assert that the following claim from The Magic Cauldron is incorrect in many cases, “No software consumer will rationally choose to lock itself into a supplier-controlled monopoly by becoming dependent on closed source if any open-source alternative of acceptable quality is available.”. There is a somewhat opaque admission that follows, “…asymmetric information makes markets work poorly. Higher-quality goods get driven out when it’s more lucrative to collect rent on privileged information…”.

    There follows another statement which is sometimes incorrect, “we can expect that open source has a high payoff where (a) reliability/stability/scalability are critical, and (b) correctness of design and implementation is not readily verified by means other than independent peer review”.

    My Ice Cream Sandwich (Android 4) phone destroys hours per day of my time, and the slowness appears to be fundamentally big O algorithmic because it started before the screen input problem.

    Another example is the years Mozilla has dragged its feet ostensibly due to ego on a very unpopular bug “fix” that fundamentally broke it.

    The profit motive of collecting rents on secret bits can cause a company to be more attuned to customer complaints, with iPhone as a prime example. The users apparently love it, and it may not all be “reality distortion field” marketing and wanna-be status effects.

    Due to competition, and if it is important to customers, even closed-source software would be required by the market to provide sufficient interoperable data export ability and to address aspects of vendor lock-in other than data jails.

    I have been thinking about how software programming could be organized such that broad portions could be open-sourced while the key market-value portions remain closed. So the benefits of open-source could be mixed with the benefits of closed-source where applicable. This model could be named “Free the Ubiquitous, Sell the Unique”. The advantage would be that in theory more unique variants could be created from a common open-source base. Thus, I ponder whether the following statement could end up being false in many cases, “…you can choose either an open-source ubiquity play or a direct-revenue-from-closed-source play—but not both”.

    I don’t know if someone already stated this, as I did not have time to read the prior comments.

  133. @JustSaying:

    > broad portions could be open-sourced and the key market value portions remain closed

    What you are talking about is referred to, usually derisively, as “open core.” Personally, I have no problem with it; the derision is because the software this works best with is GPLed, and the crowd that thinks there is a moral issue with proprietary software naturally gravitates towards the copyleft and away from the permissive.

    When I give stuff away, I use a permissive license, but I actually have an open-source business idea, and that will be a variant on open core, and possibly use the GPL, because it’s a much better weapon for avoiding free-riding competitors.

    > “Free the Ubiquitous, Sell the Unique”

    AKA understanding your value-add and your core competencies, and engaging in serious coopetition. It makes sense on a lot of levels, and is one of the reasons that the apache webserver became so popular. The people at the top didn’t understand that the people at the bottom were cooperating with the enemy to make the web work better, and at the end of the day, that was a good thing.

  134. Perhaps some readers might be interested in the debate/discussion about the merits of open source and iOS versus Android, with my former boss, who was briefly (over a couple-year recruiting courtship) on a home-phone-number basis with Jobs and now works on the screen and UI technology for Apple.

    I respect that Apple is playing an important role in driving computing forward. I think Apple is repeating the critical error of the 1990s that they did not open up enough to compete for global market share. I do remember Apple did briefly allow some third parties to make some systems or partial systems. Was it Radius? I forget most of the details, but I doubt they dove in seriously enough with that market test.

  135. @Patrick
    “What you are talking about is referred to, usually derisively, as “open core.””

    I find that term very confusing. It mixes re-licensing other people’s code (copyright assignment), honey traps (lock-in to get you onto the proprietary “Enterprise” version), and a separation between code and content (eg, PostgreSQL).

    It has been said many times, but the “economic value” of code is in its future maintenance.

    When code is orphaned, it becomes almost immediately worthless. So I think a much better separation is between credible Free/Open/Community maintenance and development at one end and essentially private code ownership at the other end. The license is of secondary importance.

  136. At the blog linked from my prior comment, I wrote the following. I hope this is significant enough and close enough to the open vs. closed source topic, to justify posting here.

    Motivated by the HTML5 vs. native issue, here is one possible threat scenario enabled by giving up 4/5ths market share. Someone (me?) could develop an Android plugin that offered a more complete GUI API for apps than the DOM. Initially this could be a platform for developing native apps on Android. Thus there probably wouldn’t be any resistance to adoption if the apps are popular.

    As this gained scale, this plugin would become a defacto API for web apps. Apple could be forced to adopt it, else lose more market share.

    Thus, giving up 4/5ths market share means Apple is potentially losing control over native apps in the near future. So Apple might as well do it now with a two-tiered approach and retain market share.

  137. @Winter:

    I find that term very confusing. It mixes re-licensing other people’s code (copyright assignment), honey traps (lock-in to get you onto the proprietary “Enterprise” version), and a separation between code and content (eg, PostgreSQL).

    It doesn’t necessarily mix any of those things, despite what you may read on the wikipedia page.

    If I release software under the GPL, I don’t have to accept any code contributions, which means I don’t require copyright assignment. If I maintain my package well enough and am responsive enough to the customers, I won’t create a technical incentive for a fork[*] — most programmers are happy enough when reported bugs and feature requests automagically result in new, better working code.

    At this point, I am set up nicely for dual licensing. This brings in revenue from people who want to use my code in another proprietary program.

    An additional revenue source could be an add-on program for a small vertical market. Since I am the copyright owner, I don’t have to license the add-on under the GPL, and can keep it secret.

    So, my GPL version doesn’t lock in anybody any worse than any GPL code, my proprietary add-on doesn’t lock in anybody any worse than any proprietary alternatives (and is arguably much better, since the amount of code they need to replace if I become unreasonable to deal with is probably smaller than a fully custom solution, since the main program is already GPLed), and there is no separation between code and content — I’m just licensing code.

    Additionally, for a price, I can give customers a freedom you don’t get with the GPL — the freedom to leverage the use of third-party libraries while still keeping the customers’ own code secret.

    [*] If the code is important enough there may be an economic reason for a fork. See PySide, LLVM, etc.

  138. @Patrick
    Ah apparently other programming languages are allowed in Apple’s app store if the app bundles all of the language’s runtimes.

    Apparently Kiva is not a browser plugin. My idea was a browser plugin that exposed more APIs to JavaScript (or other programming language that might be supported in the browser). This would bypass any central authority on apps, because the apps would be accessed on demand by the users over the web. Think of Flash in the browser, which Apple had to kill to protect their central authority on app approvals.

    In short, I think the distinction between a native and web app will disappear. On first access to a web app, the user will acknowledge the required security permissions for the app.

  139. @JustSaying:

    In short, I think the distinction between a native and web app will disappear. On first access to a web app, the user will acknowledge the required security permissions for the app.

    I’ve been thinking/hoping that for years. We are a lot closer with the new local storage options in HTML 5. I’d still like a very simple way to get to a flat file, but whatever.

    In terms of code, javascript is the new assembly language. There are several ways to get to javascript, from Java, Python, etc.
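
    (Just to make the “compile to JavaScript” idea concrete, here is a toy sketch; it is nothing like a real compiler such as GWT or Pyjamas, and it handles only one trivial subset of Python, chosen purely for illustration:)

        import ast

        def to_js(node):
            # Translate a tiny subset of Python's AST into JavaScript text.
            if isinstance(node, ast.Module):
                return "\n".join(to_js(stmt) for stmt in node.body)
            if isinstance(node, ast.FunctionDef):
                args = ", ".join(a.arg for a in node.args.args)
                body = "\n".join("  " + to_js(stmt) for stmt in node.body)
                return "function %s(%s) {\n%s\n}" % (node.name, args, body)
            if isinstance(node, ast.Return):
                return "return %s;" % to_js(node.value)
            if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
                return "(%s + %s)" % (to_js(node.left), to_js(node.right))
            if isinstance(node, ast.Name):
                return node.id
            if isinstance(node, ast.Constant):
                return repr(node.value)
            raise NotImplementedError(type(node).__name__)

        print(to_js(ast.parse("def add(a, b):\n    return a + b\n")))
        # function add(a, b) {
        #   return (a + b);
        # }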

  140. @Patrick Maupin
    > javascript is the new assembly language

    For now, there are some languages (e.g.) which only compile to or compile more faithfully to the JVM.

    Android ships with a JVM. JVM has roughly from 2 to 5 times faster performance (thus less battery drain) on large, long-run apps. JavaScript is faster for small, short-run scripts where startup time and JIT iterations are a factor. Java’s HotSpot JIT takes more time (than Chrome’s V8) to reach maximum performance.

    > thinking/hoping that for years …HTML 5.

    HTML5 is a top-down (cathedral) model of design feedback.

    Best engineering practice is that a complex platform should be built incrementally with feedback from actual use in the real market place and competition between offerings from different designers, so the solution can anneal faster and with better fitness.

  141. @JustSaying:

    For now, there are some languages (e.g.) which only compile to or compile more faithfully to the JVM.

    No doubt. But I think that is changing quickly.

    Android ships with a JVM.

    Yeah, but AFAIK you can’t get to it from a web application. And AFAIK, this is on purpose and unlikely to change. Google is aiming for HTML5 and JavaScript all the way, which given their business model, makes sense. It’s only the cherry on the icing that this strategy might help to salt the earth on Oracle’s Java.

    JVM has roughly from 2 to 5 times faster performance (thus less battery drain) on large, long-run apps.

    Even if this is and continues to be true, that’s a smallish segment of the market.

    thinking/hoping that for years …HTML 5.

    HTML5 is a top-down (cathedral) model of design feedback.

    Hey! Careful with your quotes! You make it look like I was hoping for HTML5 for years :-)

    But committees are not all bad. I wouldn’t touch C with a barge pole pre-ANSI. After it borrowed enough stuff from other languages to make it less painful to work in (and the compilers got good enough at warning you about probable stupidities), it was OK. Pity it became good enough to completely kill Modula-2, though.

    Best engineering practice is that a complex platform should be built incrementally with feedback from actual use in the real market place and competition between offerings from different designers, so the solution can anneal faster and with better fitness.

    At the start. At some point, you have legacy considerations, especially when part of the complexity of the platform is the interoperation of hundreds or thousands of independent implementations. The IETF RFC process is a nice compromise. And even HTML — you can’t argue that it wasn’t built incrementally.

    I will argue that the main problem with HTML (even now) is that it has been engineered too much by web-centric companies. They view the browser as an ever more capable terminal. They even work hard to make it more capable. But this viewpoint is at odds with my viewpoint (and perhaps yours) that the browser could be the platform, and the web merely another resource. But things are changing in this direction, and Google and Mozilla seem to be leading the way.

  142. @Patrick Maupin
    I suppose the following is a rant if I don’t build.

    AFAIK, C and HTML both were standardized after private competition in the wild had enumerated the design issues. HTML5 may be attempting to put the cart before the horse, although one can argue that Flash and Silverlight are the real world test cases. The distinction perhaps is that standardization of C and HTML required minimal changes to the grammar. HTML5 is significantly different than Flash and Silverlight.

    CSS is an example of a W3C standards-first success, but it is accompanied by numerous stillborn market failures such as XPath, XEvents, etc. CSS was drawing from the real-world popularity of style sheets in word processors and desktop publishing.

    Perhaps Google and Apple don’t want the web browser to become a native-capable application platform (or who is it that wants the browser to be only a WYSIWYG terminal?). But it is more important what users want. Thus the importance of testing in the market place before standardization.

    I am thinking that Flash demonstrated there is a demand for the browser to be a general application platform. One could argue that Flash pages were mostly just interactive animations, but I suspect this was due to the difficulty of and proprietary tools for programming Flash (especially the earlier versions of ActionScript and the API), market preoccupation with mastering LAMP+DHTML first, the prerequisite to license Adobe’s server for some features, and preceding the network effects of native apps and the acceleration of cloud computing.

    HTML’s two killer features are the hyperlink and write-once, run-everywhere. A full-featured native cross-platform toolkit is almost a hypothetical application browser, except for its lack of integration with the hyperlink and HTML.

    One can argue that the third killer feature of HTML is the sandbox, so users don’t have to worry about malware every time they click a hyperlink. For those links that would lead to a more permissive sandbox variant, the user only needs to be prompted on the first visit. The incentive remains to use the existing sandbox in your webapp/webpage, so the user isn’t prompted. I disagree with the argument that users can’t be trusted to make security decisions: they do every time they install from an app store. Deciding on security happens once for each app, whereas installing and downloading a native app happens every time one buys a new device. I assert we don’t need to conflate the two by forcing the tsuris of not allowing apps to run from a hyperlink. We don’t need yet another proprietary means of maintaining the apps on a device; we already have the on-demand model of the hyperlink and browser cache.

    For shared computers, e.g. in a public netcafe or library, the machine can be configured to reboot to an unaltered state (e.g. DeepFreeze on Windows) after each user. The admin could optimize the disk image to include pre-downloaded web app caches for popular, approved apps.

    My understanding is that JavaScript will underperform for many long-running native applications, but that is tested only by a few benchmarks. Oracle doesn’t own all the JVMs; perhaps we could deprecate the rest of Java.

  143. And prompting users once about security sandbox variants only needs to happen when the web page accesses an API that requires it. Thus the web page could be coded in such a way that it displays and interacts with the user some before requesting security permissions.

  144. Ken Burnside: Indeed, when Open Source attempts to be clever with UI design, there’s a backlash (Ubuntu Unity interface, anyone?)

    “Clever” is not necessarily equal to “good”.

    I think the bigger problem with stuff like Unity and Gnome 3 and whatnot is the lack of choice. “Clever and a radical departure” are a lot more likely to receive positive reviews if they are adopted voluntarily by the people who want them, who will then talk them up to others and generate mostly good press, rather than being dumped on an entire major distro’s userbase by executive fiat. That way lies… well, the enormous avalanche of hits for “Gnome 3 sucks”.

    My own particular tidbit for this discussion is 3D CAD/CAM/CAE software. Solidworks is the gold standard for this type of product. Sadly, even the cheapest version costs $4,000, and is definitely closed source. And it only runs on Windows. But it’s way more than 40% better than the alternatives, at least any that I’ve discovered so far.

  145. My own particular tidbit for this discussion is 3D CAD/CAM/CAE software. Solidworks is the gold standard for this type of product. Sadly, even the cheapest version costs $4,000, and is definitely closed source. And it only runs on Windows. But it’s way more than 40% better than the alternatives, at least any that I’ve discovered so far.

    The best open source solution is BRL-CAD; which also takes the prize for world’s oldest extant public version control repository. But near as I can tell, the difference between BRL-CAD and Solidworks is like the difference between LaTeX and FrameMaker or Quark. In the one you instruct the computer how you want things done; in the other you get things done.

  146. @Jeff Read
    ” But near as I can tell, the difference between BRL-CAD and Solidworks is like the difference between LaTeX and FrameMaker or Quark. In the one you instruct the computer how you want things done; in the other you get things done.”

    It seems to me you have no idea how LaTeX is used. If there is any suite that “just works”, it is typesetting in LaTeX. However, it is targeted at people who know what typesetting and layout actually mean.

  147. Winter,

    I’ve done most of my “word processing” type tasks in LaTeX for some time now. It definitely is a techie’s tool — no WYSIWYG, no instant feedback. BRL-CAD appears to provide more instantaneous feedback, but is still based on a paradigm of typing commands to the computer rather than direct manipulation of objects. If you want your tool to be used by non-techies, it has to be graphical and it has to support some sort of direct-manipulation metaphor, at a bare minimum.

  148. @me:
    @Patrick Maupin:
    Unhosted.org seems to be pushing much of the web app conceptual platform I enumerated up thread.

  149. I think the bigger problem with stuff like Unity and Gnome 3 and whatnot is the lack of choice.

    Lack of choice is a feature. This is what Linux-heads don’t understand: the correct way to design an end-user-oriented system that Just Works and isn’t a hassle to configure and use is to put all of your eggs in one very good basket, and ideally have a Steve Jobs to pick the basket.

    The problem with GNOME 3 and Unity is simply that they suck. They’re UIs designed for the shitty and largely discredited netbook form factor.

    Linux on the desktop is fucking hosed. Windows gets the UX more right than Linux, is stable, and is virtually guaran-fucking-teed to work with hardware available at retail; there is no reason left — other than pure bloodyminded neckbeardery — to ever deinstall Windows from an end-user-facing PC. Even if the open-source crowd could get a toe in, they wouldn’t be able to compete: too unfocused, not enough resources are being committed to critical tasks; and just plain too many fucking cooks forking and branching and fragmenting.

    If you ask me, Linux should resign itself to remaining a niche OS with a shitty UX. Then, at least, its developers could play to its strengths and create a rock-solid OS where it’s easy to understand what’s going on.

  150. Usually such posts are expected and come from Apple fanboys, but it is admittedly quite amusing to see it applied to Windows. Especially now that Windows 8 completely uproots all UI expectations and even backwards compatibility with older applications, I can’t help but laugh. :)

  151. Mike Swanson,

    Don’t get me wrong. I fucking hate Windows 8, and that Start Screen is a hallmark of UX fail.

    But even still, it’s infinitely better than the ugly, busted, fragmented clusterfuck that is the Linux desktop. Nothing ever just works right on Linux; there are always a few tweaks, config file hand-edits, and kernel recompiles that you have to apply first. (Don’t tell me you don’t have to recompile your kernel in today’s modern world. I just bought a new laptop, and in order to support its digitizer pen part, yes, I had to hand-patch and recompile the kernel. A single counterexample refutes a universal assertion.)

    Windows — even Windows 8 — is objectively a better operating system than Linux for most people’s day-to-day, and will remain so until the Linux community of squabbling children gets its shit together and delivers a PLATFORM.

    Oh, and by the way, most of the developers I know have switched to Mac. An increasing number of the rest leave Windows on as their primary OS and run Linux either as a VM or in the cloud. It’s simply less hassle not to run Linux on the desktop.

  152. Mikko, re: Wayland:

    I don’t actually think so at all, but never mind, this is all pretty far off-topic.

    If I were looking to bury X, what I’d come up with would look an awful lot like Plan 9’s rio(1). What we got instead was PulseVideo.

    If I seem self-contradictory there’s a reason for it. I prefer Linux to follow the Unix Way. If certain distro packagers want to kitbash it into a clone of Mac OS X or Windows, that’s fine; but they should make a clean break with their own SDK, as Google did, and not try to taint the Linux ecosystem by declaring the battle-tested Unix model old and busted, and agitating for its replacement by whatever new hotness they’ve released this week. GNU/Linux is too much a Unix to ever make an adequate Windows, and attempting to turn it into the latter is a) doomed to failure and b) likely to piss off people like me who strongly prefer Unix.

  153. Many of my fears about Wayland have been (heh) waylaid. It just gained a remoting-capable compositor as part of its core.

    Incidentally, this compositor uses Microsoft’s RDP as its transport protocol. Say what you will about Windows 8; Microsoft have succeeded in building a better X11 than X11.

    Now if only we could get the Linux community to abandon the mess that is OpenGL and standardize on Direct3D as a 3D hardware API…
