Comes the news that Nvidia just lost an order for 10 million graphics cards to AMD because it wouldn’t open the source for its driver. At a very conservative estimate, that’s north of $250 million in business Nvidia just threw to a major competitor because it couldn’t get its head out of its rectum. Somebody’s quarterlies are going to suck.
The really interesting aspect of this isn’t the amount of money Nvidia’s idiotic secrecy fetish just cost it, but why it happened – and why it’s likely to happen again, soon and repeatedly, to other hardware companies with equally idiotic secrecy fetishes.
In response to a bug report that proved relatively easy to fix, I’ve just shipped release 2.8 of doclifter, a program that takes troff-based document markups – including man page markup – and lifts them to DocBook XML.
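To give the flavor of the lifting involved, here is a simplified before-and-after sketch; the troff side is ordinary man-macro markup, and the DocBook side is abbreviated from the richer structure doclifter actually emits:

    .SH NAME
    foo \- frobnicate the barbaz

becomes something like

    <refnamediv>
      <refname>foo</refname>
      <refpurpose>frobnicate the barbaz</refpurpose>
    </refnamediv>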
Yes, two software releases in a day is an unusually rapid tempo even for me. But freecode-submit is part of my release machinery for other projects, and when I shipped GIFLIB 5.0.0 I discovered it had gone all pear-shaped on me. The problem turned out to be an unannounced change in freecode’s JSON interface. I hate it when that happens…
I’ve just shipped the 5.0.0 release of GIFLIB, a graphics service library that is deployed pretty much everywhere that throws pixels on a display. Older versions live in your browser, your game console, and your smartphone. I have previously written about what it was like to go back to this code after 18 years, in The Long Past of C and in my 4.2.0 release announcement.
Some people are obsessive about never using closed-source software under any circumstances. Some other people think that because I’m the person who wrote the foundational theory of open source I ought to be one of those obsessives myself, and become puzzled and hostile when I demur that I’m not a fanatic. Sometimes such people will continue by trying to trap me in nutty false dichotomies (like this guy) and become confused when I refuse to play.
A common failure mode in human reasoning is to become too attached to theory, to the point where we begin ignoring the reality it was intended to describe. The way this manifests in ethical and moral reasoning is that we tend to forget why we make rules – to avoid harmful consequences. Instead, we tend to become fixated on the rules and the language of the rules, and end up fulfilling Santayana’s definition of a fanatic: one who redoubles his efforts after he has forgotten his aim.
When asking the question “When is it wrong (or right) to use closed-source software?”, we should treat it the same way we treat every other ethical question. First, by being very clear about what harmful consequences we wish to avoid; second, by reasoning from the avoidance of harm to a rule that is minimal and restricts people’s choices as little as possible.
In the remainder of this essay I will develop a theory of the harm from closed source, then consider what ethical rules that theory implies.
I’ve now read Judge Alsup’s ruling in the Oracle vs. Google lawsuit addressing the copyrightability of the Java APIs as a matter of law. This is a bigger win for the good guys than appears at first glance; Alsup has subtly but definitely driven a stake through the heart of API copyrights. The interesting part is how he did it.
To the surprise of nobody who was actually familiar with the underlying law and precedent, the judge in the Oracle-vs.-Google mega-lawsuit ruled today that Oracle’s claim of copyright protection on the Java APIs is contrary to law.
This means Oracle’s claims against Google are toast. Their best case is now that they’ll get $300K in statutory damages for two technical copyright violations, almost noise compared to what Oracle spent in legal fees. The patent claims went just as thoroughly nowhere as I predicted back when the lawsuit was launched.
It’s all over the net today. As I repeatedly predicted, the patent claims in the Oracle-vs.-Google lawsuit over Android have completely fizzled. Oracle’s only shred of hope at this point is that Judge Alsup will rule that APIs can be copyrighted, and given the extent of cluefulness Alsup has displayed (he mentioned in court having done some programming himself) this seems rather unlikely.
First giflib release since I reassumed the lead. Short version: lots of useless old cruft thrown out, everything Coverity-scanned, one minor resource leak found and fixed.
My regular readers will know that (a) I’ve recently been pounding bugs out of GPSD with Coverity, and (b) I hate doing stupid clicky-dances on websites when I think I ought to be able to shove them a programmatically-generated job card that tells them what to do.
So, here’s a side-effect of my recent work with Coverity: coverity-submit. Set up a config file once, and afterwards just run coverity-submit in your project directory and stand back. Supports multiple projects. Because manularity is evil.
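By way of illustration, the setup looks something like this; the section and key names here are schematic rather than the tool’s exact schema, so consult the documentation linked below for the real thing:

    # ~/.config/coverity-submit: one section per project (names schematic)
    [gpsd]
    name = gpsd
    userid = esr
    build = scons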
Here’s the HTML documentation.
I’ve been pounding on GPSD with the Coverity static analyzer’s self-build procedure for several days. It is my great pleasure to report that we have just reached zero defect reports in 72.8KLOC. Coverity says this code is clean. And because I think this should be an example unto others, I shall explain how others can do likewise.
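For the impatient, the heart of the procedure is Coverity Scan’s standard self-build dance; roughly this, with your own build command substituted for GPSD’s scons invocation:

    # Wrap the normal build so Coverity's tools can observe every compile
    cov-build --dir cov-int scons
    # Package the intermediate directory, then submit it through the Scan site
    tar czf gpsd.tgz cov-int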
I got emailed summaries from a Coverity scan of the repo head version of GPSD today.
In a recent Google+ comment, H. Peter Anvin grumped about GPSD using “braindead heuristics” to determine which USB devices it should sniff as possible GPSes when it gets a hotplug notification saying that one has connected. I was going to reply in a comment there, but the explanation ran too long for that.
Short version: yes, GPSD will very occasionally sniff at a device that is none of its business. We’re stuck in a bad place because of deficiencies in the USB standard, but it doesn’t happen often, and all the alternative behaviors I’ve been able to imagine would be worse in very obvious ways. Detailed explanation follows.
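To make “braindead heuristics” concrete before diving in: GPSes almost never identify themselves as GPSes on the USB bus. What the hotplug logic actually sees is the generic USB-to-serial bridge chip inside the GPS mouse, so the best available test is a whitelist of bridge chips commonly used in GPS hardware. Schematically (a simplified Python sketch, not the shipping code):

    # Whitelist of USB-to-serial bridge chips commonly found in GPS mice.
    KNOWN_BRIDGES = {
        (0x067B, 0x2303),  # Prolific PL2303
        (0x0403, 0x6001),  # FTDI FT232
        (0x10C4, 0xEA60),  # Silicon Labs CP210x
    }

    def worth_sniffing(vendor_id, product_id):
        "Is this hotplugged device a plausible GPS candidate?"
        # A match means "could be a GPS", never "is a GPS"; the bridge
        # chip tells us nothing about what is wired behind it. That gap
        # is exactly why the occasional false sniff is unavoidable.
        return (vendor_id, product_id) in KNOWN_BRIDGES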
Hacking on the C code of giflib after an absence of nearly two decades has been an interesting experience, a little like doing an archeological dig. And not one that could readily be had elsewhere; nowhere other than under Unix is code that old still genuinely useful under any but carefully sandboxed conditions. Our reward for getting enough things about API design right the first time, one might rightly say. But what’s most interesting is what has changed, and giflib provides almost ideal conditions for noticing the changes in practice that have become second nature to me while that code has stood still.
In 1994 I handed off the maintainership of giflib – the open-source library for the single most widely used icon and image format on the World Wide Web, used by pretty much everything in the universe that displays images – because patent issues made it unwise for the project to be run by someone in the U.S. Now, eighteen years later, Toshio Kuratomi (the hacker who took it over then) has asked me to resume the lead. I have accepted his request.
I guess it’s paleo-game theme week. For your retrocomputing pleasure, here’s my Python forward-port of the 1973 University of Texas FORTRAN Trek game: Super Star Trek.
Anybody old enough to remember TTYs probably played this on one. While it has accreted some features over time, it’s still functionally pretty close to the original FORTRAN Star Trek. You kids should ~~get off my lawn~~ try it, too – it retains considerable play value despite the primitive interface.
Recent discussion of the 4X game Eclipse reminded me of a responsibility. I’ve just shipped VMS Empire 1.9. This is a close descendant of the original solitaire Empire computer game that was the ur-ancestor of all 4X computer games, including Civilization and Master of Orion.
Five weeks ago I wrote that direct Subversion support in reposurgeon is coming soon. I’m waiting on one final acceptance test before I ship an official 2.0; in the meantime, for those of you kinky enough to find the details exciting, description follows of why this feature has required such a protracted and epic struggle. With (perhaps entertaining) rambling through the ontology of version control systems, and at least one lesson about good software engineering practice.
An interesting question showed up in my mailbox today. So interesting that I think it’s worth a public answer and discussion:
In chapter 7 of The Art of Unix Programming, you classified threads under the section “Problems and Methods to Avoid”. You also wrote that with the increased emphasis on thread-local storage, threads are looking more like a controlled use of shared memory. This trend has certainly continued; recent programming languages like D, Scala, and Go encourage the use of threads as mostly isolated lightweight processes with message passing. Observing this trend, I have often wondered, why not go all the way and use multiple OS processes? I can think of two reasons to use threads in this newer, controlled way rather than using full processes:
1. Portability to Windows, which doesn’t have an equivalent of fork(2)
2. Performance, particularly because message passing between real processes requires serialization and deserialization, whereas message passing within a process can be done with shared memory and (maybe) locks
So what do you think? Are threads still a menace to be avoided in favor of full OS processes? Or has the situation improved since 2003?
I think it has, and I think you’ve very nearly answered your own question as to why. Bare threads were dangerously prone to deadlocks, livelocks, context-trashing, and various other sorts of synchronization screwups – so language designers set out to encapsulate them in ways that gave better invariants and locality guarantees without sacrificing their performance advantages. I think Scala’s transactional memory stands out as a particularly elegant stab at the problem.
I don’t develop for Windows or communicate much with people who do, so I’m not equipped to judge how important Windows portability is in motivating these features. But the performance issue you called out is real and quite alive on Unix systems.
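To make the cost concrete, here is a minimal Python sketch (illustrative, not a benchmark). The in-process queue hands the consumer a reference to the very same object; the cross-process queue has to pickle on one side and unpickle on the other:

    import multiprocessing
    import queue
    import threading

    def send(q, payload):
        q.put(payload)

    if __name__ == "__main__":
        payload = {"fix": "3D", "sats": list(range(12))}

        # In-process: the consumer receives a reference to the same
        # object. Nothing is serialized; the queue is just a lock
        # wrapped around shared memory.
        tq = queue.Queue()
        threading.Thread(target=send, args=(tq, payload)).start()
        assert tq.get() is payload

        # Cross-process: the payload is pickled, pushed through a pipe,
        # and unpickled on the far side; the consumer gets an equal but
        # distinct copy. That round trip is the overhead in question.
        pq = multiprocessing.Queue()
        worker = multiprocessing.Process(target=send, args=(pq, payload))
        worker.start()
        received = pq.get()
        worker.join()
        assert received == payload and received is not payload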
UPDATE: Matt Campbell, who has materialized in the comments here, sent the original question and has given me permission to cite him. Thanks for a good question!
For those of you who have been following the development of reposurgeon, a pre-announcement: the next version, probably to be numbered 2.0, will directly read Subversion dumpfiles and repositories.
I’ve got this feature working now – it’s why my blogging has been scant recently – but I intend to have a really good regression-test suite in place and at least one large repo conversion done before I ship it for general use.
Note an important limitation: it will not write Subversion repos. So it will be useful as a conversion tool but not directly as an editor.
Fear the reposturgeon!