Some years back I wrote a book titled The Art of Unix Programming. My goal in that book was to convey the Zen of Unix to today’s generations of eager young Linux and *BSD programmers. In the spirit of that book, I feel impelled to point out a program I’ve recently learned of as a striking, near-perfect example of Unix style in the modern day. rsnapshot, you’re doing it right!
Category Archives: Software
Analysis of scaling problems in build systems
My post SCons is full of win today triggered some interesting feedback on scaling problems in SCons. In response to anecdotal assertions that SCons is unusably slow on large projects, I argued that build systems in general must scale poorly if they are to enforce correctness. Subsequently, I received a pointer to a very well executed empirical study of SCons performance to which I replied in the same fashion.
In this post, I intend to conduct a more detailed analysis of algorithmic requirements and complexity in an idealized build system, and demonstrate the implied scaling laws more rigorously. I will also investigate tradeoffs between correctness and performance using the same explanatory framework.
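As a preview of the shape of that argument (the variables and the exponent below are mine, introduced purely for illustration, not numbers from anyone’s benchmarks): model the project as a dependency DAG with n nodes and average fan-out d. A build system that guarantees correctness must re-derive the up-to-date status of every node from its dependencies on every invocation, so even a null build costs at least

    T_null(n) = \Omega(n \cdot d), \qquad d = \Theta(n^{\epsilon}), \; \epsilon > 0 \implies T_null(n) = \Omega(n^{1+\epsilon})

If the average fan-out itself grows with project size – as it tends to when widely included headers keep accumulating dependents – the null build is superlinear, and a tool can beat that bound only by skipping checks, i.e. by trading away exactly the correctness guarantee under discussion.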
SCons is full of win today
It’s not much of a secret that I loathe autotools and have been seeking to banish that festering pile of rancid crocks from my life. I took another step in that direction over the last four days, and have some interesting statistics to report.
Bookend consistency
I’ve been thinking recently about writing a shared-memory export for gpsd. The JSON-over-sockets client interface we have is very powerful and flexible, but it’s more than is needed when network access to the server is not required. For embedded deployments in particular, it would be useful to have a lower-overhead way of shipping results to clients.
Consequently, I’ve been thinking about coherence techniques for shared memory. In this particular case, we have one writer (gpsd) and multiple readers (the application clients). Updates to the shared-memory segment are long enough that writes aren’t guaranteed atomicity. It is permissible for a client to miss an update if it’s not inspecting the segment frequently enough, but required that after a read from the segment the client can always tell when it has a coherent update (as opposed to having read the segment while a write is in progress).
The obvious way to ensure update coherence would be with a semaphore. But a technique that is non-blocking and wait-free would be preferable. I have invented a method I call “bookend consistency”. I present it here for public critique, and also because I’m curious whether any of my commenters can identify it with a known, published algorithm. It was inspired by a vague, distant memory of pioneering work by Butler Lampson on lock-free algorithms.
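To make that concrete, here is a minimal sketch of the bookend idea in C – an illustration of the technique as described above, not gpsd’s actual export code. All the names and the fixed payload size are invented for the example, and a production version would also need memory barriers (or C11 atomics) to keep the compiler and CPU from reordering the stores and loads.

/* Minimal sketch of bookend consistency: one writer, many readers, no locks.
 * The writer brackets each update between two matching counters; a reader
 * trusts a copy of the segment only if the two counters in the copy agree. */
#include <string.h>

#define PAYLOAD_SIZE 1024              /* assumed fixed report size */

struct shm_export {
    unsigned bookend_begin;            /* bumped first on each update */
    char payload[PAYLOAD_SIZE];        /* the report data             */
    unsigned bookend_end;              /* brought up to match, last   */
};

/* Writer (the daemon): bump the leading bookend, copy the data,
 * then bring the trailing bookend up to match.
 * Caller guarantees len <= PAYLOAD_SIZE. */
void export_write(struct shm_export *seg, const char *data, size_t len)
{
    seg->bookend_begin++;
    memcpy(seg->payload, data, len);
    seg->bookend_end = seg->bookend_begin;
}

/* Reader (a client): snapshot the whole segment, then check that the
 * bookends in the snapshot match.  Returns 1 for a coherent copy,
 * 0 if a write was in progress while we were copying. */
int export_read(const struct shm_export *seg, struct shm_export *snap)
{
    memcpy(snap, seg, sizeof(*snap));
    return snap->bookend_begin == snap->bookend_end;
}

The writer never blocks and the reader never blocks; a reader that catches a write in progress simply discards the snapshot and tries again, which fits the requirement that missing an update is acceptable but mistaking a torn read for a coherent one is not.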
The bug that didn’t bite in the night-time: an anti-disaster story
A very curious thing happened with GPSD this week. In fact it’s so odd I’m still having trouble believing it. In software engineering we often have trouble getting seemingly simple things to work reliably. How does one react when an incredibly complex, fragile piece of bit-twiddling code works – perfectly – after six years without real-world testing, during which the surrounding architecture underwent such massive changes that any rational person would have expected the feature to bit-rot into garbage?
No, really, this one is weird. Let me unfold to you the strange tale of The RTCM2 Analyzer That Shouldn’t Have Worked. Really. At All.
The smartphone wars: Samsung folds under pressure
Some months ago I wrote (in Flattening the Smartphone Market) about the real significance of the Android 2.2 announcement. That was the moment that Google made clear that it intended to take control of the smartphone feature list from the cell carriers. Subsequently, carrier-loaded crapware and suppression of features like hotspot and tethering have been in decline under market pressure. The releases of the T-Mobile G-2 and the Samsung Galaxy S (marketed as “the pure Google experience”) have been indicators of this trend.
I should have added that 2.2 takes control of the smartphone feature list away from handset vendors as well. A leak by someone claiming to be a T-Mobile employee in the know alleged that Samsung has been dragging its feet on 2.2 upgrades for the Samsung Vibrant, hoping customers will upgrade to the Vibrant 4G in order to get the 2.2 that ships with it. Now comes word that Samsung has folded under pressure, backing off that maneuver and announcing an OTA update schedule for 2.2 on the Vibrant.
Embracing the suck
This is a followup to The Rollover of Doom: a Trap for Good Programmers. That post ended “This problem is a Chinese finger-trap for careful and conscientious programmers. The better you are, the worse this problem is likely to hurt your brain. Embrace the suck.”
That last phrase is a take on a military objurgation which translates as “The situation is bad. Deal with it.” Well, my friends, I am about to tell you how bad the GPS rollover situation really is.
The Rollover of Doom: a Trap for Good Programmers
GPS, the Global Positioning System, was designed in the 1970s under hardware-cost constraints that would seem ridiculous today. This makes interpreting the data it sends into a black art, and produces some really painful edge cases.
There’s one edge case in particular that I’ve come to think of as the Rollover of Doom. This morning I came up with an evil, clever hack for getting around it. I call it clever because you have to think your way out of a conceptual box to see it. As to why it’s evil…well, you’ll see. If you can figure it out.
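To give non-GPS people a feel for why this is a trap at all, here is the background sketch – emphatically not the hack this post teases, and the constants and names below are invented for illustration. The satellites broadcast the week number in only ten bits, so it wraps every 1024 weeks (about 19.6 years), and the receiver has to guess which 1024-week era it is living in. The conventional dodge is to pivot on a date known to be in the past, such as the software’s build date:

/* Background sketch of the rollover problem, not the hack from this post.
 * GPS counts weeks from 1980-01-06, but broadcasts only week % 1024, so
 * recovering calendar time means guessing the era.  The usual dodge is to
 * pick the first era that puts the result on or after some pivot date
 * known to be in the past.  PIVOT_UNIX_TIME here is an arbitrary example,
 * and the GPS-to-UTC leap-second offset is ignored for simplicity. */
#include <time.h>

#define GPS_EPOCH       315964800            /* Unix time of 1980-01-06T00:00:00Z */
#define SECS_PER_WEEK   (7 * 24 * 3600)
#define ERA_LENGTH      ((time_t)1024 * SECS_PER_WEEK)
#define PIVOT_UNIX_TIME 1288569600           /* example pivot: 2010-11-01 */

/* Map a 10-bit week number and time-of-week in seconds to Unix time,
 * choosing the first era whose result lands on or after the pivot. */
time_t gps_to_unix(unsigned week10, unsigned tow)
{
    time_t t = GPS_EPOCH + (time_t)week10 * SECS_PER_WEEK + (time_t)tow;
    while (t < PIVOT_UNIX_TIME)
        t += ERA_LENGTH;
    return t;
}

The catch is obvious once you see it: the guess is only good for about 19.6 years past the pivot, and only if the pivot itself is right – which is why the edge case deserves the name Rollover of Doom, and why the actual escape from it takes a less straightforward shape.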
Plug and Pray in GPS-land
Welcome, ladies and gentlemen, to another darkly humorous tale of the seamy side of GPS interfacing. GPSD working with USB GPS mice has, when properly installed, lovely plug-and-play self-configuring behavior. That is, you plug a USB GPS into a USB port, the hotplug system notifies the gpsd daemon that the GPS is available, the daemon records this fact…and subsequently when you start up any GPSD client application It Just Works. Well, usually. There’s a dangerous weakness in the machinery, and yesterday it came around and bit us in the ass.
NMEA 2000 and the Obverse of Open Source
In discussion of the GPSD project, a commenter suggested that its role might be going away in part because the NMEA 0183 protocol historically used in GPS sensors is being replaced by NMEA 2000. So far, this is not true, and the reasons it’s not true are worth a look because they illustrate a sort of flip side of the economic and technological trends driving the adoption of open source and open protocols in the wider technology market.
Off with their header files!
I released a new software tool today. The surprise about this one is that it turns out to be consistently more useful than I expected. And thereby hangs a tale.
If RCS can stand it, why can’t your system?
I’ve written software for a lot of different reasons besides pure utility in the past. Sometimes I’ve been making an aesthetic statement, sometimes I’ve hacked to perpetuate a tribal in-joke, and at least once I have written a substantial piece of code exactly because the domain experts solemnly swore that job was impossible to automate (wrong, bwahahaha).
Here’s a new one. Today I released a program that is ugly and only marginally useful, but specifically designed to shame other hackers into doing the right thing.
It’s good to be ubiquitous
So, while trying to discover the minor version of the Android 2.2 running on my G-2, I touched the tab labeled “Open source licenses”. Scrolled down, and “Eric S. Raymond” popped out at me.
Bleg for info – Linux backup tools and services
One of the comments that got lost in the recent database restore was a pointer to a backup program I can’t offhand remember the name of. I remember that it’s a command-line tool written in Perl (alas) and meant to be run from a cron job; what it does underneath is rsync with hardlinks to the remote target, so you get a Time-Machine-like effect for not much beyond the space requirement of the initial dump. Can someone remind me what this is, please?
Also, I’m in the market for a dropbox-like service that I can rsync to and from, for off-site backup. Any suggestions?
UPDATE: rsnapshot is what I was trying to remember. A very elegant little tool, thoughtfully written and handy. I may go with rsync.net for offsite backup.
Lessons learned from reposurgeon
OK, I’m officially coming out of my cave now, after what amounted to a two-week coding orgy. I’ve shipped reposurgeon 0.5; the code looks and feels pretty solid, the documentation is written, the test suite is in place, and I’ve got working repo-rebuild support for two systems, one of which is not git.
The rest is cleanup and polishing. Likely the next release or the one after will be 1.0. It’s time for an after-action report. As usual, I learned a few things from this project. Some are worth sharing.
Announcing reposurgeon – a tool for the good new days
I’ve been mostly blog-silent for the last week because I’ve been working my tail off on a new project. It’s reposurgeon, a tool for performing surgery on repository histories, and there are several interesting things to note about it.
Children of a Lesser Good
Regular readers of this blog are probably pretty clued in about my better-known software projects – gpsd, fetchmail, giflib, libpng, INTERCAL, ncurses, Battle for Wesnoth, Emacs VC and GUD modes, and the like. If those are the best, what about the rest? Here’s a tour of some of the lesser-known stuff I’ve written or had my fingers in. Warning: obscurity, trivia, and obsolescence lie ahead!
Risk, Verification, and the INTERCAL Reconstruction Massacree
This is the story of the INTERCAL Reconstruction Massacree, an essay in risk versus skepticism and verification in software development with a nod in the general direction of Arlo Guthrie.
About three hours ago, as I began to write, I delivered on a promise to probably my most distinguished customer ever – Dr. Donald Knuth. Don (he asked me to call him that, honest!) had requested a bug fix in INTERCAL, which he plans to use as the subject of a chapter in his forthcoming book Selected Papers on Fun and Games. As of those three hours ago, Donald Knuth’s program is part of the INTERCAL compiler’s regression-test suite.
But I’m not actually here today to talk about Donald Knuth, I’m here to talk about risk versus skepticism and verification in software engineering – in five part harmony and full orchestration, using as a case study my recent experiences in (once again) calling INTERCAL forth from the realm of the restless dead.
Killing the Founder
During the controversy I described in Condemning Censorship, Even of Werewolves, one of the parties characterized me as “nuts and in decline”. This failed to bother me, and not because I’m insulated against such insults by my natural arrogance. OK, I am largely insulated against such insults by my natural arrogance, but that’s not the main reason I easily shed this one.
In general I’m much less bothered by people who think I’m crazy than they usually think I should be, because I know a lot about the life cycle of reform movements. I studied this topic rather carefully in early 1998, just after Netscape announced its intention to release the Mozilla sources, when I noticed that a burgeoning reform movement seemed to need me to lead it. I was particularly influenced in my thinking by the history of John Humphrey Noyes and the Oneida Community.
Here is part of what I learned: There comes a point in the development of every reform movement at which it has to kill the founder. Or anathematize him, or declare him out of his mind. Or neutralize him in a more subtle way by putting him on a pedestal so high that he can’t actually influence events on the ground.
Software licenses as conversation
An article published yesterday, “I could license you to use this software, but then I’d have to kill you”, calls out some odd outliers in the open-source licensing space – odder, actually, than any I ever reviewed when I was the founding president of the Open Source Initiative. I wonder, though, if the author actually gets all the levels of the joke.