The C/UNIX library support for time and calendar programming is a nasty mess of historical contingency. I have grown tired of having to re-learn its quirks every time I’ve had to deal with it, so I’m doing something about that.
Announcing Time, Clock, and Calendar Programming In C, a document which attempts to chart the historical clutter (so you can ignore it once you know why it’s there) and explain the mysteries.
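To give just one flavor of the clutter: even printing today's date requires knowing two of the historical quirks baked into struct tm. A minimal sketch, using only standard C library calls:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t now = time(NULL);
        struct tm *tm = localtime(&now);

        /* Historical quirks: tm_year counts years since 1900, and
         * tm_mon counts months from zero.  Forgetting the offsets is
         * a classic source of off-by-1900 and off-by-one-month bugs. */
        printf("%04d-%02d-%02d\n",
               tm->tm_year + 1900, tm->tm_mon + 1, tm->tm_mday);
        return 0;
    }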
What I’ve released is a 0.9 beta version. My hope is that it will rapidly attract some thoroughgoing reviews so I can release a 1.0 in a week or so. More than that, I would welcome a subject-matter expert as a collaborator.
In a recent discussion on G+, a friend of mine made a conservative argument for textual over binary interchange protocols on the grounds that programs always need to be debugged, and thus readability of the protocol streams by humans trumps the minor efficiency gains from binary packing.
I agree with this argument; I’ve made it often enough myself, notably in The Art of Unix Programming. But it was something his opponent said that nudged at me. “Provable programs are the future,” he declaimed, pointing at seL4 and CompCert as recent examples of formal verification of real-world software systems. His implication was clear: we’re going to get so much better at turning specifications into provably correct implementations that debuggability will soon cease to be a strong argument for protocols that can be parsed by a Mark I Eyeball.
Oh foolish, foolish child, that wots not of the Rule of Technical Greed.
If you’ve ever wanted a JSON parser that can unpack directly to fixed-extent C storage (look, ma, no malloc!) I’ve got the code for you.
The microjson parser is tiny (fewer than 700 LOC), fast, and very sparing of memory. It is suitable for use in small-memory embedded environments and in deployments where malloc() is forbidden to prevent memory leaks.
This project is a spin-out of code used heavily in GPSD; thus, the code has been tested on dozens of different platforms in hundreds of millions of deployments.
It has two restrictions relative to standard JSON: the special JSON “null” value is not handled, and object array elements must be homogeneous in type.
A programmer’s guide to building parsers with microjson is included in the distribution.
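To give a taste of the interface: you declare an attribute table mapping JSON members to preallocated C variables, then hand it to json_read_object(). The table style below is microjson's real API as used in GPSD, but the member names in this sample are invented for illustration:

    #include <stdio.h>
    #include <stdbool.h>
    #include "mjson.h"

    int main(void)
    {
        /* All storage is fixed-extent and declared up front - no malloc. */
        static int count;
        static bool flag;
        static char name[64];

        static const struct json_attr_t attrs[] = {
            {"count", t_integer, .addr.integer = &count},
            {"flag",  t_boolean, .addr.boolean = &flag},
            {"name",  t_string,  .addr.string = name, .len = sizeof(name)},
            {NULL},
        };

        int status = json_read_object("{\"count\": 3, \"flag\": true, \"name\": \"demo\"}",
                                      attrs, NULL);
        if (status != 0)
            puts(json_error_string(status));
        else
            printf("count=%d flag=%d name=%s\n", count, flag, name);
        return 0;
    }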
I’ve been blog-silent the last couple of days because I’ve been chasing down the bug I mentioned in Request for help – I need a statistician.
I have since found and fixed it. Thereby hangs a tale, and a cautionary lesson.
GPSD has a serious bug somewhere in its error modeling. What it affects is the position-error estimates GPSD computes for GPSes that don’t compute them internally and report them on the wire themselves. The code produces plausible-looking error estimates, but the estimates lack a symmetry property that correct ones would have.
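For context, the model in question is the standard covariance one: build a design matrix H of direction cosines to each satellite, take inv(H^T H) to get the dilution-of-precision figures, and scale by an assumed user-equivalent range error. A toy sketch of the final step (identifiers and the UERE figure are illustrative, not GPSD's actual code):

    #include <math.h>

    #define UERE 8.0  /* assumed user-equivalent range error, in meters */

    /* xdop and ydop are square roots of diagonal entries of inv(H^T H);
     * HDOP is their root-sum-square, so the horizontal error estimate
     * is HDOP scaled by the assumed range error. */
    double horizontal_error_m(double xdop, double ydop)
    {
        double hdop = sqrt(xdop * xdop + ydop * ydop);
        return hdop * UERE;
    }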
I need a couple of hours of help from an applied statistician who can read C and has experience using covariance-matrix methods for error estimation. Direct interest in GPS and geodesy would be a plus.
I don’t think this is a large problem, but it’s just a little beyond my competence. I probably know enough statistics and matrix algebra to understand the fix, but I don’t know enough to find it myself.
Hundreds of millions of Google Maps users might have reason to be grateful to anyone who helps out here.
UPDATE: Problem solved, see next post.
Sometimes reading code is really difficult, even when it’s good code. I have a challenge for all you hackers out there…
Yesterday I shipped cvs-fast-export 1.15, with a significant performance improvement produced by replacing a naive O(n**3) sort with a properly tuned O(n log n) version.
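The shape of the change, reduced to a sketch (a hypothetical commit record stands in for cvs-fast-export's real structures): hand a comparator to the C library's tuned qsort(3) instead of hand-rolling nested comparison loops.

    #include <stdlib.h>

    struct commit {
        long timestamp;   /* hypothetical sort key */
    };

    static int by_timestamp(const void *a, const void *b)
    {
        const struct commit *ca = a, *cb = b;
        /* branchless three-way compare, safe against overflow */
        return (ca->timestamp > cb->timestamp) - (ca->timestamp < cb->timestamp);
    }

    /* O(n log n), versus the old nested-loop approach. */
    void sort_commits(struct commit *commits, size_t n)
    {
        qsort(commits, n, sizeof(commits[0]), by_timestamp);
    }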
In ensuing discussion on G+, one of my followers there asked if I thought this was likely to produce a real performance improvement, as in small inputs the constant setup time of a cleverly tuned algorithm often dominates the nominal savings.
This is one of those cases where an intelligent question elicits knowledge you didn’t know you had. I discovered that I do believe strongly that cvs-fast-export’s workload is dominated by large repositories. The reason is a kind of adverse selection phenomenon that I think is very general to old technologies with high exit costs.
The rest of this blog post will use CVS as an example of the phenomenon, and may thus be of interest even to people who don’t specifically care about version-control systems.
I just had a rather hair-raising experience with a phase-of-moon-dependent bug.
I released GPSD 3.11 this last Saturday (three days ago) to meet a deadline for a Debian freeze. Code tested ninety-six different ways, run through four different static analyzers, the whole works. Because it was a hurried release I deliberately deferred a bunch of cleanups and feature additions in my queue. Got it out on time and it’s pretty much all good – we’ve since turned up two minor build failures in two unusual feature-switch cases, and one problem with the NTP interface code that won’t affect reasonable hardware.
I’ve been having an extremely productive time since then, chewing through all the stuff I had deferred. New features for gpsmon, improvements for GPSes watching GLONASS birds, a nice space optimization for embedded systems, some code to prevent certain false-match cases in structured AIS Type 6 and Type 8 messages, merging some Android port tweaks, a righteous featurectomy or two. Good clean fun – and of course I was running my regression tests frequently and noting when I’d done so in my change comments.
Everything was going swimmingly until about two hours ago. Then, as I was verifying a perfectly innocent-appearing tweak to the SiRF-binary driver, the regression tests went horribly, horribly wrong. Not just the SiRF binary testloads, all of them.
I shipped point releases of cvs-fast-export and reposurgeon today. Both of them are intended to fix some issues around the translation of ignore patterns in CVS and Subversion repositories. Both releases illustrate, I think, a general point about software engineering: sometimes, it’s better to punt tricky edge cases to a human than to write code that is doomed to be messy, over-complex, and a defect attractor.
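The pattern, in miniature: when the translator hits a case it can't map cleanly, it should flag the spot for human review instead of guessing. A sketch (hypothetical function, not the actual code in either tool):

    #include <stdbool.h>
    #include <stdio.h>

    /* Emit an ignore pattern, or punt it to a human if we can't
     * translate it with confidence. */
    void emit_ignore(FILE *out, const char *pattern, bool translatable)
    {
        if (translatable) {
            fprintf(out, "%s\n", pattern);
        } else {
            /* Leave a visible marker in the output rather than
             * silently emitting a wrong guess. */
            fprintf(out, "# FIXME: untranslated CVS pattern: %s\n", pattern);
            fprintf(stderr, "warning: punted '%s' for human review\n", pattern);
        }
    }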
Here at Eric Conspiracy Secret Labs, we ship no code before its time. Even if that means letting it stay in beta for, er, nearly twelve years. But at long last I believe my shipper tool is ready for the world.
Since I’ve already described shipper in detail I won’t rehearse its features again. Suffice it to say that if you’re the kind of hacker who ships point releases more often than about once a week, and you’re tired of all the fiddly handwork that implies, you want this more badly than you know.
There’s a recent bug filed against giflib titled “giflib has too many unnecessary API changes”. For a service library as widely deployed as this one (basically, on everything with a screen and network access – computers, smartphones, game consoles, ATMs) that is a serious complaint. Even minor breaks in API compatibility imply a whole lot of code rebuilds. These are not just expensive (requiring programmer attention); they are places for bugs to creep in.
But “Never change an API” isn’t a good answer either. In this case, the small break that apparently triggered this report was motivated by a problem with writing wrappers for giflib in C# and other languages with automatic memory management. The last round of major changes before this was required to handle GIF animation blocks correctly and make the library thread-safe. Time marches on; service libraries have to change, and APIs with them, even when change is expensive.
How does one properly reconcile these pressures? I use a small set of practice rules I think are simple and effective, and which I think are well illustrated by the way I apply them to giflib. I’m writing about them in public because I think they generalize.
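One such accommodation, illustrated in miniature: when a signature must change, the old entry point can live on for a transition release as a thin deprecated wrapper. The new DGifOpenFileName() signature below is giflib 5.x's real one; the wrapper name is hypothetical.

    #include <gif_lib.h>

    /* Old 4.x-shaped entry point, kept as a shim so existing callers
     * rebuild without source changes during the transition. */
    GifFileType *DGifOpenFileNameV4(const char *filename)
    {
        int error;  /* 5.x reports failures via this out-parameter */
        return DGifOpenFileName(filename, &error);
    }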
Blogging has been light lately because I’ve been up to my ears in reposurgeon’s most serious challenge ever. Read on for a description of the ugliest heap of version-control rubble you are ever likely to encounter, what I’m doing to fix it, and why you do in fact care – because I’m rescuing the history of one of the defining artifacts of the hacker culture.
I just shipped version 1.10 of cvs-fast-export with a new feature: it now emits fast-import files that contain CVS’s default ignore patterns. This is a request for help from people who know CVS better than I do.
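For reference, these are the documented CVS defaults (from the cvsignore section of the CVS manual) as they would land in a .gitignore, one pattern per line. Note that the pattern for files beginning with # must be escaped as \#*, because a leading # opens a comment in .gitignore:

    RCS
    SCCS
    CVS
    CVS.adm
    RCSLOG
    cvslog.*
    tags
    TAGS
    .make.state
    .nse_depinfo
    *~
    \#*
    .#*
    ,*
    _$*
    *$
    *.old
    *.bak
    *.BAK
    *.orig
    *.rej
    .del-*
    *.a
    *.olb
    *.o
    *.obj
    *.so
    *.exe
    *.Z
    *.elc
    *.ln
    core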
This is a brief heads-up that the reason I’ve been blog-silent lately is that I’m concentrating hard on a sprint with what I consider a large payoff: getting the Emacs project fully converted to git. In retrospect, choosing Bazaar as its DVCS was a mistake that has imposed unnecessary friction costs on a lot of contributors. RMS gets this and we’re moving.
I’m also talking with RMS about the possibility that it’s time to shoot Texinfo through the head and go with a more modern, Web-friendly master format. Oh, and time to abolish info entirely in favor of HTML. He’s not entirely convinced yet of this, but he’s listening.
Here’s a late New Year’s gift for all you repository-editing fiends out there: the long-awaited and perhaps long-dreaded reposurgeon 3.0.
In “Heads up: the reposturgeon is mutating!” I described the downside of a strategy of incremental small language changes aimed at preserving compatibility: you can wind up trapped by suboptimal early decisions. Sometimes you have to bust out and do the big redesign, which I did – and that’s why there’s a bump in the major version number (the last time that happened was when reposurgeon got the ability to read Subversion dump files directly).
The biggest change is that the command language syntax has mutated from VSO to SVO. What? You’re not up on your comparative linguistic morphology and have no idea what I’m talking about? That’s Verb-Subject-Object to Subject-Verb-Object.
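Concretely, where you used to lead with the command verb and follow with a selection, the selection expression now comes first. Illustrative lines, not quoted from the manual:

    # 2.x, verb first (VSO):
    delete /typo/
    # 3.x, selection first (SVO):
    /typo/ delete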
My first gift of the new year. Read it here.
Not long ago I pulled the plug on one of the two CVS export utilities I was maintaining. One consequence of this is that I decided I needed to get the other one out of beta and into a state I would be willing to ship as 1.0.
And lo, it has come to pass. I just shipped cvs-fast-export 1.0. It has been well field-tested; a couple of weeks ago I used it to rescue the history of GNU Troff.
There are several CVS exporters out there that suck pretty badly. (To be fair, the perversity of CVS is such that doing an even half-decent job of lifting CVS histories into a modern version-control system is quite difficult.) Now that this one is shipped I know of exactly two that don’t suck. The other one is Michael Haggerty’s cvs2git, which I’m working with him on improving.
Tradeoffs: cvs2git is slow and a bit clunky to use (I’m improving the latter but can’t fix the former). cvs-fast-export is blazingly fast (like, 3.7K commits a minute) but has a hard repository-size limit – above it you run out of core and the OS reaps the process in mid-flight. (Very few projects will hit this limit.)
For each tool there are weird CVS edge cases that it gets wrong. The sets of edge cases are different. cvs2git’s may be smaller, but I’m not sure of that; we haven’t set up head-to-head testing yet. Most projects will not trip over either set of problems.
cvs-fast-export is better documented, especially around error conditions.
Help stamp out CVS in our lifetime!
There are a lot of things people writing software do in the world of bits that don’t have easy analogs in the world of atoms. Sometimes it can be tremendously clarifying when one of those things gets a name, as for example when Martin Fowler invented the term “refactoring” to describe modifying a codebase with the intent to improve its structure or aesthetics without changing its behavior.
There’s a related thing we do a lot when trying to wrap our heads around large, complicated codebases. Often the most fruitful way to explore code is to modify it, because you don’t really know you have understood a piece of code until you can modify it successfully.
Sometimes – often – this can feel like launching an expedition into the untamed jungle of code, from some base camp on the periphery deeper and deeper into trackless wilderness. It is certainly possible to lose your bearings. And large, old codebases can be very jungly, overgrown and organic – full of half-planned and semi-random modifications, dotted with occasional clearings where the light gets in and things locally make sense.
There’s an ancient Unix maxim to the effect that a tool that gets 85% of your job done now is preferable to one that gets 100% done never. Sometimes chasing corner cases is more work than the problem really justifies.
In today’s dharma lesson, I shall illustrate this principle with a real-world and useful example.