Oct 11

A low-performance mystery

OK, I’ll admit it. I’m stumped by a software-engineering problem.

This is not a thing that happens often, but I’m in waters relatively unknown to me. I’ve been assiduously avoiding multi-threaded programming for a long time, because solving deadlock, starvation, and insidious data-corruption-by-concurrency problems isn’t really my idea of fun. Other than one minor brush with it handling PPS signals in GPSD, I’ve managed before this to avoid any thread-entanglement at all.

But I’m still trying to make cvs-fast-export run faster. About a week ago an Aussie hacker named David Leonard landed a brilliant patch series in my mailbox. Familiar story: has a huge, gnarly CVS repo that needs converting, got tired of watching it grind for days, went in to speed it up, found a way. In fact he applied a technique I’d never heard of (Bloom filtering) to flatten the worst hot spot in the code, an O(n**3) pass used to compute parent/child links in the export code. But it still needed to be faster.
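
For those who haven’t met it: a Bloom filter is a probabilistic set-membership test that can say “definitely absent” very cheaply, at the price of occasional false positives. Here’s a minimal sketch of the technique in C – illustrative only; the width and hashing are made up, and this is not David’s actual patch:

    #include <stdint.h>
    #include <stdbool.h>

    #define BLOOM_BITS 1024                 /* filter width: illustrative only */

    typedef struct {
        uint64_t bits[BLOOM_BITS / 64];
    } bloom_t;

    static void bloom_add(bloom_t *b, uint64_t hash)
    {
        uint64_t bit = hash % BLOOM_BITS;
        b->bits[bit / 64] |= 1ULL << (bit % 64);
    }

    static bool bloom_maybe_has(const bloom_t *b, uint64_t hash)
    {
        uint64_t bit = hash % BLOOM_BITS;
        return (b->bits[bit / 64] >> (bit % 64)) & 1;
    }

Inside a hot O(n**3) loop the payoff is that a false answer from the filter lets you skip the expensive exact comparison entirely; a true answer just means you fall through and do the real check. False positives cost time but never correctness. (A production filter would probe with several independent hashes to push the false-positive rate down.)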

After some discussion we decided to tackle parallelizing the code in the first stage of analysis. This works – separately – on each of the input CVS masters, digesting them into in-core revision lists and generating whole-file snapshots for each CVS delta; later these will become the blobs in the fast-export stream. Then there’s a second stage that merges these per-file revision lists, and a third stage that exports the merged result.

Here’s more detail, because you’ll need it to understand the rest. Each CVS master consists of a sequence of deltas (sequences of add-line and delete-line operations) summing up to a sequence of whole-file states (snapshots – eventually these will become blobs in the translated fast-import-stream). Each delta has an author, a revision date, and a revision number (like 1.3 or 1.2.1.1). Implicitly they form a tree. At the top of the file is a tag table mapping names to revision numbers, and some other relatively unimportant metadata.

The goal of stage 1 is to digest each CVS master into an in-core tree of metadata and a sequence of whole-file snapshots, with unique IDs in the tree indexing the snapshots. The entire collection of masters is made into a linked list of these trees; this is passed to stage 2, where black magic that nobody understands happens.
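
In C terms you can picture the stage-1 output roughly like this – an illustrative sketch, not cvs-fast-export’s actual declarations:

    #include <time.h>

    struct cvs_number {                  /* a revision number like 1.2.1.1 */
        int   depth;
        short n[16];
    };

    struct delta {
        struct cvs_number number;        /* implies position in the revision tree */
        time_t            date;
        const char       *author;
        unsigned long     snapshot_id;   /* unique ID indexing the whole-file snapshot */
        struct delta     *parent;        /* parent/child links computed during analysis */
        struct delta     *children;
        struct delta     *sibling;
    };

    struct master {                      /* one per CVS master file */
        struct delta  *root;             /* in-core tree of metadata */
        struct master *next;             /* linked list handed to stage 2 */
    };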

This first stage seems like a good target for parallelization because the analysis of each master consists of lumps of I/O separated by irregular stretches of compute-intensive data-shuffling in core. In theory, if the program were properly parallelized, it would seldom actually block on an I/O operation; instead while any one thread was waiting on I/O, the data shuffling for other masters would continue. The program would get faster – possibly much faster, depending on the time distribution of I/O demand.
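
The natural shape for that is a pool of worker threads pulling masters off a shared queue. Here’s a minimal sketch of what I mean, in which queue_pop() and analyze_master() are hypothetical stand-ins for the real machinery:

    #include <pthread.h>
    #include <stddef.h>

    struct master;                                 /* from the sketch above */
    extern struct master *queue_pop(void);         /* mutex-protected; NULL = queue empty */
    extern void analyze_master(struct master *);   /* hypothetical: stage-1 work on one master */

    static void *worker(void *arg)
    {
        struct master *m;
        (void)arg;
        while ((m = queue_pop()) != NULL)
            analyze_master(m);     /* blocking I/O here lets other threads run */
        return NULL;
    }

    static void run_stage1(void)
    {
        enum { NTHREADS = 8 };     /* would really be tuned to cores and I/O mix */
        pthread_t threads[NTHREADS];

        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&threads[i], NULL, worker, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(threads[i], NULL);
    }

While one worker is blocked reading its master from disk, the scheduler can hand the CPU to another worker with in-core shuffling to do; that overlap is the entire theoretical speedup.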

Well, that’s the theory, anyway. Here’s what actually happened…

Continue reading

Oct 09

Implementing re-entrant parsers in Bison and Flex

In days of yore, Yacc and Lex were two of the most useful tools in a Unix hacker’s kit. The way they interfaced to client code was, however, pretty ugly – global variables and magic macros hanging out all over the place. Their modern descendants, Bison and Flex, have preserved that ugliness in order to be backward-compatible.

That rebarbative old interface generally broke a lot of rules about program structure and information hiding that we now accept as givens (to be fair, most of those had barely been invented at the time it was written in 1970 and were still pretty novel). It becomes a particular problem if you want to run multiple instances of your generated parser (or, heaven forfend, multiple parsers with different grammars) in the same binary without having them interfere with each other.

But it can be done. I’m going to describe how because (a) it’s difficult to extract from the documentation, and (b) right now (that is, using Bison 3.0.2 and Flex 2.5.35) the interface is in fact slightly broken and there’s a workaround you need to know.
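
For orientation before diving in: the core declarations that request re-entrancy look like this. This is a minimal sketch of the standard pure-parser setup, not the full recipe (or the workaround) from the post:

    /* In the Bison grammar (parser.y): */
    %define api.pure full
    %lex-param   {yyscan_t scanner}
    %parse-param {yyscan_t scanner}

    /* In the Flex specification (lexer.l): */
    %option reentrant bison-bridge
    %option noyywrap

    /* Driving a parse from C; all lexer state lives in the scanner object: */
    yyscan_t scanner;
    yylex_init(&scanner);
    yyparse(scanner);
    yylex_destroy(scanner);

With these in place you can run as many scanner/parser instances in one binary as you like – in theory.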

Continue reading

Oct 03

48-hour release heads-up for Time-Clock-Calendar HOWTO

I’ve been gifted with a lot of help on my draft of Time, Clock, and Calendar Programming In C. I think it’s almost time to ship 1.0, and plan to do so this weekend. Get your last-minute fixes in now!

I will of course continue to accept corrections and additions after 1.0. Thanks to everyone who contributed. My blog and G+ followers were very diligent in spotting typos, helping fill in and correct standards history, and pointing out the more obscure gotchas in the API.

What I’ve discovered is that the Unix calendar-related API is a pretty wretched shambles. Which leads directly to the topic of my next blog entry…

Sep 29

Announcing: Time, Clock, and Calendar Programming In C

The C/UNIX library support for time and calendar programming is a nasty mess of historical contingency. I have grown tired of having to re-learn its quirks every time I’ve had to deal with it, so I’m doing something about that.

Announcing Time, Clock, and Calendar Programming In C, a document which attempts to chart the historical clutter (so you can ignore it once you know why it’s there) and explain the mysteries.
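
To give you the flavor of what needs explaining, here is one small, entirely real specimen of the clutter:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t now = time(NULL);
        struct tm tm;

        gmtime_r(&now, &tm);       /* re-entrant; plain gmtime() uses static storage */
        printf("%04d-%02d-%02d\n",
               tm.tm_year + 1900,  /* years are counted from 1900... */
               tm.tm_mon + 1,      /* ...months from 0... */
               tm.tm_mday);        /* ...but days of the month from 1 */
        return 0;
    }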

What I’ve released is an 0.9 beta version. My hope is that it will rapidly attract some thoroughgoing reviews so I can release a 1.0 in a week or so. More than that, I would welcome a subject matter expert as a collaborator.

Sep 27

Program Provability and the Rule of Technical Greed

In a recent discussion on G+, a friend of mine made a conservative argument for textual over binary interchange protocols on the grounds that programs always need to be debugged, and thus readability of the protocol streams by humans trumps the minor efficiency gains from binary packing.

I agree with this argument; I’ve made it often enough myself, notably in The Art of Unix Programming. But it was something his opponent said that nudged at me. “Provable programs are the future,” he declaimed, pointing at seL4 and CompCert as recent examples of formal verification of real-world software systems. His implication was clear: we’re going to get so much better at turning specifications into provably correct implementations that debuggability will soon cease to be a strong argument for protocols that can be parsed by a Mark I Eyeball.

Oh foolish, foolish child, that wots not of the Rule of Technical Greed.

Continue reading

Sep 25

Announcing microjson

If you’ve ever wanted a JSON parser that can unpack directly to fixed-extent C storage (look, ma, no malloc!) I’ve got the code for you.

The microjson parser is tiny (less than 700 LOC), fast, and very sparing of memory. It is suitable for use in small-memory embedded environments and deployments where malloc() is forbidden in order to prevent leaked-memory issues.

This project is a spin-out of code used heavily in GPSD; thus, the code has been tested on dozens of different platforms in hundreds of millions of deployments.

It has two restrictions relative to standard JSON: the special JSON “null” value is not handled, and object array elements must be homogeneous in type.
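
Usage looks roughly like this. The sketch assumes the attribute-table style of the GPSD parser microjson descends from; the header name and exact signatures here are from that lineage, so check the included guide before relying on them:

    #include <stdbool.h>
    #include "mjson.h"                 /* header name assumed */

    /* All target storage is fixed-extent -- no malloc anywhere. */
    static char name[64];
    static int  count;
    static bool flag;

    static const struct json_attr_t attrs[] = {
        {"name",  t_string,  .addr.string = name, .len = sizeof(name)},
        {"count", t_integer, .addr.integer = &count},
        {"flag",  t_boolean, .addr.boolean = &flag},
        {NULL},
    };

    static int parse(const char *buf)
    {
        /* returns 0 on success, nonzero on parse error */
        return json_read_object(buf, attrs, NULL);
    }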

A programmer’s guide to building parsers with microjson is included in the distribution.

Sep 19

Request for help – I need a statistician

GPSD has a serious bug somewhere in its error modeling. What it affects is the position-error estimates GPSD computes for GPSes that don’t compute them internally and report them on the wire. The code produces plausible-looking error estimates, but they lack a symmetry property that they should have to be correct.

I need a couple of hours of help from an applied statistician who can read C and has experience using covariance-matrix methods for error estimation. Direct interest in GPS and geodesy would be a plus.

I don’t think this is a large problem, but it’s just a little beyond my competence. I probably know enough statistics and matrix algebra to understand the fix, but I don’t know enough to find it myself.

Hundreds of millions of Google Maps users might have reason to be grateful to anyone who helps out here.

UPDATE: Problem solved, see next post.

Sep 02

Adverse selection and old technology

Yesterday I shipped cvs-fast-export 1.15, with a significant performance improvement produced by replacing a naive O(n**3) sort with a properly tuned O(n log n) version.

In ensuing discussion on G+, one of my followers there asked if I thought this was likely to produce a real performance improvement, since on small inputs the constant setup time of a cleverly tuned algorithm often dominates the nominal savings.
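
His worry is the standard one about tuned algorithms, and it’s why serious sort implementations cut over to a simple algorithm below some small-n threshold – a generic sketch, not cvs-fast-export’s actual code:

    #include <stddef.h>

    typedef struct item item_t;                   /* hypothetical element type */
    extern void insertion_sort(item_t *, size_t);
    extern void merge_sort(item_t *, size_t);

    #define SMALL_CUTOFF 16                       /* threshold is illustrative */

    void tuned_sort(item_t *a, size_t n)
    {
        if (n < SMALL_CUTOFF)
            insertion_sort(a, n);   /* O(n**2), but near-zero setup cost */
        else
            merge_sort(a, n);       /* O(n log n), once n is big enough to pay for it */
    }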

This is one of those cases where an intelligent question elicits knowledge you didn’t know you had. I discovered that I do believe strongly that cvs-fast-export’s workload is dominated by large repositories. The reason is a kind of adverse selection phenomenon that I think is very general to old technologies with high exit costs.

The rest of this blog post will use CVS as an example of the phenomenon, and may thus be of interest even to people who don’t specifically care about old version control systems.

Continue reading

Aug 27

Phase-of-moon-dependent bugs suck

I just had a rather hair-raising experience with a phase-of-moon-dependent bug.

I released GPSD 3.11 this last Saturday (three days ago) to meet a deadline for a Debian freeze. Code tested ninety-six different ways, run through four different static analyzers, the whole works. Because it was a hurried release I deliberately deferred a bunch of cleanups and feature additions in my queue. Got it out on time and it’s pretty much all good – we’ve since turned up two minor build failures in two unusual feature-switch cases, and one problem with the NTP interface code that won’t affect reasonable hardware.

I’ve been having an extremely productive time since, chewing through all the stuff I had deferred. New features for gpsmon, improvements for GPSes watching GLONASS birds, a nice space optimization for embedded systems, some code to prevent certain false-match cases in structured AIS Type 6 and Type 8 messages, merging some Android port tweaks, a righteous featurectomy or two. Good clean fun – and of course I was running my regression tests frequently and noting when I’d done so in my change comments.

Everything was going swimmingly until about two hours ago. Then, as I was verifying a perfectly innocent-appearing tweak to the SiRF-binary driver, the regression tests went horribly, horribly wrong. Not just the SiRF binary testloads, all of them.

Continue reading

Aug 12

Ignoring: complex cases

I shipped point releases of cvs-fast-export and reposurgeon today. Both of them are intended to fix some issues around the translation of ignore patterns in CVS and Subversion repositories. Both releases illustrate, I think, a general point about software engineering: sometimes, it’s better to punt tricky edge cases to a human than to write code that is doomed to be messy, over-complex, and a defect attractor.

Continue reading

Jun 01

At long last, shipper 1.0

Here at Eric Conspiracy Secret Labs, we ship no code before its time. Even if that means letting it stay in beta for, er, nearly twelve years. But at long last I believe my shipper tool is ready for the world.

Since I’ve already described shipper in detail I won’t rehearse its features again. Suffice it to say that if you’re the kind of hacker who ships point releases more often than about once a week, and you’re tired of all the fiddly handwork that implies, you want this more badly than you know.

May 17

Managing compatibility issues in ubiquitous code

There’s a recent bug filed against giflib titled giflib has too many unnecessary API changes. For a service library as widely deployed as it is (basically, on everything with a screen and network access – computers, smartphones, game consoles, ATMs) this is a serious complaint. Even minor breaks in API compatibility imply a whole lot of code rebuilds. These are not just expensive (requiring programmer attention); they are also places for bugs to creep in.

But “Never change an API” isn’t a good answer either. In this case, the small break that apparently triggered this report was motivated by a problem with writing wrappers for giflib in C# and other languages with automatic memory management. The last round of major changes before this was required to handle GIF animation blocks correctly and make the library thread-safe. Time marches on; service libraries have to change, and APIs with them, even when change is expensive.
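
Here is what those costs look like on the ground: downstream code sprouts version-conditional goo like the following. giflib 5 really did add an error out-parameter to DGifOpenFileName(), though this particular wrapper is composed for illustration:

    #include <gif_lib.h>

    GifFileType *open_gif(const char *path)
    {
    #if defined(GIFLIB_MAJOR) && GIFLIB_MAJOR >= 5
        int errcode;
        return DGifOpenFileName(path, &errcode);   /* 5.x: error code out-parameter */
    #else
        return DGifOpenFileName(path);             /* 4.x and earlier */
    #endif
    }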

How does one properly reconcile these pressures? I use a small set of practice rules I believe are simple and effective, well illustrated by the way I apply them to giflib. I’m writing about them in public because I think they generalize.

Continue reading

Mar 29

Ugliest…repository…conversion…ever

Blogging has been light lately because I’ve been up to my ears in reposurgeon’s most serious challenge ever. Read on for a description of the ugliest heap of version-control rubble you are ever likely to encounter, what I’m doing to fix it, and why you do in fact care – because I’m rescuing the history of one of the defining artifacts of the hacker culture.

Continue reading

Jan 16

Dragging Emacs forward

This is a brief heads-up that the reason I’ve been blog-silent lately is that I’m concentrating hard on a sprint with what I consider a large payoff: getting the Emacs project fully converted to git. In retrospect, choosing Bazaar as its DVCS was a mistake that has imposed unnecessary friction costs on a lot of contributors. RMS gets this and we’re moving.

I’m also talking with RMS about the possibility that it’s time to shoot Texinfo through the head and go with a more modern, Web-friendly master format. Oh, and time to abolish info entirely in favor of HTML. He’s not entirely convinced yet of this, but he’s listening.

Continue reading