Oct 20

Building the perfect beast

I’ve attempted to summarize the discussion of build options for the repository-surgery machine. You should see a link at the top of the page; if not, it’s here.

I invite all the commenters who have shown an interest to critique these build proposals. Naturally, I’d like to make sure we have a solid parts list with no spec conflicts before we start spending money and time to build this thing.

Continue reading

Oct 18

Black magic and the Great Beast

Something of significance to the design discussion for the Great Beast occurred today.

I have finally – finally! – achieved significant insight into the core merge code, the “black magic” section of cvs-fast-export. If you look in merge.c in the repo head version you’ll see a bunch of detailed comments that weren’t there before. I feel rather as Speke and Burton must have when, after weeks of hacking their way through the torrid jungles of darkest Africa, they finally glimpsed the source of the Nile…

Continue reading

Oct 17

Spending the “Help Stamp Out CVS In Your Lifetime” fund

I just shipped cvs-fast-export 1.21, much improved and immensely faster than it was two weeks ago. Thus ends one of the most intense sieges of down-and-dirty frenzied hacking that I’ve enjoyed in years.

Now it comes time to think about what to do with the Help Stamp Out CVS In Your Lifetime fund, which started with John D. Bell snarking epically about my (admittedly) rather antiquated desktop machine and mushroomed into an unexpected pile of donations.

I said I intend to use this machine to wander around the net hunting CVS repositories to extinction, and I meant it. If not for the demands of the large data sets this involves (like the 11 gigabytes of NetBSD CVS I just rsynced), I could have poked along with my existing machine for a good while longer.

For several reasons, including wanting those who generously donated to be in on the fun, I’m now going to open a discussion on how best to spend that money. A&D regular Susan Sons (aka HedgeMage) built herself a super-powerful machine this last February, and I think her hardware configuration is sound in essentials, so that build (“Tyro”) will be a starting point. But that was eight months ago – it may be that some of those choices could be improved now, and if so I trust the regulars here will have clues to that.

I’ll start by talking about design goals and budget.

Continue reading

Oct 16

A low-performance mystery: Sometimes you gotta simplify

This series of posts is increasingly misnamed, as there is not much mystery left about cvs-fast-export’s performance issues and it is now blazingly, screamingly, bat-out-of-hell fast. As in, both the threaded and unthreaded versions convert the entire history of groff (15593 CVS deltas in 1549 files) in 13 seconds flat. That would be about 10K CVS commits per minute, sustained; in practice the throughput will probably fall off a bit on very large repositories.

I achieved the latest doubling in speed by not succumbing to the temptation to overengineer – a trap that lies in wait for all clever hackers. Case study follows.

Continue reading

Oct 14

A low-performance mystery: the adventure continues

The mystery I described two posts back has actually been mostly solved (I think) but I’m having a great deal of fun trying to make cvs-fast-export run even faster, and my regulars are not only kibitzing with glee but have even thrown money at me so I can upgrade my PC and run tests on a machine that doesn’t resemble (as one of them put it) a slack-jawed yokel at a hot-dog-eating contest.

Hey, a 2.66GHz Intel Core 2 Duo with 4GB was hot shit when I bought it, and because I avoid bloatware (my window manager is i3) it has been sufficient unto my needs up to now. I’m a cheap bastard when it comes to hardware; I tend to hold onto it until I actually need to upgrade. This is me loftily ignoring the snarking from the peanut gallery, except from the people who actually donated money to the Help Stamp Out CVS In Your Lifetime hardware fund.

(For the rest of you, the PayPal and Gratipay buttons should be clearly visible to your immediate right. Just sayin’…)

Ahem. Where was I? Yes. The major mystery – the unexplained slowdown in stage 3 of the threaded version – appears to have been solved. It was due to a glibc feature: if you link with threads support, glibc tries to detect the use of threads and takes thread locks in stdio to make it safe. Which slows it down.
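For those who want to see the shape of the fix: POSIX provides _unlocked stdio variants that let a hot loop take the stream lock once instead of once per character. What follows is a minimal sketch of that idiom, not the actual cvs-fast-export patch; count_lines() is an invented example function.

```c
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>

/* Illustrative sketch, not the cvs-fast-export patch itself.
 * Count newlines without paying a mutex acquisition per getc():
 * flockfile() takes the stream lock once for the whole loop, and
 * getc_unlocked() then skips the per-character locking that a
 * threaded link otherwise pulls in. */
static long count_lines(FILE *fp)
{
    long lines = 0;
    int c;

    flockfile(fp);
    while ((c = getc_unlocked(fp)) != EOF)
        if (c == '\n')
            lines++;
    funlockfile(fp);
    return lines;
}
```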

Continue reading

Oct 12

A low-performance mystery, part deux

Well, the good news is, I get to feel wizardly this morning. Following sensible advice from a couple of my regulars, I rebuilt my dispatcher to use threads allocated once at start time, each looping until the list of masters is exhausted.

78 LOC. Fewer mutexes. And it worked correctly the first time I ran it. W00t – looks like I’ve got the hang (non-hang?) of this threads thing.
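For the curious, here is roughly the shape of such a dispatcher – a hedged sketch with invented names (worklist, analyze_master), not the real cvs-fast-export code. A fixed set of threads is started once; each atomically claims the next unprocessed master until the list runs dry.

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical illustration: not the real cvs-fast-export dispatcher. */
struct master;                                /* one CVS master (opaque here) */
extern void analyze_master(struct master *m); /* stage-1 digestion */

static struct master **worklist;   /* all masters, filled in before start */
static size_t nmasters;
static size_t next_item;           /* index of next unclaimed master */
static pthread_mutex_t work_lock = PTHREAD_MUTEX_INITIALIZER;

/* Each worker loops, claiming one master at a time until none remain.
 * The only shared mutable state is next_item, guarded by one mutex. */
static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        struct master *m;

        pthread_mutex_lock(&work_lock);
        if (next_item >= nmasters) {
            pthread_mutex_unlock(&work_lock);
            return NULL;           /* list exhausted: thread exits */
        }
        m = worklist[next_item++];
        pthread_mutex_unlock(&work_lock);

        analyze_master(m);         /* compute + I/O, no locks held */
    }
}
```

The single mutex guards only the claim of the next index, so lock contention stays negligible no matter how long the per-master analysis runs.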

The bad news is, threaded performance is still atrocious in exactly the same way. Looks like thread-spawn overhead wasn’t a significant contributor.

In truth, I was expecting this result. I think my regulars were right to attribute this problem to cache- and locality-busting on every level from processor L1 down to the disks. I believe I’m starting to get a feel for this problem from watching the performance variations over many runs.

I’ll profile, but I’m sure I’m going to see cache misses go way up in the threaded version, and if I can find a way to meter the degree of disk thrashing I won’t be even a bit surprised to see that climb as well.

The bottom line here seems to be that if I want better threaded performance out of this puppy I’m going to have to at least reduce its working set a lot. Trouble is, I’m highly doubtful – given what it has to do during delta assembly – that this is actually possible. The CVS snapshots and deltas it has to snarf into memory to do the job are intrinsically both large and of unpredictably variable size.

Maybe I’ll have an inspiration, but… Keith Packard, who originally wrote that code, is a damn fine systems hacker who is very aware of performance issues; if he couldn’t write it with a low footprint in the first place, I don’t judge my odds of second-guessing him successfully to be very good.

Ah well. It’s been a learning experience. At least now I can say of multi-threaded application designs “Run! Flee! Save yourselves!” from a position of having demonstrated a bit of wizardry at them myself.

UPDATE: One of my regulars found a minor bug in the mutex handling that cost some performance. Alas, fixing this didn’t have any impact above the noise level of my profiling. Also, I managed to unify the threaded and non-threaded dispatchers; the LOC specific to threading is now down to about 30.

Oct 11

A low-performance mystery

OK, I’ll admit it. I’m stumped by a software-engineering problem.

This is not a thing that happens often, but I’m in waters relatively unknown to me. I’ve been assiduously avoiding multi-threaded programming for a long time, because solving deadlock, starvation, and insidious data-corruption-by-concurrency problems isn’t really my idea of fun. Other than one minor brush with it handling PPS signals in GPSD, I’ve managed before this to avoid any thread-entanglement at all.

But I’m still trying to make cvs-fast-export run faster. About a week ago an Aussie hacker named David Leonard landed a brilliant patch series in my mailbox. Familiar story: has a huge, gnarly CVS repo that needs converting, got tired of watching it grind for days, went in to speed it up, found a way. In fact he applied a technique I’d never heard of (Bloom filtering) to flatten the worst hot spot, an O(n**3) pass used to compute parent/child links in the export code. But it still needed to be faster.
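For readers who haven’t met the technique: a Bloom filter is a bit array probed through k hash functions. It answers “definitely absent” cheaply and “maybe present” with a tunable false-positive rate, which lets you skip most of the expensive exact comparisons. A toy sketch follows – invented code for illustration, not David Leonard’s patch.

```c
#include <stdint.h>

/* Toy Bloom filter, illustrative only. "Maybe present" can be a false
 * positive; "absent" is always right, which is what lets it cheaply
 * rule out most candidates before any expensive comparison. */
#define BLOOM_BITS (1u << 20)   /* 1 Mbit table */
#define BLOOM_K    3            /* number of probes per key */

static uint8_t bloom[BLOOM_BITS / 8];

static uint32_t hash(const char *key, uint32_t seed)
{
    uint32_t h = seed ^ 2166136261u;      /* seeded FNV-1a variant */
    while (*key)
        h = (h ^ (uint8_t)*key++) * 16777619u;
    return h % BLOOM_BITS;
}

static void bloom_add(const char *key)
{
    for (uint32_t i = 0; i < BLOOM_K; i++) {
        uint32_t bit = hash(key, i);
        bloom[bit / 8] |= 1u << (bit % 8);
    }
}

/* Returns 0 only if key was never added; 1 means "possibly present". */
static int bloom_maybe_contains(const char *key)
{
    for (uint32_t i = 0; i < BLOOM_K; i++) {
        uint32_t bit = hash(key, i);
        if (!(bloom[bit / 8] & (1u << (bit % 8))))
            return 0;
    }
    return 1;
}
```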

After some discussion we decided to tackle parallelizing the code in the first stage of analysis. This works – separately – on each of the input CVS masters, digesting them into in-core revision lists and generating whole-file snapshots for each CVS delta; later these will become the blobs in the fast-export stream. Then there’s a second stage that merges these per-file revision lists, and a third stage that exports the merged result.

Here’s more detail, because you’ll need it to understand the rest. Each CVS master consists of a sequence of deltas (sequences of add-line and delete-line operations) summing up to a sequence of whole-file states (snapshots – eventually these will become blobs in the translated fast-import stream). Each delta has an author, a revision date, and a revision number (like 1.3 or 1.2.1.1). Implicitly they form a tree. At the top of the file is a tag table mapping names to revision numbers, and some other relatively unimportant metadata.

The goal of stage 1 is to digest each CVS master into an in-core tree of metadata and a sequence of whole-file snapshots, with unique IDs in the tree indexing the snapshots. The entire collection of masters is made into a linked list of these trees; this is passed to stage 2, where black magic that nobody understands happens.
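To make that concrete, here’s a hypothetical sketch of the kind of in-core shapes stage 1 hands to stage 2. All names and layouts are mine, invented for illustration; they are not cvs-fast-export’s actual declarations.

```c
#include <time.h>

/* Hypothetical in-core shapes for stage 1 output; names are invented,
 * not cvs-fast-export's actual declarations. */

struct cvs_number {                  /* a revision number like 1.2.1.1 */
    int ndigits;
    short digits[16];
};

struct revision {                    /* one delta's metadata */
    struct cvs_number number;
    time_t date;
    char *author;
    unsigned long snapshot_id;       /* unique ID indexing the snapshot */
    struct revision *parent;         /* the implicit tree structure */
    struct revision *next_sibling;   /* branches off the same parent */
};

struct master_tree {                 /* digest of one CVS master */
    char *filename;
    struct revision *root;           /* root of the revision tree */
    struct master_tree *next;        /* linked list handed to stage 2 */
};
```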

This first stage seems like a good target for parallelization because the analysis of each master consists of lumps of I/O separated by irregular stretches of compute-intensive data-shuffling in core. In theory, if the program were properly parallelized, it would seldom actually block on an I/O operation; instead, while any one thread was waiting on I/O, the data shuffling for other masters would continue. The program would get faster – possibly much faster, depending on the time distribution of I/O demand.

Well, that’s the theory, anyway. Here’s what actually happened…

Continue reading

Oct 09

Implementing re-entrant parsers in Bison and Flex

In days of yore, Yacc and Lex were two of the most useful tools in a Unix hacker’s kit. The way they interfaced to client code was, however, pretty ugly – global variables and magic macros hanging out all over the place. Their modern descendants, Bison and Flex, have preserved that ugliness in order to be backward-compatible.

That rebarbative old interface generally broke a lot of rules about program structure and information hiding that we now accept as givens (to be fair, most of those had barely been invented at the time it was written in 1970 and were still pretty novel). It becomes a particular problem if you want to run multiple instances of your generated parser (or, heaven forfend, multiple parsers with different grammars) in the same binary without having them interfere with each other.

But it can be done. I’m going to describe how because (a) it’s difficult to extract from the documentation, and (b) right now (that is, using Bison 3.0.2 and Flex 2.5.35) the interface is in fact slightly broken and there’s a workaround you need to know.
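As a preview of the recipe, here is a minimal skeleton of the declarations involved, assuming Bison’s pure-parser mode and Flex’s reentrant/bison-bridge options. The grammar and rules are placeholders; the full story, including the workaround, is in the post proper.

```
/* parser.y -- skeleton of a re-entrant Bison parser (illustrative) */
%define api.pure full
%param {yyscan_t scanner}   /* extra argument for both yyparse and yylex */

%code requires {
    /* Forward-declare yyscan_t to break the circularity between the
     * Bison- and Flex-generated headers. */
    typedef void *yyscan_t;
}
%code {
    #include "lexer.h"      /* Flex-generated header */
    void yyerror(yyscan_t scanner, const char *msg);
}

%token NUMBER
%%
input : %empty | input NUMBER ;
%%
```

And the matching scanner:

```
/* lexer.l -- the matching re-entrant Flex scanner (illustrative) */
%option reentrant bison-bridge noyywrap
%{
#include "parser.h"         /* Bison-generated header, defines YYSTYPE */
%}
%%
[0-9]+   { return NUMBER; }
[ \t\n]  ;
%%
```

With those options each parser instance carries its own state: you create a scanner with yylex_init(&scanner), hand it to yyparse(scanner), and dispose of it with yylex_destroy(scanner), so several instances – or several grammars – can coexist in one binary.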

Continue reading

Oct 05

In which I have reason to sound like Master Po

This landed in my mailbox yesterday. I reproduce it verbatim except for the sender’s name.

> Dear authors of the RFC 3092,
>
> I am writing this email on behalf of your Request For Comment “Etymology of
> ‘Foo’.” We are currently learning about the internet organizations that set
> the standards of the internet and our teacher tasked us with finding an RFC
> that was humorous. Me and my two friends have found the “Etymology of
> ‘Foo’” and have found it to be almost as ridiculous as the RFC about
> infinite monkeys; however, we then became quite curious as to why you wrote
> this. Obviously, it is wrote for humor as not everything in life can be
> serious, but did your manager task you to write this? Are you a part of an
> organization in charge of writing humorous RFC’s? Are you getting paid to
> write those? If so, where do you work, and how may we apply? Any comments
> on these inquiries would be greatly appreciated and thank you in advance.
>
> Sincerely,
>
> XXXXXXXXXXXXXX, confused Networking student

I felt as though this seriously demanded a ha-ha-only-serious answer – and next thing you know I was channeling Master Po from the old Kung Fu TV series. Reply follows…

Continue reading

Oct 03

RFC for a better C calendaring library

In the process of working on my Time, Clock, and Calendar Programming In C document, I have learned something sad but important: the standard Unix calendar API is irremediably broken.

The document lists a lot of consequences of the breakage, but here I want to zero in on what I think is the primary cause. That is: the standard struct tm (a) fails to be an unambiguous representation of time, and (b) violates the SPOT (Single Point of Truth) design rule. It has some other, more historically contingent problems as well, but these two (and especially (a)) are the core of its numerous failure modes.
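To see the ambiguity bite, consider the fall-back hour of a DST transition: the same wall-clock reading names two distinct instants, and struct tm can only distinguish them through the tri-state tm_isdst flag (meanwhile tm_wday and tm_yday duplicate information the other fields already determine – the SPOT violation). A minimal demonstration of my own devising, assuming a system with the America/New_York zoneinfo entry:

```c
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    /* Illustrative demo: 2014-11-02 01:30 in US Eastern time happened
     * twice, once in EDT and again an hour later in EST. */
    setenv("TZ", "America/New_York", 1);
    tzset();

    struct tm when = {0};
    when.tm_year = 2014 - 1900;   /* historical quirk: years since 1900 */
    when.tm_mon  = 11 - 1;        /* historical quirk: months from zero */
    when.tm_mday = 2;
    when.tm_hour = 1;
    when.tm_min  = 30;

    when.tm_isdst = 1;            /* "the daylight-time 1:30" */
    printf("DST reading:      %ld\n", (long)mktime(&when));

    when.tm_isdst = 0;            /* "the standard-time 1:30" */
    printf("standard reading: %ld\n", (long)mktime(&when));
    return 0;
}
```

The two readings differ by 3600 seconds; nothing in the structure other than that one flag tells you which instant was meant.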

These problems cannot be solved in a backwards-compatible way. I think it’s time for a clean-sheet redesign. In the remainder of this post I’ll develop what I think the premises of the design ought to be, and some consequences.

Continue reading

Oct 03

48-hour release heads-up for Time-Clock-Calendar HOWTO

I’ve been gifted with a lot of help on my draft of Time, Clock, and Calendar Programming In C. I think it’s almost time to ship 1.0, and plan to do so this weekend. Get your last-minute fixes in now!

I will of course continue to accept corrections and additions after 1.0. Thanks to everyone who contributed. My blog and G+ followers were very diligent in spotting typos, helping fill in and correct standards history, and pointing out the more obscure gotchas in the API.

What I’ve discovered is that the Unix calendar-related API is a pretty wretched shambles. Which leads directly to the topic of my next blog entry…

Oct 02

Press silence, black privilege, and unintended consequences

A provocative article at the conservative blog Hot Air comments on a pattern in American coverage of violent interracial crimes. When the perps are white and the victims are black, we can expect the press coverage to be explicit about it, with predictable assumption of racist motivations. On the other hand, when the perps are black and the victims are white, the races of all parties are normally suppressed and no one dares speak the r-word.

If I were a conservative, or a racist, I’d go off on some aggrieved semi-conspiratorial rant here. Instead I’ll observe what Hot Air did not: that the race of violent black criminals is routinely suppressed in news coverage even in the much more common case that their victims are also black. Hot Air is over-focusing here.

That said, Hot Air seems to have a separate and valid point when it notes that white victims are most likely to have their race suppressed from the reporting when the criminals are black – especially if there was any hint of racist motivation. There is an effective taboo against truthfully reporting incidents in which black criminals yell racial epithets and threats at white victims during the commission of street crimes. If not for webbed security-camera footage we’d have no idea how depressingly common this seems to be – the press certainly won’t cop to it in their print stories.

No conspiracy theory is required to explain the silence here. Reporters and editors are nervous about being thought racist, or (worse) having “anti-racist” pressure groups demonstrating on their doorsteps. The easy route to avoiding this is a bit of suppressio veri – not lying, exactly, but not uttering facts that might be thought racially inflammatory.

The pattern of suppression is neatly explained by the following premises: Any association of black people with criminality is inflammatory. Any suggestion that black criminals are motivated by racism to prey on white victims is super-inflammatory. And above all, we must not inflame. Better to be silent.

I believe this silence is a dangerous mistake with long-term consequences that are bad for everyone, and perhaps worst of all for black people.

Continue reading

Sep 30

Underestimate Terry Pratchett? I never have.

Neil Gaiman writes “On Terry Pratchett, he is not a jolly old elf at all.” It’s worth reading.

I know that what Neil Gaiman says here is true, because I’ve known Terry, a little. Not as well as Neil does; we’re not that close, though he has been known to answer my email. But I did have one experience back in 2003 that would have forever dispelled any notion of Terry as a mere jolly elf, assuming I’d been foolish enough to entertain it.

I taught Terry Pratchett how to shoot a pistol.

(We were being co-guests of honor at Penguicon I at the time. This was at the first Penguicon Geeks with Guns event, at a shooting range west of Detroit. It was something Terry had wanted to do for a long time, but opportunities in Britain are quite limited.)

This is actually a very revealing thing to do with anyone. You learn a great deal about how the person handles stress and adrenalin. You learn a lot about their ability to concentrate. If the student has fears about violence, or self-doubt, or masculinity/femininity issues, that stuff is going to tend to come out in the student’s reactions in ways that are not difficult to read.

Terry was rock-steady. He was a good shot from the first three minutes. He listened, he followed directions intelligently, he always played safe, and he developed impressive competence at anything he was shown very quickly. To this day he’s one of the three or four best shooting students I’ve ever had.

That is not the profile of anyone you can safely trivialize as a jolly old elf. I wasn’t inclined to do that anyway; I’d known him on and off since 1991, which was long enough that I believe I got a bit of a look-in before he fully developed his Famous Author charm defense.

But it was teaching Terry pistol that brought home to me how natively tough-minded he really is. After that, the realism and courage with which he faced his Alzheimer’s diagnosis came as no surprise to me whatsoever.

Sep 29

Announcing: Time, Clock, and Calendar Programming In C

The C/UNIX library support for time and calendar programming is a nasty mess of historical contingency. I have grown tired of having to re-learn its quirks every time I’ve had to deal with it, so I’m doing something about that.

Announcing Time, Clock, and Calendar Programming In C, a document which attempts to chart the historical clutter (so you can ignore it once you know why it’s there) and explain the mysteries.
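For flavor, here is a micro-sample of the kind of quirk the document charts – this is my illustration, not an excerpt from it:

```c
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);

    /* Quirk #1: localtime() hands back a pointer to static storage
     * that the next call overwrites; threaded code needs localtime_r(). */
    struct tm local;
    localtime_r(&now, &local);

    /* Quirks #2 and #3: tm_year counts from 1900 and tm_mon from zero,
     * so naively printing September 29, 2014 yields "114-8-29". */
    printf("%d-%02d-%02d\n",
           local.tm_year + 1900, local.tm_mon + 1, local.tm_mday);
    return 0;
}
```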

What I’ve released is an 0.9 beta version. My hope is that it will rapidly attract some thoroughgoing reviews so I can release a 1.0 in a week or so. More than that, I would welcome a subject matter expert as a collaborator.

Sep 29

Shellshock, Heartbleed, and the Fallacy of False Prominence

In the wake of the Shellshock bug, I guess I need to repeat in public some things I said at the time of the Heartbleed bug.

The first thing to notice here is that these bugs were found – and were findable – because of open-source scrutiny.

There’s a “things seen versus things unseen” fallacy here that gives bugs like Heartbleed and Shellshock false prominence. We don’t know – and can’t know – how many far worse exploits lurk in proprietary code known only to crackers or the NSA.

What we can project based on other measures of differential defect rates suggests that, however imperfect “many eyeballs” scrutiny is, “few eyeballs” or “no eyeballs” is far worse.

I’m not handwaving when I say this; we have statistics from places like Coverity that do defect-rate measurements on both open-source and proprietary closed-source products, we have academic research like the UMich fuzz papers, we have CVE lists for Internet-exposed programs, we have multiple lines of evidence.

Everything we know tells us that while open source’s security failures may be conspicuous, its successes, though invisible, are far larger.

Sep 28

Commoditization, not open source, killed Sun Microsystems

The patent-troll industry is in full panic over the consequences of the Alice vs. CLS Bank decision. While reading up on the matter, I ran across the following claim by a software patent attorney:

“As Sun Microsystems proved, the quickest way to turn a $5 billion company into a $600 million company is to go open source.”

I’m not going to feed this troll traffic by linking to him, but he’s promulgating a myth that must be dispelled. Trying to go open source didn’t kill Sun; hardware commoditization killed Sun. I know this because I was at ground zero when it killed a company that was aiming to succeed Sun – and, until the dot-com bust, looked about to manage it.

Continue reading

Sep 27

Program Provability and the Rule of Technical Greed

In a recent discussion on G+, a friend of mine made a conservative argument for textual over binary interchange protocols on the grounds that programs always need to be debugged, and thus readability of the protocol streams by humans trumps the minor efficiency gains from binary packing.
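To make the debuggability half of that argument concrete before going further: here is the same toy record in a textual and a packed-binary encoding (both formats invented for illustration). The textual form costs a few dozen extra bytes; in exchange, any packet dump or log of it is readable by eye.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Invented toy formats: the same sensor report encoded both ways.
 * The textual form costs more bytes on the wire but can be read,
 * grepped, and diffed by a human in a 3 AM debugging session. */
int main(void)
{
    /* Textual: self-describing, eyeball-parseable in any packet dump. */
    char text[64];
    int n = snprintf(text, sizeof text,
                     "REPORT dev=42 temp=21.5 ts=1412294400\n");

    /* Binary: 14 bytes instead of ~38, but opaque without the spec. */
    uint8_t packed[14];
    uint16_t dev = 42;
    float temp = 21.5f;
    uint64_t ts = 1412294400;
    memcpy(packed, &dev, 2);
    memcpy(packed + 2, &temp, 4);
    memcpy(packed + 6, &ts, 8);

    printf("textual: %d bytes, binary: %zu bytes\n", n, sizeof packed);
    return 0;
}
```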

I agree with this argument; I’ve made it often enough myself, notably in The Art of Unix Programming. But it was something his opponent said that nudged at me. “Provable programs are the future,” he declaimed, pointing at seL4 and CompCert as recent examples of formal verification of real-world software systems. His implication was clear: we’re soon going to get so much better at turning specifications into provably correct implementations that debuggability will cease to be a strong argument for protocols that can be parsed by a Mark I Eyeball.

Oh foolish, foolish child, that wots not of the Rule of Technical Greed.

Continue reading