Nov 25

Like a football player, head down

OK, this is interesting: From some tabloid, we have the following quote:

The unidentified witness wrote that the 18-year-old Brown “has his arms out with attitude,” while “The cop just stood there.” The witness added, “Dang if that kid didn’t start running right at the cop like a football player. Head down.”

This is exactly how I reconstructed the event in “This picture tells a shooting story”. I said: the reason I’m sure Brown was moving is the extreme torso angle suggested by the lack of exit wounds on the back. A human trying to do that standing still would overbalance and fall, which is why I think he was running or lunging when he took the bullets.

The witness said “arms out with attitude”. I said “with his right arm stretched forward [...] probably while Brown was grabbing for Wilson or the pistol with his right hand.”

So much for “Hands up – don’t shoot.” It’s as I thought: Brown autodarwinated, bull-rushing an armed policeman he had already injured once.

UPDATE: I failed to make clear before that this account was part of the evidence dump from the grand jury proceedings, not just some random witness the tabloid turned up.

Nov 16

SRC 0.9: Ready for the less adventurous now

I just shipped SRC 0.9, and you no longer need to be adventurous to try it. It has a regression-test suite and real users.

Remarkably, SRC has had real users since 0.3, two days after it was born. Even more remarkably, the count of crash reports and botched operations from those users is zero. Zero. This is what you can gain from keeping code simple – I have had a couple of bug reports, but they were both about filename quoting in the fast-export code, which is not a central feature.
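For the curious, the corner those bug reports lived in is quoting filenames for a git fast-import/fast-export stream. Here’s a toy sketch of the idea – my illustration for this post, not SRC’s actual code:

    # Toy sketch (not SRC's actual code): quote a path for a git
    # fast-import/fast-export stream when it contains characters that
    # would break the unquoted form.
    def quote_path(path):
        """Return the path in C-style quoted form if it needs it."""
        if path.startswith('"') or any(c in path for c in '"\\\n'):
            escaped = (path.replace("\\", "\\\\")
                           .replace('"', '\\"')
                           .replace("\n", "\\n"))
            return '"' + escaped + '"'
        return path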

Next, I’ll make a couple of what I think are important points about writing for zero defects. Then I’ll talk about a subtle issue or two in the design, and our one known behavioral glitch.

Continue reading

Nov 09

SRC 0.3 – ready for the adventurous

My low-power, low-overhead version control system, SRC, is no longer just a stake in the ground. It is still a determinedly file-oriented wrapper around RCS (and will stay that way) but every major feature except branching is implemented and it has probably crossed the border into being useful for production.

The adventurous can and should try it. You’re safe if it blows up because the histories are plain RCS files. But, as previously noted, it’s RCS behind an interface that’s actually pleasant to use. (You Emacs VC-mode users pipe down; I’m going to explain why you care in a bit.)

The main developments today include a fairly complete regression-test suite (already paying large dividends in speeding up progress) and a “src status” command that will look very familiar to Subversion/git/hg users. There’s a hack behind that status command I’m rather proud of; I’ll talk about that, too.

Continue reading

Nov 07

I wrote a version-control system today

I wrote a version-control system today. Yes, an entire VCS. Took me 14 hours.

Yeah, you’re looking at me like I’m crazy. “Why,” you ask, quite reasonably, “would you want to do a thing like that? We’re not short of powerful VCSes these days.”

That is true. But I got to thinking, early this morning, about the fact that I haven’t been able to settle on just one VCS. I use git for most things, but there’s a use case git doesn’t cover. I have some document directories full of things like HOWTOs, each with its own history separate from the others. Changes in them are not correlated, and I want to be able to move them around when I reorganize those directories.

What have I been using for this? Why, RCS. The ancient Revision Control System, second oldest VCS in existence and clinging tenaciously to this particular niche. It does single-file change histories pretty well, but its UI is horrible. Worse than git’s, which is a pretty damning comparison.

Then I got to thinking. If I were going to design a VCS to do this particular single-file, single-user job, what would it look like? Hm. Sequential integer revision numbers, like Subversion’s, or like Mercurial uses locally. Lockless operation. Modern CLI design. Built-in command help. Interchange with other VCSes via git import streams. This sounds like it could be nice.

Then, the idea that made it inevitable. “I bet,” I thought, “I could write this thing as a Python wrapper around RCS tools. Use them for delta storage but hide all the ugly parts.”
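To give a flavor of what that wrapper idea looks like, here’s a minimal sketch – illustrative only, not SRC’s actual code – of shelling out to the RCS ci and co tools and hiding them behind friendlier functions:

    # Illustrative sketch of the wrapper idea -- not SRC's actual code.
    # Use the RCS tools for delta storage; hide their interface.
    import subprocess

    def rcs_commit(filename, message):
        """Check the working file into its ,v history; -l keeps it
        checked out and locked so you can keep editing."""
        subprocess.run(["ci", "-l", "-m" + message, "-t-" + message, filename],
                       check=True, capture_output=True)

    def rcs_checkout(filename, revision=None):
        """Retrieve a revision from the ,v history (-f overwrites)."""
        cmd = ["co", "-f"]
        if revision:
            cmd.append("-r" + revision)
        subprocess.run(cmd + [filename], check=True, capture_output=True)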

Thus, SRC. Simple Revision Control, v0.1.

Continue reading

Nov 05

Chipping away at CVS

I’ve just shipped a new version of cvs-fast-export, 1.26. It speeds the tool up more, more, more – cranking through 25 years and 113300 commits of Emacs CVS history, for example, in 2:48. That’s 672 commits a second, for those of you in the cheap seats.

But the real news this time is a Python wrapper called ‘cvsconvert’ that takes a CVS repository, runs a conversion to Git using cvs-fast-export, and then – using CVS for checkouts – examines the CVS and git repositories side by side looking for translation glitches. It checks every branch tip and every tag.
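Roughly, the comparison works like this. Here’s a simplified sketch of the approach – not cvsconvert’s actual code; the paths, branch handling, and helper names are invented for illustration:

    # Simplified sketch (not the real cvsconvert): check out matching
    # branch tips from CVS and from the git conversion, then diff them.
    import subprocess, filecmp

    def git_branches(gitrepo):
        out = subprocess.run(["git", "-C", gitrepo, "for-each-ref",
                              "--format=%(refname:short)", "refs/heads"],
                             check=True, capture_output=True, text=True)
        return out.stdout.split()

    def compare_branch(cvsroot, module, gitrepo, branch, workdir):
        cvsco = f"{workdir}/cvs-{branch}"
        gitco = f"{workdir}/git-{branch}"
        # CVS checkout of the branch ("master" stands in for the trunk here)
        rev = [] if branch == "master" else ["-r", branch]
        subprocess.run(["cvs", "-d", cvsroot, "checkout", "-d", cvsco]
                       + rev + [module], check=True)
        # git checkout of the same branch into a scratch work tree
        subprocess.run(["git", "-C", gitrepo, "worktree", "add", gitco, branch],
                       check=True)
        # Shallow top-level comparison, for brevity of the sketch
        diff = filecmp.dircmp(cvsco, gitco, ignore=["CVS", ".git"])
        return diff.diff_files + diff.left_only + diff.right_only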

Running this on several of my test repos I’ve discovered some interesting things. One such discovery is of a bug in CVS. (Yeah, I know, what a shock…)

Continue reading

Oct 31

Cognitive disinhibition: not the whole story of genius

Here’s an interesting article, with a stupid and misleading title, on the role of what the author calls “cognitive disinhibition” – a fancy term for “allowing oneself to notice what others miss” – in enabling creative genius.

While in many ways I could be a poster child for Simonton’s thesis (and I’ll get to those) I also think there are some important things missing from his discussion, which is why I’m blogging about it. The most crucial problem is that his category of “madness” is not sharp enough. I know how to fine it down in a way that I think sheds considerable light on what he is trying to analyze.

Continue reading

Oct 29

When hackers grow old

Lately I’ve been wrestling with various members of an ancient and venerable open-source development group which I am not going to name, though people who regularly follow my adventures will probably guess which one it is by the time I’m done venting.

Why is it so freaking hard to drag some people into the 21st century? Sigh…

I’m almost 56, an age at which a lot of younger people expect me to issue semi-regular salvos of get-off-my-lawn ranting at them. But no – I find that, especially in technical contexts, I am far more likely to become impatient with my age peers.

A lot of them really have become grouchy, hidebound old farts. And, alas, it not infrequently falls to me to be the person who barges in and points out that practices well-adapted for 1995 (or, in the particular case I’m thinking of, 1985) are … not good things to hold on to decades later.

Why me? Because the kids have little or no cred with a lot of my age peers. If anyone’s going to get them to change, it has to be someone who is their peer in their own perception. Even so, I spend a lot more time than seems just or right fighting inertia.

Young people can be forgiven for lacking a clue. They’re young. Young means little experience, which often leads to unsound judgment. It’s more difficult for me to forgive people who have been around the track often enough that they should have a clue, but are so attached to The Way It’s Always Been Done that they can’t see what is in front of their freaking noses.

Continue reading

Oct 24

Moving the NetBSD repository

Some people on the NetBSD tech-repository list have wondered why I’ve been working on a full NetBSD repository conversion without a formal request from NetBSD’s maintainers that I do so.

It’s a fair question. An answer to it involves both historical contingency and some general issues about moving and mirroring large repositories. Because of the accident that a lot of people have recently dropped money on me in part to support an attack on this problem, I’m going to explain both in public.

Continue reading

Oct 20

Building the perfect beast

I’ve attempted to summarize the discussion of build options for the repository-surgery machine. You should see a link at the top of the page; if not, it’s here.

I invite all the commenters who have shown an interest to critique these build proposals. Naturally, I’d like to make sure we have a solid parts list with no spec conflicts before we start spending money and time to build this thing.

Continue reading

Oct 18

Black magic and the Great Beast

Something of significance to the design discussion for the Great Beast occurred today.

I have finally – finally! – achieved significant insight into the core merge code, the “black magic” section of cvs-fast-export. If you look in merge.c in the repo head version you’ll see a bunch of detailed comments that weren’t there before. I feel rather as Speke and Burton must have when after weeks of hacking their way through the torrid jungles of darkest Africa they finally glimpsed the source of the Nile…

Continue reading

Oct 17

Spending the “Help Stamp Out CVS In Your Lifetime” fund

I just shipped cvs-fast-export 1.21, much improved and immensely faster than it was two weeks ago. Thus ends one of the most intense sieges of down-and-dirty frenzied hacking that I’ve enjoyed in years.

Now it comes time to think about what to do with the Help Stamp Out CVS In Your Lifetime fund, which started with John D. Bell snarking epically about my (admittedly) rather antiquated desktop machine and mushroomed into an unexpected pile of donations.

I said I intend to use this machine to wander around the net hunting CVS repositories to extinction, and I meant it. If not for the demands of the large data sets this involves (like the 11 gigabytes of NetBSD CVS I just rsynced) I could have poked along with my existing machine for a good while longer.

For several reasons, including wanting those who generously donated to be in on the fun, I’m now going to open a discussion on how to best spend that money. A&D regular Susan Sons (aka HedgeMage) built herself a super-powerful machine this last February, and I think her hardware configuration is sound in essentials, so that build (“Tyro”) will be a starting point. But that was eight months ago – it might be that some of the choices could be improved now, and if so I trust the regulars here will have clues to that.

I’ll start by talking about design goals and budget.

Continue reading

Oct 16

A low-performance mystery: Sometimes you gotta simplify

This series of posts is increasingly misnamed, as there is not much mystery left about cvs-fast-export’s performance issues and it is now blazingly, screamingly, bat-out-of-hell fast. As in, both the threaded and unthreaded versions convert the entire history of groff (15593 CVS deltas in 1549 files) in 13 seconds flat. That would be about 10K CVS commits per minute, sustained; in practice the throughput will probably fall off a bit on very large repositories.

I achieved the latest doubling in speed by not succumbing to the temptation to overengineer – a trap that lies in wait for all clever hackers. Case study follows.

Continue reading

Oct 14

A low-performance mystery: the adventure continues

The mystery I described two posts back has actually been mostly solved (I think) but I’m having a great deal of fun trying to make cvs-fast-export run even faster, and my regulars are not only kibitzing with glee but have even thrown money at me so I can upgrade my PC and run tests on a machine that doesn’t resemble (as one of them put it) a slack-jawed yokel at a hot-dog-eating contest.

Hey, a 2.66GHz Intel Core 2 Duo with 4GB was hot shit when I bought it, and because I avoid bloatware (my window manager is i3) it has been sufficient unto my needs up to now. I’m a cheap bastard when it comes to hardware; I tend to hold onto it until I actually need to upgrade. This is me loftily ignoring the snarking from the peanut gallery, except from the people who actually donated money to the Help Stamp Out CVS In Your Lifetime hardware fund.

(For the rest of you, the PayPal and Gratipay buttons should be clearly visible to your immediate right. Just sayin’…)

Ahem. Where was I? Yes. The major mystery – the unexplained slowdown in stage 3 of the threaded version – appears to have been solved. It seems to have been due to a glibc feature: if you link with threads support, glibc detects the use of threads and wraps stdio operations in locks to make them thread-safe. Which slows things down.

Continue reading

Oct 12

A low-performance mystery, part deux

Well, the good news is, I get to feel wizardly this morning. Following sensible advice from a couple of my regulars, I rebuilt my dispatcher to use threads allocated at start time and looping until the list of masters is exhausted.
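For illustration only, here’s the shape of that dispatcher pattern sketched in Python – the real code is C with pthreads, and the names here are mine: worker threads allocated once at start, each looping until the shared work list of masters runs dry.

    # Python analogue of the dispatcher pattern (the real code is C/pthreads).
    import queue, threading

    def run_stage1(masters, analyze, nthreads=4):
        """Digest each CVS master using a fixed pool of worker threads."""
        work = queue.Queue()
        for m in masters:
            work.put(m)

        def worker():
            while True:
                try:
                    master = work.get_nowait()
                except queue.Empty:
                    return            # work list exhausted: this thread exits
                analyze(master)       # parse one master into its in-core tree

        threads = [threading.Thread(target=worker) for _ in range(nthreads)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()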

78 LOC. Fewer mutexes. And it worked correctly the first time I ran it. W00t – looks like I’ve got the hang non-hang of this threads thing.

The bad news is, threaded performance is still atrocious in exactly the same way. Looks like thread-spawn overhead wasn’t a significant contributor.

In truth, I was expecting this result. I think my regulars were right to attribute this problem to cache- and locality-busting on every level from processor L1 down to the disks. I believe I’m starting to get a feel for this problem from watching the performance variations over many runs.

I’ll profile, but I’m sure I’m going to see cache misses go way up in the threaded version, and if I can find a way to meter the degree of disk thrashing I won’t be even a bit surprised to see that go way up too.

The bottom line here seems to be that if I want better threaded performance out of this puppy I’m going to have to at least reduce its working set a lot. Trouble is, I’m highly doubtful – given what it has to do during delta assembly – that this is actually possible. The CVS snapshots and deltas it has to snarf into memory to do the job are intrinsically both large and of unpredictably variable size.

Maybe I’ll have an inspiration, but…Keith Packard, who originally wrote that code, is a damn fine systems hacker who is very aware of performance issues; if he couldn’t write it with a low footprint in the first place, I don’t judge my odds of second-guessing him successfully are very good.

Ah well. It’s been a learning experience. At least now I can say of multi-threaded application designs “Run! Flee! Save yourselves!” from a position of having demonstrated a bit of wizardry at them myself.

UPDATE: One of my regulars found a minor bug in the mutex handling that cost some performance. Alas, fixing this didn’t have any impact above the noise level of my profiling. Also, I managed to unify the threaded and non-threaded dispatchers; the LOC specific to threading is now down to about 30.

Oct 11

A low-performance mystery

OK, I’ll admit it. I’m stumped by a software-engineering problem.

This is not a thing that happens often, but I’m in waters relatively unknown to me. I’ve been assiduously avoiding multi-threaded programming for a long time, because solving deadlock, starvation, and insidious data-corruption-by-concurrency problems isn’t really my idea of fun. Other than one minor brush with it handling PPS signals in GPSD, I’ve managed before this to avoid any thread-entanglement at all.

But I’m still trying to make cvs-fast-export run faster. About a week ago an Aussie hacker named David Leonard landed a brilliant patch series in my mailbox. Familiar story: he has a huge, gnarly CVS repo that needs converting, got tired of watching it grind for days, went in to speed it up, found a way. In fact he applied a technique I’d never heard of (Bloom filtering) to flatten the worst hot spot in the code, an O(n**3) pass used to compute parent/child links in the export code. But it still needed to be faster.
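For readers who, like me, hadn’t met Bloom filters before: the idea is a compact probabilistic set that can cheaply answer “definitely not present”, with occasional false positives but never false negatives, so most expensive exact checks can be skipped. A toy sketch follows – my illustration, not David’s actual patch (that lives in the C code):

    # Toy Bloom filter (illustrative only, not David Leonard's patch).
    class BloomFilter:
        def __init__(self, nbits=1 << 20, nhashes=4):
            self.nbits = nbits
            self.nhashes = nhashes
            self.bits = bytearray(nbits // 8)

        def _positions(self, item):
            for i in range(self.nhashes):
                yield hash((i, item)) % self.nbits

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def __contains__(self, item):
            # False means definitely absent; True means "probably present",
            # so fall back to the exact (expensive) check.
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(item))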

After some discussion we decided to tackle parallelizing the code in the first stage of analysis. This works – separately – on each of the input CVS masters, digesting them into in-core revision lists and generating whole-file snapshots for each CVS delta; later these will become the blobs in the fast-export stream. Then there’s a second stage that merges these per-file revision lists, and a third stage that exports the merged result.

Here’s more detail, because you’ll need it to understand the rest. Each CVS master consists of a sequence of deltas (sequences of add-line and delete-line operations) summing up to a sequence of whole-file states (snapshots – eventually these will become blobs in the translated fast-import stream). Each delta has an author, a revision date, and a revision number (like 1.3 or 1.2.1.1). Implicitly they form a tree. At the top of the file is a tag table mapping names to revision numbers, and some other relatively unimportant metadata.
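To make “summing up to whole-file states” concrete, here’s a toy sketch of applying such a delta to one snapshot to get the next. This illustrates the concept only, not RCS’s actual on-disk delta encoding:

    # Toy illustration: apply add-line / delete-line operations, in order,
    # to a snapshot (a list of lines) to produce the next snapshot.
    def apply_delta(snapshot, delta):
        """delta: list of ('d', line, count) to delete `count` lines starting
        at 1-based `line`, or ('a', line, lines) to insert `lines` after
        1-based `line`. Operations apply to the evolving buffer in order."""
        out = list(snapshot)
        for op, where, arg in delta:
            if op == 'd':
                del out[where - 1:where - 1 + arg]
            elif op == 'a':
                out[where:where] = arg
        return out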

The goal of stage 1 is to digest each CVS master into an in-core tree of metadata and a sequence of whole-file snapshots, with unique IDs in the tree indexing the snapshots. The entire collection of masters is made into a linked list of these trees; this is passed to stage 2, where black magic that nobody understands happens.
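Here’s roughly the shape of that in-core result, sketched in Python with invented names – the real thing is C structs inside cvs-fast-export:

    # Sketch of the stage-1 output shape (illustrative names only).
    from dataclasses import dataclass, field
    from typing import Optional, List, Dict

    @dataclass
    class Revision:
        number: str                  # e.g. "1.3" or "1.2.1.1"
        author: str
        date: str
        snapshot_id: int             # indexes this master's snapshot sequence
        children: List["Revision"] = field(default_factory=list)

    @dataclass
    class MasterTree:
        filename: str
        tags: Dict[str, str]         # tag name -> revision number
        root: Optional[Revision] = None
        next: Optional["MasterTree"] = None   # linked list handed to stage 2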

This first stage seems like a good target for parallelization because the analysis of each master consists of lumps of I/O separated by irregular stretches of compute-intensive data-shuffling in core. In theory, if the program were properly parallelized, it would seldom actually block on an I/O operation; instead while any one thread was waiting on I/O, the data shuffling for other masters would continue. The program would get faster – possibly much faster, depending on the time distribution of I/O demand.

Well, that’s the theory, anyway. Here’s what actually happened…

Continue reading