Chipping away at CVS

I’ve just shipped a new version of cvs-fast-export, 1.26. It speeds the tool up more, more, more – cranking through 25 years and 113,300 commits of Emacs CVS history, for example, in 2:48. That’s 672 commits a second, for those of you in the cheap seats.

But the real news this time is a Python wrapper called ‘cvsconvert’ that takes a CVS repository, runs a conversion to Git using cvs-fast-export, and then – using CVS for checkouts – examines the CVS and git repositories side by side looking for translation glitches. It checks every branch tip and every tag.
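
The core of the check is simple enough to sketch. Per branch tip or tag, it boils down to something like this – illustrative Python only, not the actual cvsconvert code; the helper names and exact invocations here are my own simplification:

    # Illustrative sketch of a per-tag manifest comparison -- not the real
    # cvsconvert code; helper names and invocations here are assumptions.
    import os
    import subprocess
    import tempfile

    def manifest(root):
        """Relative paths of all files under root, ignoring VCS metadata."""
        paths = set()
        for dirpath, dirnames, filenames in os.walk(root):
            dirnames[:] = [d for d in dirnames if d not in (".git", "CVS")]
            for name in filenames:
                paths.add(os.path.relpath(os.path.join(dirpath, name), root))
        return paths

    def tag_mismatches(cvsroot, module, gitrepo, tag):
        """Check out the same tag from CVS and from the git conversion,
        then return the symmetric difference of their file manifests."""
        with tempfile.TemporaryDirectory() as scratch:
            cvs_co = os.path.join(scratch, "cvs-co")
            subprocess.run(["cvs", "-Q", "-d", cvsroot, "checkout",
                            "-r", tag, "-d", cvs_co, module], check=True)
            git_co = os.path.join(scratch, "git-co")
            subprocess.run(["git", "clone", "-q", gitrepo, git_co], check=True)
            subprocess.run(["git", "checkout", "-q", tag], cwd=git_co, check=True)
            return manifest(cvs_co) ^ manifest(git_co)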

Running cvsconvert on several of my test repos, I’ve discovered some interesting things. One such discovery is a bug in CVS. (Yeah, I know, what a shock…)

CVS uses the RCS state field value of “dead” to mark files that have been deleted. I found a case in the CVS repo of a project called “timidity” where a file had somehow ended up with state dead at rev 1.2, Exp (the default live state) at 1.3, and dead again at its final revision of 1.4. This confused CVS badly; a checkout keyed to a tag made after 1.3 but before 1.4 should have included the file but did not.
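
If you’re curious what that pathology looks like mechanically, the state sequence is easy to spot by scanning the delta headers of the ,v master. A rough sketch follows – illustrative only, and a real parser would have to be much more careful about the RCS file grammar than this is:

    # Rough sketch: flag RCS masters whose trunk states go dead -> live -> dead.
    # Illustrative only -- this leans on the usual layout of delta headers in a
    # ,v file and ignores branch revisions and pathological log texts.
    import re

    DELTA = re.compile(r"^(\d+(?:\.\d+)+)\n"
                       r"date[^;]*;\s*author[^;]*;\s*state\s+(\w+);",
                       re.MULTILINE)

    def dead_live_dead(master_text):
        trunk = [(rev, state) for rev, state in DELTA.findall(master_text)
                 if rev.count(".") == 1]                 # trunk revisions only
        trunk.sort(key=lambda rs: [int(n) for n in rs[0].split(".")])
        seen_dead = seen_resurrection = False
        for _, state in trunk:
            if state == "dead":
                if seen_resurrection:
                    return True                # e.g. 1.2 dead, 1.3 Exp, 1.4 dead
                seen_dead = True
            elif seen_dead:
                seen_resurrection = True
        return False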

The bug showed up as a defect (mismatched file manifests) in cvsconvert. I spent half a day looking for where cvs-fast-export had gone wrong before I figured out that cvs-fast-export was doing what the metadata in the master said it should – it was CVS that had screwed the pooch. Annoying, but not very surprising.

This was an example of the most common kind of defect – files that had been deleted in CVS showing up under tags in gitspace when they didn’t under the corresponding CVS tags. Maybe eventually I’ll figure out how to perfectly match CVS’s behavior here, but it’s not really a big deal – there tend to be only a few of these per CVS repository and a few minutes’ work with reposurgeon will snip them off nicely.

Reassuringly, I found no cases anywhere of manifest mismatches or file differences at master or any other branch tip. Well, other than some trivial file differences due to CVS keyword expansion, and those can be suppressed.
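
One way to suppress the keyword noise before comparing – a sketch, not necessarily what cvsconvert actually does – is to collapse expanded RCS keywords back to their bare form on both sides:

    # Collapse expanded RCS keywords like "$Id: foo.c,v 1.3 ... $" back to "$Id$"
    # before diffing, so keyword expansion can't produce spurious differences.
    # A sketch only -- not necessarily what cvsconvert actually does.
    import re

    KEYWORD = re.compile(
        r"\$(Id|Header|Revision|Date|Author|Source|Log|Name|RCSfile|State): [^$\n]*\$")

    def neutralize_keywords(text):
        return KEYWORD.sub(lambda m: "$" + m.group(1) + "$", text)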

The design approach of cvsconvert seems quite successful. I may try writing something parallel to it to sanity-check Subversion lifts.

16 comments

  1. One of the great under-appreciated virtues of git is that it’s centered around a very simple and elegant functional data structure, of the sort that you might find crawling around in a Haskell program somewhere. This makes the implementation easier to reason about — in some cases, makes it possible to reason about — and makes it harder to mess up.
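
    A toy version of that structure, just to make the point concrete (nothing like git’s real object format, but the same shape):

        # Toy content-addressed commit DAG -- the same shape as git's data model,
        # though nothing like its actual on-disk object format.
        import hashlib
        from collections import namedtuple

        Commit = namedtuple("Commit", "tree parents message")

        def store(objects, commit):
            """Objects are immutable and named by the hash of their content."""
            name = hashlib.sha1(repr(commit).encode()).hexdigest()
            objects[name] = commit
            return name

        objects = {}
        root = store(objects, Commit(tree="tree-0", parents=(), message="first"))
        tip = store(objects, Commit(tree="tree-1", parents=(root,), message="second"))
        # History is whatever is reachable from a tip through parent links; since
        # values never change once stored, the graph of commits can only grow.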

  2. Judging by your post, you don’t seem very confident that CVS bug will be fixed – or have I misread?

    How much faster is cvs-fast-export than when you started working on it back in Jan 2012 – two orders of magnitude?

    You’ve said you’re up to 670 commits/sec, or ~40K commits/minute – up from http://esr.ibiblio.org/?p=6385 where you reported approx 10k commits/minute, but on the groff repo, not emacs.

    1. >Judging by your post, you don’t seem very confident that CVS bug will be fixed – or have I misread?

      You haven’t. Releases have been infrequent the last decade, and the design was a pile of kluges to begin with.

      >How much faster is cvs-fast-export than when you started working on it back in Jan 2012 – two orders of magnitude?

      Pushing that by now, probably. It’s been getting concentrated attention from hackers who are demonstrably very good at both algorithmic speedups and microtweaking (David Leonard and Laurence Hygate have done particularly fine work). And we’re probably not done yet – Tom Enterline says he’s found a 5-10x speedup in the branch-merge code but hasn’t shipped the patch.

      Note also that comparing groff then to Emacs now understates the gains, as throughput falls with repo size and Emacs CVS is quite a bit larger.

  3. >Judging by your post, you don’t seem very confident that CVS bug will be fixed – or have I misread?

    CVS is dead.

  4. Eric, thanks for the reply, and for correcting my arithmetical balls-up.

    I grabbed the figures for groff then because they were easily to hand.

    Is it common in this sort of tweaking for the bottleneck to bounce around the system as you take breaching charges to the current one? In my own (admittedly very limited) experience, I’ve found that bottlenecks tend to cluster up – I’ll fix one, and next door will belt me across the face, not something waaay down the river.

    1. >In my own (admittedly very limited) experience, I’ve found that bottlenecks tend to cluster up – I’ll fix one, and next door will belt me across the face, not something waaay down the river.

      Didn’t work that way on this project, but I won’t try to generalize about it.

  5. Eric, could you list what has brought the most speedup? I think it could be interesting for others.

    CVS is either in maintenance mode or has stopped development entirely – the last release was in 2008, and the last checkin (commit) was also a long time ago.

    1. >Eric, could you list what has brought the most speedup? I think it could be interesting for others.

      Some things do stand out:

      * Not trying to always emit the stream ops in git canonical order was huge – eliminated half the program’s I/O time.

      * There was an O(n**2) fileop-matching operation used in the export stage that used to be the biggest non-IO hotspot in the code. David Leonard flattened it into an O(n) merge sort (there’s a rough sketch of the idea at the end of this comment).

      * Laurence Hygate eliminated a bunch of expensive sort operations at various stages by pre-sorting the order of the master files, a simple and brilliant hack.

      * Jens Bethkowsky contributed an rbtree implementation to speed up symbol lookup.

      * I added threading for the first-stage master analysis. I mistakenly believed this wasn’t a big win, but turns out that was because I was benchmarking on small repos – on really large ones it becomes significant.

      * I did a lot of work to reduce the working-set size and the number of malloc calls. While no single one of these changes made a large difference, I believe they were cumulatively important.

      Finally, I will note that one optimization we thought was a big win turned out to be nothing of the kind: Bloom filtering mistakenly got the credit for David’s merge-sort change because the two were hard to tell apart in profiling. Eventually David, who wrote it, figured out it wasn’t doing anything and we removed it.
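
      For the curious, here is the shape of that fileop-matching change in miniature – illustrative Python rather than the actual C, and assuming each side carries at most one op per path:

          # Quadratic version: for every op on one side, scan the whole other side.
          def match_ops_quadratic(old_ops, new_ops):
              return [(a, b) for b in new_ops for a in old_ops
                      if a["path"] == b["path"]]

          # Sort both sides by path once, then pair them up in a single merge pass.
          def match_ops_merged(old_ops, new_ops):
              olds = sorted(old_ops, key=lambda op: op["path"])
              news = sorted(new_ops, key=lambda op: op["path"])
              pairs, i, j = [], 0, 0
              while i < len(olds) and j < len(news):
                  a, b = olds[i], news[j]
                  if a["path"] == b["path"]:
                      pairs.append((a, b))
                      i += 1
                      j += 1
                  elif a["path"] < b["path"]:
                      i += 1
                  else:
                      j += 1
              return pairs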

  6. I haven’t checked out the recent changes/speedups, but in my view the basic issue is that the program data structures are mostly linked lists. As Eric mentioned earlier, that results in O(n**2) and O(n**3) hot spots. Some of the fixes (like my work-in-progress) are attacking those.

  7. When you reach a natural stopping point in this process, it’d be great if you could write a high-level post discussing the specific advantages of preserving years of commits. In other words, why not just throw away most of Emacs’ history and start with a fresh repo? I have some ideas about the answers, but I’d like to see your perspective, which I’m sure is much better-informed than mine.

  8. I might have hit this bug in CVS 15 or so years ago. It’s hard to tell the CVS bugs apart.

    We took a series of release tarballs and attempted to put them into a CVS repo with one tag (or branch – we tried branches too) per tarball. Unfortunately some of the tarballs were complete releases and some were not, so from CVS’s perspective a lot of files were added and deleted multiple times. Apart from the egregious disk space usage required, CVS was utterly unable to correctly guess which files belonged to which tags. The tags were useless, so we burned the entire repository and started shopping for a new VCS.

    It never occurred to me to report this as a CVS bug. This use case seemed unfixable using CVS’s on-disk data format, so we fired CVS instead of trying to fix it.

  9. The day after I read Yossi Kreinin’s article about how profilers lie (http://yosefk.com/blog/how-profilers-lie-the-cases-of-gprof-and-kcachegrind.html), I came up with two patches that sped up the fast-out phase of the code by a factor of 10 on Emacs. It’s a very trivial optimization when you see it, but it hides itself from gprof.
    (patches https://gitorious.org/cvs-fast-export/cvs-fast-export/commit/69ec44c44cc4e85a343bfe7b5567b3d74146a779
    and https://gitorious.org/cvs-fast-export/cvs-fast-export/commit/7fbc8a8b600545961f6b58d871722d2aa5c86b70)

  10. Keep up the good work – this is very similar to the process we went through when developing svn-dump-fast-export. I shipped the tedious verification with the first public release. As the speedups came, we were able to find more defects from organic data sets; pretty reliably, every order-of-magnitude increase in input size exposed another gap in our understanding of the Subversion documentation. Your numbers are looking very good. With Subversion, you’re lucky if you can get your hands on an archived repo dump – actually getting the data out of a live repository is woefully slow, and that was the biggest performance hurdle. Given a dump file to start from, we eventually got fast enough that git-fast-import’s serial hashing was the bottleneck.

  11. @Jakub Narebski I’ve only tried it with gprof and callgrind. Callgrind had far superior output. For better output with gprof I think I would need to have a C library built with -pg.
