Progress towards the extinction of CVS

The Great Beast, designed for converting large CVS repos, is now in full production. It hasn’t killed off any specimens in the wild yet (and I’ll explain why in a bit), but it’s doing spectacularly well on our test repositories.

As a representative large example, the entire Emacs CVS history, 1985-2009, 113309 CVS commits, lifts clean in 37 seconds at a sustained rate of 3K CVS commits a second. Yes, three thousand.
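
For anyone who hasn’t seen the tool in action: a lift is just a pipeline. cvs-fast-export reads a list of RCS master files on standard input and emits a git fast-import stream on standard output, so converting a local copy of a repo looks something like this (directory names are illustrative):

# Feed every ,v master in the CVS tree to the exporter...
find emacs-cvs -name '*,v' | cvs-fast-export >emacs.fi
# ...then replay the resulting stream into a fresh git repository.
mkdir emacs-git && cd emacs-git && git init
git fast-import <../emacs.fi

That’s the whole ceremony; all the interesting work happens inside the exporter.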

The biggest beast known to us, the NetBSD src repository, converts in 22 minutes. To give some idea of what a speedup this is, the first time I ran a lift on it – on one of Wendell’s Xeon machines – it took a bit under six hours. That’s about a factor of sixteen.

Judging by performance on the other project devs’ machines the Beast is good for a 2x to 3x speedup over a conventionally-balanced PC design (that is, one with worse RAM latency, narrower caches, more cores but somewhat lower single-thread speed). That’s a big enough advantage to validate the design and be practically significant on large repositories.

The rest of the speedup is software. I did a lot of work on that two or three weeks back, but more recently Laurence Hygate has gotten the bit in his teeth and delivered some truly amazing improvements. At this point I’d have to say he has probably delivered a bigger cumulative performance delta than I have.

I, meanwhile, have been trying to concentrate on correctness issues. The present code does an excellent job in most cases – and I can now prove that, having built a script wrapper that systematically compares CVS checkouts of tags and branches with their equivalents in a git conversion. But there are three remaining trouble spots.
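
The comparison logic in that wrapper is nothing exotic; stripped of option parsing and error handling, it amounts to checking out each tag both ways and diffing the trees. A sketch, with illustrative directory and module names:

# Sketch only; the real wrapper has more plumbing.
for tag in $(git -C repo-git tag)
do
    cvs -Q -d :local:$PWD/repo-cvs co -r "$tag" -d check-cvs module
    git -C repo-git checkout -q "$tag"
    diff -r -x CVS -x .git check-cvs repo-git || echo "content mismatch at $tag"
    rm -rf check-cvs
done

Branches get the same treatment, with branch names standing in for tags.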

One is CVS vendor branches. I think I know how the present code goes wrong handling these, but it’s not an easy fix. The symptom is content mismatches between tagged states in CVS and a git conversion.

Another is coping with CVS’s all-too-frequent failures near file deletions. I’m not even certain I completely understand all the problems here yet. This also manifests as content mismatches at tagged states – usually files persisting in the git conversion after the equivalent point at which they should have been deleted.

Finally, we’ve seen some repos that produce a fatal internal error complaining about a “branch cycle”. I have almost no understanding of what’s generating this.

I want to solve at least the vendor branch problem before I go hunting big game. I have some small test repositories that replicate it, so that should be doable.

The other big issue is target identification. This is where my blog followers and others who want to stamp out CVS in our lifetime can help. Find us projects still using CVS – best targets are those you’ve sounded out for interest in converting and gotten some positive response from.

21 comments

  1. Perhaps an opportunity to help on multiple sides.
    1. Make sure the video card can do dual 2560xN monitors AND 4K HDMI or find a tweak.
    2. Create a link to “You too can have your own ‘beast’”.

    Integrating a system properly is hard. It is easier to tweak incrementally once you have a set of known working hardware.

    Congratulations on the new system. Fossils are already dead. Does that mean NetBSD is OSified, or OSSified?

  2. Eric,

    > One is CVS vendor branches. I think I know how the present code goes wrong handling these, but it’s not an easy fix. The symptom is content mismatches between tagged states in CVS and a git conversion.

    Exactly how do you think the code goes wrong as it handles these? I’m more than a bit curious, and rubberducking the process might kick a neuron or two in your brain.

    OT – Thanks for your encouragement and patience as I’ve slogged along trying to put How To Learn Hacking into practice. Now I’m a lot more comfortable doing larger refactors – such as superclassing common functionality, a very crude form of dependency injection (which has dramatically enabled testability), re-arranging classes to make more sense, and confirming apparently-duplicated code actually does the same thing before deduplicating it (blindly doing the same has already bitten me hard), to name four. The increase in refactor size/scope was a continuous process as I went along, not any great ringing crash (that seems to be your trick).

    I’ve also applied what I’ve learned on a friend’s codebase – though I’m just starting on that one (the central problems, according to him, are slow performance and an infinitely-recursive data structure), I find I’m taking significantly bigger steps than I did the first time around, even though this codebase is bigger and gnarlier than the last one. Is this as normal as it feels?

    1. >Exactly how do you think the code goes wrong as it handles these?

      I’ll probably post about this.

      >I find I’m taking significantly bigger steps than I did the first time around, even though this codebase is bigger and gnarlier than the last one. Is this as normal as it feels?

      Yes. This is your learning process working as expected.

  3. Now that CVS is heading toward extinction, do you plan to revive ForgePlucker? Development on it seems… scattered.

    1. >Now that CVS is heading toward extinction, do you plan to revive ForgePlucker?

      I think that would be better targeted as part of building Federation, the next-generation forge system I sketched a design for in 2009.

    1. >But I didn’t find anything about Federation, either here or in catb.org. :’-(

      Search the post history for the word “forge”. I think I only used the name “Federation” in a comment.

  4. >Search the post history for the word “forge”. I think I only used the name “Federation” in a comment.

    Professor, is it really nece…

    (If you know your Mel Brooks, you know what you’re supposed to say now. ;) Besides, I just followed your advice and found several threads on the forge problem. Thanks.)

  5. So, I’ve never really done version control before.

    Most of my many fragmentary software projects and half-finished math libraries get organized by scp-ing them to a server of mine.

    I take it, given that you are expending a lot of effort to generate this updated system, that CVS is outdated and being replaced by other systems? (I’ve heard of SVN and git. I spent about two hours here and there trying to wrap my head around git, but couldn’t figure out how to configure the git server. I may need to spend some more time with it.)

    If I were to attempt learning a version control system to organize my stuff, which one would you recommend? Which one is most widely used in the hacker community?

    1. >I take it, given that you are expending a lot of effort to generate this updated system, that CVS is outdated and being replaced by other systems?

      Long outdated. And pretty horrible.

      >If I were to attempt learning a version control system to organize my stuff, which one would you recommend? Which one is most widely used in the hacker community?

      Git has won the mindshare war. Which is in at least one respect a shame, because its UI is generally conceded to be ugly and difficult even by its proponents. Mercurial lingers as a minority choice with a better UI, but its future prospects are dubious.

      If you’re working on single-file projects, SRC (which I wrote) works as a relatively gentle introduction. If you need to record changesets (correlated changes in multiple files) I recommend biting the bullet and learning git.

  6. With neither a centralized system (like Subversion) nor a distributed one (like Git, Mercurial, and others: the integrated Fossil and Veracity, the security-oriented Monotone, the moribund and baroquely-architectured Bazaar) do you need a server to work with a version control system locally. The distributed ones are even easier – all you need to start is “init” (or its equivalent); no administration necessary (like setting CVSROOT for CVS).

    IMVHO Git won because of the “worse is better” effect… but bottom-up development, while usually leading to better features, can lead to an ugly and inconsistent API. Also, Git exposes “the index”, something that other version control systems also have implicitly in some limited way – greater power, but one more moving part.
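
    To make that extra moving part concrete: the index is a staging area between the working tree and the commit, so what you commit need not match what is on disk:

        git add foo.c        # snapshot foo.c into the index
        echo tweak >>foo.c   # later edits stay out of that snapshot
        git diff             # working tree vs. index: shows the tweak
        git diff --staged    # index vs. HEAD: only what will be committed
        git commit -m "fix"  # records the staged version, not what is on disk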

  7. > IMVHO Git won because of the “worse is better” effect […]

    Though I wonder whether winning the mindshare war was down to the network effect, or to this…

  8. >[Git’s] UI is generally conceded to be ugly and difficult even by its proponents. Mercurial lingers as a minority choice with better UI, but its future prospects are dubious.

    This raises an interesting question: how does the Hg-Git plugin for Mercurial fit into this? One could use Mercurial as a matter of personal preference, while working on projects where everyone else uses git… right? ;P

  9. >I spent about two hours here and there trying to wrap my head around git, but couldn’t figure out how to configure the git server. I may need to spend some more time with it.

    Jakub was getting to this point, but it sounds like your approach was very wrong here. Setting up a Git server is not a typical task for most users of Git, even for many project managers who have to go seek out hosting (they’ll typically use one of the hosting sites like GitHub, Gitorious, BitBucket, and many others, that have done the hard work already). Nor is it necessary for using Git – from the command line, it’s merely a “git init” away from being ready to go (see the sketch below). This is the beauty of a distributed VCS (DVCS): every local copy is a complete repository*, fully capable of serving the entire history of the project on its own; when collaboration is needed, a server is one (but not the only) mechanism of implementing it.

    * Almost. There are exceptions to this rule with shallow and narrow clones. Both are relatively advanced topics with limited use.
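
    To make “ready to go” concrete, a complete first session – from nothing to a committed local history – is just (file names are illustrative):

        mkdir mathlib && cd mathlib
        git init                        # that is the whole "server" setup, done
        echo '# matrix ops' >matrix.py  # some file to track, for illustration
        git add matrix.py               # stage it
        git commit -m "First commit"    # record it in the local history
        git log                         # the complete history lives right here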

  10. @ams: You can find a quite good explanation of version control features in the post “The Git Parable” by Tom Preston-Werner (though admittedly it is a bit weaker at actually explaining how to use Git).

    1. >How does the cygwin[1] repository compare to NetBSD?

      Dunno yet. Do you have a recipe for fetching it by rsync? The obvious cvssync call didn’t work.

  11. Sorry, I just got curious about the repo. I don’t have any connection to the project and use cygwin only very infrequently.
    The repo web interface shows some versioning antipatterns, like a ChangeLog at revision 1.6600 alongside files like “ChangeLog-1995” with the year appended…

  12. FYI, I’m getting a build error with 1.29 on musl libc:
    cc -Wall -Wpointer-arith -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wno-unused-function -Wno-unused-label -Wno-format-zero-length -pthread -march=native -O3 -I. -I/home/idunham/cvs-fast-export-1.29/ -DVERSION=\"1.29\" -DTHREADS -DREDBLACK -DUSE_MMAP -DFILESORT -DLINESTATS -DTREEPACK -DSTREAMDIR -Drestrict=__restrict__ -c -o revcvs.o revcvs.c
    revcvs.c:80:43: error: ‘PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP’ undeclared here (not in a function)
    static pthread_mutex_t dir_bucket_mutex = PTHREAD_RECURSIVE_MUTEX_INITIALIZER_NP;
    ^
    : recipe for target ‘revcvs.o’ failed

    The *_NP macros and *_np functions are so named to indicate that they are not portable.
    I’ll try building without threads; on my laptop, it’s not useful anyway.

    1. >The *_NP macros and *_np functions are so named to indicate that they are not portable.

      I have pushed a fix to the repository, try the head version.

  13. Thanks! It works, though I can’t tell whether threading is working or not (the repo I’m processing is done in a couple of seconds); git fast-import takes longer than cvs-fast-export.
