reposurgeon 2.0 announcement – the full-orchestra version

I shipped reposurgeon 2.0 a few days ago with the Subversion support feature-complete, and a 2.1 minor bugfix release this morning. My previous release announcement was somewhat rushed, so here is a more detailed one explaining why anybody contemplating moving up from Subversion should care.

To go with this, there is a new version of my DVCS Migration HOWTO.

reposurgeon can now read and analyze Subversion stream dumps, and can translate them to git fast-import streams. This brings with it the ability to export not just to git but to any DVCS that can speak that stream format; reposurgeon currently has direct support for hg and bzr.
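
For the simplest case the whole lift can be driven from the shell. Here is a minimal sketch (the repository name and paths are hypothetical, and the exact command spellings may vary slightly between reposurgeon versions):

    $ svnadmin dump /var/svn/project >project.svn
    $ reposurgeon "read <project.svn" "prefer git" "rebuild project-git"

The first command produces a Subversion dump stream; the second reads and analyzes it, selects git as the output system, and rebuilds a live git repository in project-git.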

  • Branchy repos are automatically handled correctly, with trunk mapped to master and Subversion branches mapped to gitspace branches.
  • Subversion tags are automatically mapped to git annotated tags (or to branches with tagged ends if the tag directory was modified after the copy).
  • Multibranch commits are automatically split into annotated per-branch commits.
  • Various kinds of meaningless cruft and artifacts generated by older versions of cvs2svn are automatically cleaned up. (But no potentially interesting comments or metadata are ever thrown away.)
  • Ersatz branch copies consisting of a plain directory copy followed by multiple adds are detected and treated like intentional branch creations.
  • svn:ignore property settings and clears are automatically translated to equivalent creations and removals of .gitignore files (see the example just after this list).
  • There is semi-automated support for lifting CVS and Subversion commit references in change comments to a VCS-independent date!committer format (also illustrated after this list).
  • svn:special properties are translated to git symlink references.
  • It is never necessary to hint to reposurgeon or give it branch-rewrite rules to get a clean lift. In some very unusual theoretical cases, post-lift surgery to sort out branches might be required, but no example of this has yet been observed in the wild.
  • What reposurgeon does is carefully and exhaustively documented, even in the strange edge cases.
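
Two of those items deserve a concrete illustration. An svn:ignore property whose value is the patterns “*.o” and “*.a” on some directory becomes a .gitignore file in that directory containing exactly those two lines. And a commit-comment reference such as “r2304” is lifted to an action stamp of the form

    2012-08-14T21:19:00Z!fred@foonly.org

that is, an RFC 3339 date, a “!”, and the committer’s address. (The revision number, date, and address here are invented for illustration.)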

Most of the pre-existing conversion tools don’t do any of these things properly. reposurgeon does them all, with an extensive regression-test suite to demonstrate correctness. The code has also been field-tested on several large Subversion repositories (notably for the gpsd, Hercules, NUT, and Roundup projects) with good results.

I believe reposurgeon now does almost as good a job of lifting as is possible given the ontological differences between Subversion and git. I say “almost” only because there is still some room for improvement in recognizing Subversion branch-merges-by-copy and translating them as gitspace DAG merges.

Note one important restriction: reposurgeon can read Subversion dumps, but cannot write them – the downconversion from fast-import streams would be too lossy to be safe.

I started working on the Subversion-stream support about a year ago. What took so long was getting the multibranch support to automatically do the right thing in various semi-pathological merge cases.

Fear the reposturgeon!

34 comments

  1. This is really awesome, and I could even see us using it at work at some point. But if I could look a gift horse in the mouth for a moment, even if mostly to point out what others might see when examining the same equine — early in your DVCS migration howto you have this sentence:

    Before you can do your conversion, you’ll need to put together an authors file.

    For somebody looking for guidance on how to proceed with a conversion on a largish archive, this statement could be problematic. The reader will realize he is expected to provide his own miracle. Some readers will undoubtedly be fine with this. Others will either feel completely incompetent, or feel that you are incompetent, because you have provided them with a “guide” about how to clear a minefield, and the very first thing you tell them to do is to carefully mark on a detailed map where all the mines are, so they don’t step on one when they proceed to the clearing step.

    If I were actually inclined to convert a repository at this very moment, I would have to bookmark my place in your directions, go google ideas for how to accomplish this, and then come back to your document, because you have defined this as a prerequisite with zero guidance on how to accomplish it. Directions and/or tools for extracting all the usernames from the existing archive, so that readers can create such a file, would be really useful.

    1. >For somebody looking for guidance on how to proceed with a conversion on a largish archive, this statement could be problematic.

      I’ll add a bit of guidance.

  2. Fair point, Patrik. It’s just a text file that maps authors in the old system to authors in the new one; generally that’s just username = Full Name. A paragraph about how to grep a CVS or SVN repository for usernames wouldn’t hurt, although anything beyond that will be too unique to your situation. Personally, I suspect that I’m going to have to pull the full names out of an Active Directory server somehow. Ugh.

    Also, you can do this step at any stage of the process. In fact, just put a placeholder in your project’s lift file and Makefile for it, then fill in the authors once you’ve verified that everything else is working correctly. To be honest, no real harm will befall you if you skip it entirely, but it will mean that the early commits and the later commits from the same person end up with different strings in the author field.
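
    For instance, if I have the format right, an authors file is just lines like these (names and addresses invented):

      fred = Fred Foonly <fred@foonly.org>
      jrh = J. Random Hacker <jrh@example.com> America/Chicago

    And one quick way to harvest the usernames from a Subversion history is something along the lines of the following, run from a working copy or pointed at the repository URL:

      svn log -q | awk -F '|' '/^r[0-9]/ { sub(/^ */, "", $2); sub(/ *$/, "", $2); print $2 }' | sort -u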

  3. @db48x:

    Personally, I suspect that I’m going to have to pull the full names out of an Active Directory server somehow.

    Yeah, that’s probably a learning curve. It does sound useful, although I’m not real sure you have to do this (see below).

    Also, you can do this step at any stage of the process… To be honest, no real harm will befall you if you skip it entirely…

    I actually assumed that unmapped names would still translate in a reasonable fashion. But the howto, as written, doesn’t validate this assumption.

    if you skip it entirely … early commits and the later commits from the same person end up with different strings in the author field.

    I don’t know how long the process takes (obviously it varies by repository size), but this is the sort of task that, whenever I’ve done anything similar, I’ve almost always iterated multiple times, with new info gleaned from each conversion fed back into the process before starting again from scratch. It seems the usernames might lend themselves to this.

    However, rather than starting from scratch, IIUC reposurgeon itself is quite capable of editing the repository after any sort of conversion to fix up the names, at least in the case of git, right?

    @esr:

    > I’ll add a bit of guidance.

    Somehow, I expected that. :-)

    But now I’m wondering if the “normal” right thing to do doesn’t actually involve ignoring the author info until the conversion is finished, then dumping out all the authors from git into a placeholder file that could be edited, and running reposurgeon to fix it back up. Not that I have been paying all that much attention to the capabilities of reposurgeon, but fixing up the names last seems (he said, waving his hands) like a fairly trivial thing to do given all of reposurgeon’s other capabilities…

    1. >fixing up the names last seems (he said, waving his hands) like a fairly trivial thing to do

      Hmmm. It is, actually. There’s a command “authors read” to apply the mapping that you can run at any time.

      I even wrote an “authors write” command that would dump your placeholder file.

      I guess I should note that you don’t absolutely have to do the mapping first. But you do have to do it before the reference-lifting step.
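
      So a lift could go in two passes, roughly like this (file and repository names hypothetical, and modulo the exact option spellings):

        reposurgeon "read <project.svn" "authors write >project.map"
        (hand-edit project.map to fill in real names and addresses)
        reposurgeon "read <project.svn" "authors read <project.map" "prefer git" "rebuild project-git"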

  4. OT: Android is now shipping a supermajority of new smartphones.

    “The figures are in: In the third quarter of 2012, no less than 75% of smartphones shipped ran Google’s Android operating system. This equates to 136 million Android handsets, almost doubling the 71 million Android smartphones from the same quarter last year. If we look at the entirety of 2012 to date, 68.2% of smartphones sold are powered by Android, up from 49.2% last year.”

    http://www.extremetech.com/computing/139458-android-now-powers-75-of-all-smartphones-sold-are-we-heading-towards-a-google-monopoly

    1. >By which I mean what is described here:

      That’s an interesting pathological case you have there. Do you have a small repo that exhibits this? I’d like to add it to my regression tests.

      What I think will happen depends on whether and when changes were made to the tag directories after the tag copy. A tag copy with no following changes gets lifted to an annotated gitspace tag. A tag copy with following changes becomes a branch, with a gitspace tag carrying the copy metadata pointing at the branch root. That generated tag may be useless, but one of reposurgeon’s design rules is: never automatically delete metadata entered by a human. The human operating the tool can choose to garbage-collect those tags.

      So, if neither of your tags ever had changes after copy, the history will look like this (I think – I’ll have to test): The first tag will first turn into a mini-branch containing only one change, which is a git deleteall, and a git annotated tag pointing at the branch root commit. The second tag copy will be lifted to a gitspace annotated tag. The cleanup phase will then turn the tip deleteall on the first branch into another tag, at which point the first branch will have no ops and disappear. You’ll be left with three gitspace tags, two of which are junk which the human operator will delete.

      If both tags had changes after copy, I think the result will be what you’d call a history merge – the branch will be rooted at the first copy point and have a deleteall in the middle of it somewhere. If only one did, you’ll be left with it branched and the other branch location lifted to a tag.

  5. Can reposurgeon translate the “what is merged in” information from the svn:mergeinfo property in newer Subversion repositories (version 1.5 and up IIRC), together with svn:copyfrom, to correctly generate a DAG with merge commits (“merge tracking” information)?

    BTW, from what I have heard this is another bit of unnecessary Subversion flexibility where you can shoot yourself in the foot wrt. future conversion, like partial branch merges, or merging across branches, etc…
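
    For reference, the property reposurgeon would have to interpret is just a map from source paths to revision ranges, something like the following (the path and revision numbers here are made up):

      svn:mergeinfo = /branches/featureX:3021-3144,3160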

  6. Handling svn:mergeinfo is in progress. It already handles svn copies fairly well, when the directory layout is standard.

  7. An example of a repo that exhibits this case is the Chocolate Doom Subversion repository; you can rsync the repository off Sourceforge using the details here:

    http://sourceforge.net/apps/trac/sourceforge/wiki/Using%20rsync%20for%20backups
    (Sourceforge project name is chocolate-doom)

    I have a simple constructed repository that I put together here to demonstrate it:

    http://www.soulsphere.org/random/git-svn-broken.tar.gz

    I ended up writing my own Subversion-Git conversion tool (Agito, which I previously linked to) to handle this case.

    1. >I have a simple constructed repository that I put together here to demonstrate it:

      Thanks; reposurgeon presently fails your success test on that repo. I’ve added it to my regression-test suite and will stare at it while I’m on the road this week.

  8. By the way, Git has a mechanism to adjust (correct) the names and emails of authors as seen by the user (in porcelain like git-log, git-shortlog, or git-blame, where output is meant for human consumption) via the .mailmap file; see the MAPPING AUTHORS section, e.g. in the git-shortlog manpage. This conversion is done only for display, after the fact.
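
    A minimal .mailmap entry looks like this (name and addresses invented):

      Jane Q. Hacker <jane@newcorp.example> <jqh@oldcorp.example>

    It makes commits recorded under the old address display with the canonical name and new address in those porcelain commands.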

  9. I don’t use git, so I had to look up what you meant by “porcelain”. Why isn’t this analogy more common?

  10. I really like that analogy.
    I can see its potential beyond git or software development, e.g. in explaining to users what is and isn’t the job of the IT department, or how users have an impact on IT and vice versa.

    Things along the lines of
    “we deal with the plumbing and put in the fixtures, but it’s your responsibility to (learn how to) use them correctly”
    or
    “if you clog the pipes, it’s us who’ll have to deal with the shit”

  11. Seasons don’t fear the repo
    Nor do the wind or the sun or the rain
    We can be like they are…

    Sorry, got stuck in my head.

  12. So far as I know, I coined the usage and analogy of the term porcelain while otherwise losing a heated argument with Linus during the early stages of git development.

    http://www.gelato.unsw.edu.au/archives/git/0504/0881.html

    I was very surprised to see it become the term used to describe the human-friendly stuff written around git, later on.

    @Dave Taht what was this argument about?

    BTW, I have watched Git history unfolding from the very beginning, first via the long-defunct Kernel Traffic (and the very short-lived Git Traffic), then via KernelTrap, which unfortunately is also no longer updated, and finally on the git mailing list.

    Git was one of the (rare?) projects that was developed bottom-up, starting from scriptable plumbing. Linus originally intended Git to be the basis for an SCM, and not an SCM in itself. You can read in the gitcore-tutorial(7) manpage how one used Git in those ancient pre-porcelain, plumbing-only times.

    Initially it looked like there would be many competing user-level interfaces (porcelains); the first one was git-pasky, which soon got renamed to Cogito. But git grew a user-friendly porcelain itself, and Cogito got abandoned, with the best things incorporated into git (for example, it was Cogito that was the source of git-filter-branch). Nb. there was also another attempt at writing a Git porcelain on top of Git plumbing in the form of EasyGit (eg), but it [also] stalled…

    Instead of Git serving as versioned filesystem storage / a DVCS object-oriented database for a “true” version control system, we have a common SCM-agnostic exchange format in the form of the fast-export / fast-import stream… which is the heart of reposurgeon.

    Nb. if I remember correctly, git-fast-import was originally created to make it possible for Mozilla to switch its version control system to Git back when it was choosing one…

  13. @Jakub

    During the early, heady days of git development, when people were piling feature after feature into the single directory that git lived in, I was appalled, and felt that the plumbing should become a library that could be more easily wrapped by UI code in languages other than shell (which is what cogito was written in), and that the general git concept should perhaps be extended into a general-purpose distributed database.

    I quickly produced a prototype of “libgit” one weekend…

    http://www.gelato.unsw.edu.au/archives/git/0504/0801.html

    I haven’t been able to find Linus’s withering response, but he outlined his vision for evolving the feature set and how he envisioned git’s future before anyone settled down to try to create a C-callable API. While I agree with his choice now (mostly), I still wish that git’s core had been made into a library that was easier to wrap than it is today.

  14. @dave taht – maybe what’s really needed is an easier way to call external programs, construct pipelines, etc., from C and any other language that lacks the ability – without having to manually deal with fork/exec signal masking, or the string escaping you end up having to do when you use the shell.

  15. @Dave Taht: Well, after a long time your idea is slowly becoming reality in the form of libgit2.

    BTW, from what I know Bazaar went for APIs over repository data structures…

  16. CWood,

    When I first heard about GNOME way back in the late nineties, my first sense was one of dread. “It begins… the Windowsization of Linux.”

    I should have paid closer attention to my instincts back then. For the most part I avoided GNOME and GNOME-based distros like Ubuntu, but the entire software stack is in danger of becoming more tightly coupled and un-Unix-like. An increasing number of distros are standardizing on systemd, a non-modular, complicated, untested, CADT-compliant replacement for init(8), for the love of FSM! And GNOME (and KDE?) will be built on top of that, and fuck you if you’re not using Linux with the userland that we all use now. And then people wonder why the Linux desktop still sucks.

    I came to Linux for the great software and the community. These days I see a lot of douchey neighbors moving in, knocking down perfectly good Victorian homes and erecting McMansions in their place.

    But maybe it’s just me.

  17. Jeff,

    Agreed on the Windowsization point. From what I can tell, the FLOSS world is slowly starting to become a place wherein developers are pretty much out to step on each other’s toes. FWIW, to me this seems not only counterproductive but also, forgive me for the pessimistic, doomsday nature of this, a threat to the community as a whole that could potentially bring FLOSS to its knees.

    Personally, GNOME being owned by GNU and the FSF, I would have thought this could have been avoided. Granted, Stallman and his followers come out with quite a lot of bollocks, but I know for a fact that this is contrary to what they believe, and am surprised that it has gotten as bad as it has.

    That said, of course, a lot of what has been happening can, I believe, be traced back to a flock of Windows developers who joined Linux over the years and, much as happened in the Eternal September, neglected to learn the correct manners and etiquette of the culture. And, similar to the Eternal September, unless something is done this could potentially be quite damaging, given how widespread GNOME and KDE are, and the fact that Qt is now on its way out.

    This has, of course, happened before, in the AT&T vs BSD vs Roll-your-own vs everything-else war of the 19(60s? 70s? Before I was born). In that particular instance it was dangerous, but fortunately it subsided with nothing worse than slower growth than there could have been. I pray that this is what will happen this time.

  18. Jakub,

    Unless I’ve missed something important, is Qt not owned by Nokia? Or at least receiving significant funding from it? I’m fairly sure Nokia is just about dead in the water, so unless there’s been a change of hands that I’m not aware of, development may slow significantly in the near future.

  19. @CWood, and that makes Qt’s situation different from GNOME’s how?

    Besides, there is Digia, which now funds and does Qt development. It is also important for embedded and IVI: Maemo, MeeGo, Tizen, Mer, etc. all use Qt IIRC.

  20. GNOME just keeps recapitulating its founding vision. There were several ways to go in August 1997, after all:

    “We could modernize Athena.”

    “The current standard for the Unix desktop is already Motif/CDE, so let’s build on that. Massive project on improving and extending LessTif, and we’ll build GNOME on that foundation.”

    “KDE is already up and running, but Qt isn’t free. Let’s solve that with Harmony and with putting our effort into moving KDE components to run on Harmony. GNOME isn’t necessary.”

    “NextStep as an API has already been ported to a previous version of AIX, Sun released OPENSTEP on Solaris last year, Apple recently bought NeXT to use OpenStep as the base for the future Mac OS, and the GNU project’s official toolkit is GNUStep. The object model is quite powerful, and has been embarrassing CORBA in a bunch of journals. So, let’s put our efforts into completing GNUStep and build GNOME on GNUStep.”

    Instead, GNOME, on day one, started over from near-scratch with the skeleton toolkit used for the GIMP and no real concerns for compatibility with existing software, in order to build the Perfect Desktop on the Perfect Toolkit.

  21. Instead, GNOME, on day one, started over from near-scratch with the skeleton toolkit used for the GIMP and no real concerns for compatibility with existing software, in order to build the Perfect Desktop on the Perfect Toolkit.

    There’s a reason for that. The design of GNOME is deliberately and explicitly modeled on that of Microsoft’s COM and OLE. While I’m not sure whether to call Miguel de Icaza a fan(boy) of Microsoft, his appreciation for Microsoft’s “vision thing” — and contempt for the Unix Way — is well-known. His apology for the design of GNOME is called “Let’s Make Unix Not Suck”. Compatibility with existing software and respect for the old ways were not only not considered, they were thought to be part of what was holding Unix — and Linux — back; in order to effectively compete with Windows, Linux had to become Windows.

    Hence my comment about GNOME being the beginning of the Windowsization of Linux.

    Personally, I like the old ways. Microsoft’s architecture was designed around use cases involving the Big Three — word processor, database, spreadsheet. In other words Windows is, first and foremost, a substrate for Office. The Unix Way — the old Unix Way — on the other hand, is an architecture for building innovative applications out of smaller, complete programs.

  22. Personally, GNOME being owned by GNU and the FSF, I would have thought this could have been avoided. Granted, Stallman and his followers come out with quite a lot of bollocks, but I know for a fact that this is contrary to what they believe, and am surprised that it has gotten as bad as it has.

    I don’t think Stallman really gives a shit, as long as it gets people using free software over proprietary software. Somewhere along the way he’s gone from technical aesthete to political polemicist; anything in service to The Cause is acceptable to him.

    Besides which, it’s not like he has to put up with this. He doesn’t even fire up X, doing all his work from inside Emacs on a text terminal. I’d hazard a guess he uses gNewSense which, being based on an old version of Ubuntu, might not have even switched to Upstart yet, let alone systemd.

  23. Jeff Read said/asked: And then people wonder why the Linux desktop still sucks.

    Because it’s still using X.

    All the layers of crap on top of that, no matter whose and which, can’t fix X.

    And of course, since the alternative to X is making an entirely new graphical userland, it’ll never change. (Still sucks? Will always suck.)

    (This is why I only use Linux for a server solution, and my daily-use Unix is OSX.)

  24. Jeff Read said/asked: And then people wonder why the Linux desktop still sucks.

    Because it’s still using X.

    All the layers of crap on top of that, no matter whose and which, can’t fix X.

    @Sigivald: If by X you mean X11 aka X Window System, then I have one word answer for you: Wayland.

    This is getting further and further off-topic

  25. Wayland doesn’t actually solve the major problem with X. The major problem with X is in its stated philosophy: “mechanism, not policy”. Wayland is more of the same: it just hands out framebuffers and then pastes them together with OpenGL.

    The way to “fix” desktop Linux is to go soup-to-nuts on the graphics stack: provide a single low-level graphics layer, a single widget set, a single object model that encompasses GUI widgets but also things like users and processes and files, and a single set of UI guidelines. Get all the developers and major distros on board with your vision, and enforce it — either passively (via special tooling that makes it very easy to color within the lines and very difficult to do anything else), or actively (approval process, “app store”). Do not allow any alternatives to gain traction.

    And then watch me run like hell in the other direction.

    That’s what Microsoft and Apple did, and it’s worked out pretty well for them so far. It probably won’t work on desktop Linux, and even if Shuttleworth managed to pull this rabbit out of a hat, it won’t work for free Unix in general, because the ecosystem is too damn big and there are too many self-interested hackers looking to build the system they want. (We’ve gone from two competing mainstream DEs to six: KDE, GNOME, Mate, Cinnamon, LXDE, XFCE. I don’t think that number’s going to shrink as the GNOME devs keep pissing everyone off.)
