Your money or your spec

reposurgeon has been stable for several months now, since the Subversion dump analyzer got to the point where people stopped appearing in my mailbox with the Pathological Subversion Repository Fuckup Of The Week.

Still, every once in a longer while somebody will materialize telling me they have some situation in a repo conversion that they want me to help them fix. The general form of these requests runs like this: “I have {detailed description of a branch/merge topology nightmare that makes Eric’s brain hurt to contemplate}. What do I do to fix it?”

I am now going to announce a policy about this. There are exactly two ways you can get me to solve your repository problem.

1. Pay me money to soothe away the pain. It will not be a small amount of money; my hours don’t come cheap and these jobs tend to eat a lot of them – not on the surgery itself but on the analysis leading up to it.

2. Specify a new surgical primitive that will fix your problem. To go this route, you need to (a) clearly describe the primitive, (b) send me a small test repository exhibiting its preconditions, and (c) explain what the postcondition is – that is, what you want the repository DAG to look like after the operation.

If your primitive is well-specified, and you’re willing to wait until I get it done at my own pace, I’ll write it for free. If you want a deadline date, you have to buy my time to guarantee that.

If your primitive is not well-specified, and/or you can’t produce a test case, I’ll probably tell you to come back when you can fulfill both conditions. You can buy partial exemption from these conditions by paying me lots of money.

That is all.

43 comments

    1. >Now I am curious. What is the particular straw that broke this camel’s back?

      Rather not say. The querent was clueless, but he meant well.

  1. One has to say ‘stop’ at some point … does there have to be something in particular that triggers that?
    Now if I can just explain that to my customers :)

  2. Welcome to the Enterprise Software experience. There’s a reason why “enterprise” stuff costs so much – it’s because we have to charge upfront for all the back-end pain from our customers.

    We have well-defined interfaces designed to be used by automated tools via XML-RPC (hey – it may not be the in-vogue thing, but they’ve been around for 20 years with backwards and forwards compatibility, they are well-documented and they *work*), so instead customers will decide to script our text-mode UI. Fair enough – we keep a good deal of backwards compatibility. However, our customers will then complain if we make any change to the text-mode UI, including fixing typos, because “now their scripts don’t work”.

    You have to deal with customers who complain that your product doesn’t work with SunOS 6.5.1, despite that not being standards-compliant. Or backup software which can’t process TCP packets correctly (but it worked fine with the previous version of your software so clearly it must be your fault).

    We get customers who will wake us up in the middle of the night because they can’t use their system … which has been throwing bad DIMM errors for 3 months that went unfixed because they couldn’t afford the downtime. And now we have to smile and Jump To It.

    1. >Do you take gold and/or boxes of ammo as payment?

      Of course! .45ACP, please. A light seasoning of .40 for my wife’s Glock will be acceptable.

  3. I have a Subversion topological nightmare, which I’ve tried to import with reposurgeon. I haven’t bothered Eric about this because feeding this SVN repo to reposurgeon is like trying to fly over Mount Everest with a helicopter – not impossible, maybe, but unreasonably hard.

    And by “unreasonably hard” I mean I brought 240GB of RAM, 1TB storage, six weeks of time, and reposurgeon to a fight with a 90GB, 700 kilocommit, 95 kilobranch, 144 megafile SVN repo. The SVN repo won.

    reposurgeon didn’t even finish the import step. It got all the way through the svn dump in the first 13 days, reported that it had read the last commit…and then swapped continuously for a month before I turned it off.

    To be fair, I had to patch a few bugs in svnsync to get it to stop segfaulting while merely copying the SVN repo. I also tried many of the other svn-to-git tools out there, and built several attempts of my own.

    I feel like Ahab, trying to import this whale of a Subversion repo at any cost.

    For now I’ve given up. Moby Dick is still out there.

    1. >I feel like Ahab, trying to import this whale of a Subversion repo at any cost.

      Holy shit. I thought I’d heard bad before, but that takes the cake. And the bakery, and the entire continent on which it stood.

      You are *screwed*, my friend. There is *nothing* out there that will handle a repo that large.

      I may have an idea that might work, though. Well…a research path, anyway. A variation on the strategy I used in a previous tool I wrote called svncutter.

      Is there an organization behind this monster that would pay me to work on the problem? I can solve it if anyone on the planet can, but the attempt would eat my life for a while.

    2. >For now I’ve given up. Moby Dick is still out there

      Zygo, find me on the #reposurgeon channel at freenode. We should discuss this.

  4. @Zygo –

    Not to be impertinent or anything, but what in the name of Cthulhu’s bastard spawn is this a repo of? I don’t want proprietary details, just some idea of what kind of fscking nightmare of a software (or hardware?) development project grows a repo that large and ill-formed.

    EMWTK (and shudder to speculate)….

  5. Who would have guessed the code for Windows 8.1 was kept in an svn repo? :-P

    Anyways, the only sensible thing to do is grab the head of that repo, run git init on it, and move forward … clean break from the embarrassing past. Archive the old repo somewhere safe so that in the very unlikely event someone ever needs an old version they can go get it.

    Sometimes “tough love” really is the best answer. :-)

  6. … or if you want to have fun and make money at the same time, you could port Wesnoth to the Android-based Ouya gaming console. Seriously, it is a good idea, I think. There are few truly mature open-source games out there with this level of complexity, so I guess quite a few Ouya owners would pay money to play it on the big screen.

  7. >You are *screwed*, my friend. There is *nothing* out there that will handle a repo that large.

    >I may have an idea that might work, though.

    … and thus the ball of string has been rolled.

  8. Eric, could you share some more cringeworthy (but spec-ed) Pathological Subversion Repository Fuckups?

    1. >Eric, could you share some more cringeworthy (but spec-ed) Pathological Subversion Repository Fuckups?

      I thought about this hard, but out of context they would be nigh-incomprehensible. Not that they were very comprehensible in context.

      I can tell you one thing that might be interesting. The largest single class of fuckups I’ve run into is due to pathological directory copies of various kinds. Like when someone uses cp rather than svn cp to create a branch, then uses svn cp to move individual files there. If they do mixed copies from different revisions (which is often the case), deducing what you ought to consider the branch’s immediate ancestor can get nasty.

      reposurgeon does a better job of recovering some kind of sane DAG from this kind of mess than anything else. Which isn’t necessarily saying much, as most conversion tools just punt this problem and give you a bad answer quickly. But I sweated this case hard and I think reposurgeon does as good a job as is possible in principle – which is still not all that good. There are cases that are just intractable.

  9. @everybody: do you too have this weird feeling of not being sure whether to envy Zygo or pity him?

    You hackers are good at creating new terminology; can you come up with a verb that conveys “I don’t know whether to envy you because this task is so interesting and challenging, or to pity you for the pressure on you in case your career, business interests, reputation, or whatever else is important to you depends on getting this difficult task done soon”?

  10. See, Zygo seems to have found a way to almost get an exception to the rule: have a really interestingly insane repository.

    >Not to be impertinent or anything, but what in the name of Cthulhu’s bastard spawn is this a repo of? I don’t want proprietary details, just some idea of what kind of fscking nightmare of a software (or hardware?) development project grows a repo that large and ill-formed.

    Start with two CVS repos. Create a bunch of tools and processes based on CVS modules (where you can feed a few hundred paths into a build tool, and have it check out exactly the revisions of the directories you specify, renamed arbitrarily just because we can). Tag all files in all builds that are ever shipped. Encourage developers to make private branches in the same repo and work on them, but don’t define or enforce any namespace rules. Import both CVS repos into the same Subversion repo, and create scripts to adapt the CVS module definitions to SVN. Grow until there are hundreds of active products (each product being an arbitrary selection of several hundred directory paths from the main source tree) and thousands of builds (each one generating half a dozen tag commits). Watch as the Subversion maintainers learn the hard way about how to make one repo work with multiple different file name character encodings over a period of several years, during which tens of thousands of commits with such file names are accumulating. Every now and then, have a new developer do ‘svn cp’ from ‘/’ or ‘/branches’ into some random path, like ‘trunk’, do some work, then delete their erroneous “branch.” Do all that for multiple decades.

  12. @Shenpen: I believe that, rather than a single word, “That project sounds really cool…” is the correct reply. Perhaps followed by a lament about having too many cool projects of one’s own to allow one to lend a hand.

  13. “Start with two CVS repos. … Do all that for multiple decades.”

    Someone alert the Suicide Hotline folks that they’ll be working overtime if the wrong people read that.

  14. @Zygo –

    “Start with two CVS repos. … Do all that for multiple decades.”

    Good sir, you do not have a Repo Problem – you have a Process Problem. (As I’m sure you are aware.)

    +1 on Michael Hipp’s solution – start afresh with a sane repo layout corresponding to the current state (and maybe immediate past, like the previous major version). Then take a well-seasoned clue-by-four and beat all of your so-called “developers” briskly about the head and shoulders to Do The Right Thing.

    If management won’t let you do that, because Reasons…. – run, do not walk, out of that hellhole and find a saner gig. Life is way too short to deliberately stare into the Tentacled Face of Madness.

  15. Decades? Looks up… wow, CVS was first released in 1990…. still I would swear I never heard anyone talk about concurrent version control before 2000 or so. Is hackerdom a decade ahead of the mainstream commercial software world?

    1. >Is hackerdom a decade ahead of the mainstream commercial software world?

      A decade at minimum, yes. Why does this surprise you?

  16. >Good sir, you do not have a Repo Problem – you have a Process Problem. (As I’m sure you are aware.)

    The processes are fine for now, they’re just heavily dependent on Subversion’s implementation details (e.g. auditing requirements to have URL/Revision keyword strings built into every binary). There are noises about switching to Git, but all the credible rumors take the form of “build a new process in parallel based on Git, and cut each product over to it at the top of the next release cycle.”

    I don’t need the whole SVN repo in Git, and neither does anyone else. It’s just there, and there’s no reason why it can’t be imported. ;)

    1. >It’s just there, and there’s no reason why it can’t be imported. ;)

      I repeat: I might be able to do this. There’s nobody better qualified to try than the author of reposurgeon. But it would be a huge job with a significant chance of failure. And it would eat enough of my life to require that I be paid fairly handsomely.

      You know where to find me.

  17. @J.D. Bell said:
    >Life is way too short to deliberately stare into the Tentacled Face of Madness.

    Wimp.

  18. > There are noises about switching to Git, but all the credible rumors take the form of “build a new process in parallel based on Git, and cut each product over to it at the top of the next release cycle.”

    If the historical Subversion repository is ported to Git (perhaps incrementally? AFAIK reposurgeon builds the whole layout of the history in memory, rather than operating on the fly on a fast-import stream), it would be possible to temporarily connect the current repo with the historical repo for version-control archeology. That can be done either with the “grafts” mechanism (used by the Linux kernel git tree after the switch from BitKeeper, the “BitKeeper fiasco”), or with the more modern “git replace” mechanism.
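
    For illustration, assuming the converted history has already been fetched into the current repository, joining the two for archeology is only a couple of commands. This is a sketch, not a recipe; the ref names are hypothetical:

        import subprocess

        def rev(ref):
            # Resolve a ref name to a full commit hash.
            return subprocess.check_output(["git", "rev-parse", ref]).decode().strip()

        current_root = rev("current-root")         # hypothetical: root commit of the new repo
        historical_tip = rev("historical/master")  # hypothetical: tip of the converted history

        # Modern mechanism (Git 2.0+): give the root commit a parent
        # without rewriting any history.
        subprocess.check_call(["git", "replace", "--graft", current_root, historical_tip])

        # Older grafts mechanism (what the Linux kernel tree used after BitKeeper):
        # one "<commit> <parent>..." line per graft. Either mechanism alone suffices.
        with open(".git/info/grafts", "a") as grafts:
            grafts.write("%s %s\n" % (current_root, historical_tip))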

    BTW. fast-import was originally created for the task of importing Mozilla version control history from Subversion to Git. Mozilla ultimately decided to use Mercurial, but I do wonder how well reposurgeon would work with a repository of that size…

  19. >“That project sounds really cool…”

    I was thinking of something to do with “interesting times”.

  20. >If they do mixed copies from different revisions (which is often the case), deducing what you ought to consider the branch’s immediate ancestor can get nasty.

    That particular question had a trivial answer for me: all of them. Such commits are effectively git ‘ours’ or ‘octopus’ merges with many ancestors. The previous commit on the target branch, if any, is the first parent; the rest can go in arbitrary order (maybe sort by revision number, but in practice nobody ever expects “HEAD^9” to do anything particularly useful).

    You can figure out which file came from which revision by matching up SHA1 hashes with the parent trees. There’s no tool in stock Git that will give you that information easily, but that’s Git’s problem. ;)

    Do you have a case in mind where that is the wrong answer?

    1. >You can figure out which file came from which revision by matching up SHA1 hashes with the parent trees.

      Yeah, reposurgeon uses that technique too. It’s about the only option there is, really.

      >Do you have a case in mind where that is the wrong answer?

      A large class of these for which resolving to a single ancestor commit is the right thing is Subversion artifacts resulting from branches originally created in CVS and then lifted into Subversion via cvs2svn. There is a lot of this sort of scar tissue in older Subversion repos.

  21. >The largest single class of fuckups I’ve run into is due to pathological directory copies of various kinds

    That is a double-digit percentage of the commits in Moby Dick.

    My own attempt to solve this problem was based on two guiding principles: build rules to discover all the branches automatically, and make sure every Subversion repository operation resulted in a comparably expensive Git operation. I also have no particular need to reversibly preserve insanity–just a snapshot of the file data corresponding to each SVN commit is sufficient, and many SVN details are just irrelevant in a Git world.

    The expense-parity problem is easy to solve with a suitably patched git fast-import. Stock git fast-import goes out of its way to reject the empty path as a source for filecopy, but this is required for 12% of the commits in Moby Dick (these copy one complete tree from an existing branch to a newly created branch and then edit it). We filter data from svnadmin dump on an input pipe to git fast-import on an output pipe, and use a small amount of extra storage and RAM (less than 1GB of each) to store the branch model for the SVN repo. There are no temporary files, only inline fast-import blobs and references to existing data already copied into Git from the SVN repo.
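
    The plumbing for that is just two pipes with a filter in the middle; a minimal Python sketch, with the dump parsing and branch mapping reduced to hypothetical functions and the repository paths invented:

        import subprocess

        # Hypothetical stand-ins for the interesting parts of the filter.
        from mobydick_filter import parse_svn_dump, translate_to_fast_import

        dump = subprocess.Popen(["svnadmin", "dump", "/srv/svn/mobydick"],
                                stdout=subprocess.PIPE)
        importer = subprocess.Popen(["git", "fast-import"],
                                    cwd="/srv/git/mobydick",
                                    stdin=subprocess.PIPE)

        for record in parse_svn_dump(dump.stdout):          # one SVN revision at a time
            for line in translate_to_fast_import(record):   # bytes of fast-import commands
                importer.stdin.write(line)

        importer.stdin.close()
        dump.wait()
        importer.wait()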

    All you need is reliable information about branches. That is…harder.

    To discover branches, we maintain a shadow of the SVN filesystem. The shadow filesystem consists of any known branch root and any directory in SVN that contains no non-directory objects and is not a child of a known branch root. In practice it contains a few hundred thousand in-memory objects toward the end of the SVN history.

    The shadow filesystem tracks changes to the SVN filesystem, so if someone renames the parent directory of a bunch of branches in SVN, the branches all move. Deleting a branch removes it from the shadow filesystem but not from Git–if the branch is recreated later, we just keep adding commits to the existing Git branch. This produces a better result than a literal translation of what happened in SVN, since it means ‘git log branches/daily’ will search all of the history of ‘branches/daily’, and not stop at the last deletion (i.e. yesterday) as ‘svn log’ does; however, it means we have to mangle names for colliding Git branches because branches named “svn/foo” and “svn/foo/bar” cannot coexist in Git. Each branch node in the shadow filesystem knows the mangled name of its Git branch.

    When a file is created in a directory in the shadow filesystem, that directory becomes a branch root, either resurrecting a previously deleted branch, or creating a new one.

    If a SVN commit creates empty directories outside of any known branch and nothing else then the empty directories go into the shadow filesystem as potential future branch roots. If a commit creates directories and files then only the directories that were previously present in the shadow filesystem are considered as possible branch roots. This allows one SVN commit to create “branches/foo/bar/123” and later commit to create “branches/foo/bar/123/src/foo.c”, and we end up with a Git branch named “svn/branches/foo/bar/123” with a file in it named “src/foo.c”. Ironically in Moby Dick this almost always arises from using a non-Subversion tool (like CVS or git-svn) instead of SVN to create a branch.
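
    Restated as toy Python (nothing like the real tool – renames, deletes, overlapping branches, and name mangling are all omitted), the branch-discovery rule looks roughly like this:

        branch_roots = set()     # SVN directories known to be branch roots
        candidates = set()       # empty directories that may become branch roots later

        def branch_for(path):
            # Longest known branch root that is a prefix of the path, if any.
            roots = [r for r in branch_roots if path == r or path.startswith(r + "/")]
            return max(roots, key=len) if roots else None

        def apply_commit(new_dirs, new_files):
            if new_dirs and not new_files:
                # A commit that only creates directories seeds future branch roots.
                candidates.update(d for d in new_dirs if branch_for(d) is None)
                return
            for f in new_files:
                if branch_for(f) is not None:
                    continue                      # already inside a known branch
                # A file landing under a previously seen candidate directory
                # promotes the deepest such directory to a branch root.
                hits = [d for d in candidates if f.startswith(d + "/")]
                if hits:
                    root = max(hits, key=len)
                    candidates.discard(root)
                    branch_roots.add(root)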

    Moby Dick never creates one branch and updates another in a single commit, so it’s not allowed in my tool. Many such ambiguities used to be forbidden, but Moby Dick is a counterexample-rich environment.

    If a known branch root is copied, the destination becomes a new branch root with the existing branch’s head commit as parent. It is permitted to copy a branch into a subdirectory of another branch, which creates overlapping branches (the longer branch path gets a mangled name in Git). Sometimes people do this by mistake in Moby Dick, so we need a rule to handle it as well as the inevitable correction later on. Any activity on the longer branch path after it is deleted is considered part of the shorter branch path, i.e. if a shorter non-deleted branch path still exists, it is used instead of resurrecting the longer deleted path.

    There are two or three levels of nested loop and two or three passes for data collection and sanity checking for each commit. Every path has to be touched exactly once…unless someone recursive-copies ‘a’ from some other branch and replaces ‘a/b/c’ with something else in the same commit. Right. Uhhh…every explicitly mentioned path has to be mentioned exactly once…counterexample-rich environment.

    Git commits are sent to branches corresponding to paths that are modified. The Git parents of these commits correspond to SVN paths that are copied into the SVN branch (including the implicit copy of the previous revision of the branch tree). There is no representation of exactly which file:revision pair is included in the final tree–that can be determined after the fact by matching SHA1 of files in the committed tree against each of the parent trees. This turns build snapshot tag SVN commits with their thousands of directory@revision copies into simple Git octopus merges (or ‘ours’ merges with many parents).
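
    In git-fast-import stream terms, such a build-tag commit comes out roughly as follows; the mark numbers, paths, and message are invented for illustration:

        commit refs/heads/svn/tags/build-4711
        mark :700123
        committer builder <builder@example.com> 1382400000 +0000
        data 19
        build snapshot 4711
        from :700122
        merge :512340
        merge :529010
        M 100644 :48211 product-a/src/main.c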

    Every file touched in a commit gets mapped onto known branch root paths at that point in the SVN history. If this mapping determines there are multiple branches in a single SVN commit then we create a separate git tree for changes to each branch and commit them separately. If a file appears on two overlapping branches then we commit the files to both branches with different paths. This is not considered to be a merge.

    There are checkpoints every 10 kilocommits, recording the complete state of all Git branches and the shadow filesystem. I also built a simple tool to restore all the Git refs and shadow filesystem to a checkpoint state so I could quickly re-run imports over ranges of SVN commits with code that had been extended to deal with some newly discovered pathology.
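
    The Git half of a checkpoint can be done with stock plumbing; a sketch only (the shadow filesystem needs its own serialization, refs created after the checkpoint are not deleted here, and objects written since the checkpoint are left for a later gc to prune):

        import subprocess

        def save_refs(path):
            # Record every ref and the commit it currently points at.
            refs = subprocess.check_output(
                ["git", "for-each-ref", "--format=%(objectname) %(refname)"])
            with open(path, "wb") as f:
                f.write(refs)

        def restore_refs(path):
            # Point every recorded ref back at its checkpointed commit in one batch.
            with open(path) as f:
                batch = "".join("update %s %s\n" % (ref, sha)
                                for sha, ref in (line.split() for line in f))
            proc = subprocess.Popen(["git", "update-ref", "--stdin"],
                                    stdin=subprocess.PIPE)
            proc.communicate(batch.encode())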

    I got about 250 kilocommits into the whale last time I tried, then I ran out of time to work on it.

    1. >To discover branches, we maintain a shadow of the SVN filesystem.

      That sounds familiar. Sounds like you implemented an equivalent of about 60% of my dump analyzer in a somewhat more ad-hoc and specialized way. Which is reasonable; you could make historically specific assumptions that I couldn’t.

      My approach, by contrast, involved thinking hard about the semantics of Subversion op sequences and trying to get the mapping to a git-like DAG provably correct given those semantics. One side effect was that I rewrote and substantially improved the dumpfile documentation that the Subversion project itself uses. Their senior devs actually thanked me for making them think through the edge cases.

      I bet you ran into the same implementation problem I did, which is that the size of the shadow filesystem blows up ridiculously. The project stalled on this for a while, then some hacker from MIT wandered in, rewrote the filesystem store to have a sort of generational copy-on-write representation, and wandered out. After that it was tractable.

  22. >A large class of these for which resolving to a single ancestor commit is the right thing is Subversion artifacts resulting from branches originally created in CVS and then lifted into Subversion via cvs2svn. There is a lot of this sort of scar tissue in older Subversion repos.

    There is evidence of cvs2svn in Moby Dick, but it didn’t create any special problems that I noticed. I didn’t consider trying to recover the original CVS commit intents, though. It seemed like the wrong direction, as I wanted to move from something that’s not-like-CVS to something that’s even-more-not-like-CVS.

    Autogenerated build tag commits look like this too, and those were bigger and there were a lot more of them.

    >One side effect was that I rewrote and substantially improved the dumpfile documentation that the Subversion project itself uses.

    I think I read that document…after deriving most of it the hard way. :-P

    One thing that surprised me is how awful the SVN interface libraries and tools are. I had all kinds of problems (missing data, needing two or three separate APIs to find all the data, and crashes) until one day I just piped svnadmin dump output through ‘less’ and realized that rolling my own SVN dump parser would be so much easier than anything else.

    >I bet you ran into the same implementation problem I did, which is that the size of the shadow filesystem blows up ridiculously.

    Nope, it stays roughly proportional to the number of branches plus revisions (granted, I have 100 kilobranches and 700 kilocommits, and was willing to throw a GB or two at storing them, so maybe that counts as ‘ridiculous’). I was able to keep the total number of revisions of the shadow filesystem to less than a dozen, and most of them happen at the beginning when the shadow filesystem is still small.

    It is only necessary to create a new shadow filesystem revision in rare circumstances. The shadow filesystem doesn’t store anything under a branch–anything that happens under a branch is translated to fast-import and then forgotten. One case is when branches are moved, which happens only a half dozen times in Moby Dick. We try to handle branch deletes without modifying the shadow filesystem, and fail in only one case that never occurs in Moby Dick. When branches are merely created there is no need to remember a version of the shadow filesystem that doesn’t contain them (unless we wanted to check the consistency of SVN data).

    For each branch in the shadow filesystem there is a table of SVN revisions on the branch and their corresponding fast-import marks or Git hashes. SVN copies are looked up against this table using some appropriate shadow filesystem revision then we send the hashes to fast-import.

    We keep deleted branches in the shadow filesystem, and if they become aliased later we merge the old branch with the new one. A fairly common pattern in Moby Dick is to copy branches/20131021 to branches/daily, then 24 hours later delete branches/daily, then copy branches/20131022 to branches/daily. In Git, the copy from branches/20131022 to (deleted) branches/daily becomes an ‘ours’ merge between branches/20131022 and the revision of branches/daily when it was deleted, keeping branches/20131022’s tree. There is a corresponding merge of the SVN revision/Git commit hash tables for the two branches in the shadow filesystem.

    One exception for deleted branches occurs when there is a SVN commit that modifies files on two or more branches. In such cases we create one Git commit per branch. If these branches are later merged by branch aliasing, there will be duplicate entries in the SVN revision -> Git commit hash table. Rather than try to solve that, I just fork a new shadow filesystem if there are duplicates. This happens zero times in Moby Dick.

    All this does rely heavily on SVN not having junk data (e.g. copy commands for a file that doesn’t exist in the revision it was copied from, which might succeed during branch-path mapping but fail later because the paths don’t exist in fast-import), but Moby Dick seems to be clean. I’ve separately analyzed the repo for that kind of anomaly while trying to figure out why git-svn was having so much trouble with it (it turns out that git-svn corrupts data on the way into Git under a surprising variety of circumstances).

    >reposurgeon [matches SHA1 hashes to figure out which copy belongs to which ancestor] too. It’s about the only option there is, really.

    On import? That sounds like more of a constraint than a feature. If something like that was used to infer merges it would generate a crapton of noise in Moby Dick, as there are a lot of literal file copies without merge intent.

    I didn’t bother looking at file data at all. I translated SVN dump into fast-import as literally as possible, so if SVN copies a file, fast-import gets a filecopy command, and if SVN provides a blob, fast-import gets a blob. If files created by those different methods happen to be identical, only fast-import knows.

    The SHA1 hash matching I was talking about is done after the fact, when you’re looking at the converted Git repo, and have to figure out which revision of each file went into a build tag commit. In the original SVN you look at the log entry for the commit, and in Git you run a shell script that looks at ls-tree output of the commit and all of its parents and matches the files up.
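
    A rough Python equivalent of that after-the-fact matching script, using git ls-tree to map blob SHA1s in the merge commit back to the parents that already contained them (not the script itself, just the idea):

        import subprocess

        def tree_blobs(commit):
            # Map blob SHA1 -> paths carrying that blob in the commit's tree.
            out = subprocess.check_output(["git", "ls-tree", "-r", commit]).decode()
            blobs = {}
            for line in out.splitlines():
                meta, path = line.split("\t", 1)
                mode, otype, sha = meta.split()
                if otype == "blob":
                    blobs.setdefault(sha, []).append(path)
            return blobs

        def match_parents(commit):
            # List, for each path in the commit, which parents held an identical blob.
            parents = subprocess.check_output(
                ["git", "rev-list", "--parents", "-n", "1", commit]).decode().split()[1:]
            parent_blobs = [(p, tree_blobs(p)) for p in parents]
            for sha, paths in tree_blobs(commit).items():
                sources = [p for p, blobs in parent_blobs if sha in blobs]
                for path in paths:
                    print(path, "<-", ", ".join(sources) or "(new content)")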

    1. >I had all kinds of problems (missing data, needing two or three separate APIs to find all the data, and crashes) until one day I just piped svnadmin dump output through ‘less’ and realized that rolling my own SVN dump parser would be so much easier than anything else.

      Tell it, brother, tell it. The design of the dump format is much cleaner than most of the rest of what they layered over it.

      >I was able to keep the total number of revisions of the shadow filesystem to less than a dozen

      Ah. Your “shadow filesystem” is very different from mine, then. Mine is basically a copy-on-write map from {revision,path} pairs to the revision where the correct blob for each pair was created or last modified. That is, when path is a file. When it’s a directory, the entry value is an inner map object.
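
      Something like this toy, in other words (nothing like reposurgeon’s real PathMap, but it shows the shape): snapshots share structure, and a write copies only the directories along the touched path:

          class PathMap:
              def __init__(self, entries=None):
                  # name -> revision of last modification (file) or inner PathMap (directory)
                  self.entries = dict(entries) if entries else {}

              def snapshot(self):
                  # Shallow copy of this level only; children are shared until written.
                  return PathMap(self.entries)

              def set(self, path, revision):
                  node = self
                  parts = path.split("/")
                  for name in parts[:-1]:
                      child = node.entries.get(name)
                      # Copy-on-write: clone each directory before descending into it.
                      child = child.snapshot() if isinstance(child, PathMap) else PathMap()
                      node.entries[name] = child
                      node = child
                  node.entries[parts[-1]] = revision

              def get(self, path):
                  node = self
                  for name in path.split("/"):
                      if not isinstance(node, PathMap):
                          return None
                      node = node.entries.get(name)
                  return node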

      >(it turns out that git-svn corrupts data on the way into Git under a surprising variety of circumstances).

      Indeed. It can mangle repos of any real complexity pretty badly.

      >I didn’t bother looking at file data at all.

      Not necessary; the blob hashes carry equivalent distinguishability information.

  23. If I bring too much software to a fight with Moby Dick, it finds the weakest seams of my algorithms and explodes them. If I don’t bring enough software, Moby Dick crushes my machine with its sheer size. If I added any feature that required determining whether files were distinguishable, Moby Dick would find some way to turn that feature into a crippling weakness.

    So I had to be ruthless. My filter doesn’t look at files at all. It is utterly stateless with respect to files. If there is a need to visit a file twice (i.e. due to a SVN copy) the filter recomputes its previous decision about the earlier version of the file from the branch history, and tells fast-import how to find the file in Git.

    Looking at this again I can see that there are opportunities to make the filter even lighter, e.g. I could turn my shadow FS into a proper versioned filesystem and eliminate the special cases for branch alias merges and renames. Unfortunately such trade-offs are irrelevant now, as saving a mere hundred MB of RAM is no longer a helpful step toward solving the final problem.

    Now the bottleneck is git. fast-import makes only a token attempt at compression, so it uses a lot of disk space compared to SVN. I have to stop fast-import every dozen kilocommits to run git gc. git repack’s complexity is effectively O(n^2) with a very tiny constant term, but Moby Dick is very large. After git gc runs I have to drop .keep files on the git packs to prevent git gc from spending weeks repeatedly repacking them. The output git repo after repacking (even a full repack) is larger than the input SVN repo at around 250 kilocommits, not even half way through the SVN history.
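
    The gc-and-keep cycle itself is simple; a sketch (repository path handling simplified): dropping a .keep file next to each pack tells later repacks to leave that pack alone:

        import glob
        import os
        import subprocess

        def checkpoint_gc(repo):
            # Repack what fast-import has written so far...
            subprocess.check_call(["git", "gc"], cwd=repo)
            # ...then mark the resulting packs as kept so the next gc won't
            # spend its time repacking them all over again.
            for pack in glob.glob(os.path.join(repo, ".git", "objects", "pack", "pack-*.pack")):
                keep = pack[:-len(".pack")] + ".keep"
                if not os.path.exists(keep):
                    open(keep, "w").close()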

    That last phenomenon I have not seen in any other SVN-to-git translation I have done–usually the Git repo is hundreds of times smaller than SVN.

  24. @Zygo: “a 90GB, 700 kilocommit, 95 kilobranch, 144 megafile SVN repo”

    You may have considered this and dismissed it, but: Look at svndumptool’s ‘sanitize’ operation (which I contributed… wow, years ago now… time flies). For your case, you’d only want to “sanitize” the file data, but it would allow you to cut the GB count dramatically while you work out the structural issues in the repo. From your description, you’d be looking at 2 weeks for the svnadmin dump, and another 2 weeks for the sanitize operation, but you’d be left with a repo with all of the structural complexity of Moby Dick and significantly less of its bulk, potentially allowing faster iterations on your work. Once you can process Moby Dick’s skeleton, you may have a better shot at the rest of him.

    (And people look at me funny when I talk about my 1megacommit, 760GB svn repo… though it has none of the structural interest of your Moby Dick. :) )

    I also have some code I wrote to do transforms on svn dump files to prepare them for import by other tools. I really ought to clean that up and post it; it was getting to a point where you would specify a set of transforms so you could elide “oops, we deleted trunk” commits, auto-detect and re-arrange branches to be sane, etc. From your description of a shadow filesystem you’re likely past needing something of that sort, though.

    @esr: Speaking of svndumptool sanitize… The intent behind that feature was to allow someone with a troublesome svn repo containing proprietary data to be able to turn it into a repo that retained its full troublesome nature but none of its proprietary information. It could use an update to a modern hash function instead of md5, but when someone has an interesting repo you might be able to get them to send you a sanitized version of it that still exhibits its interesting properties.

    1. >Speaking of svndumptool sanitize…

      OK, that’s funny. Not knowing of this, I wrote almost exactly the same feature, for exactly the same reasons, in svncutter. It’s in the Subversion contrib directory.

  25. @Zygo @ESR I have very little experience in this kind of stuff, so forgive me if my question is silly, but isn’t there a way to determine which of the gazillion versions in this Moby Dick are the 10 or 20 or 30 or 100 the developers actively care about – the ones that get maintained or compiled – put those into git as manually as necessary, and forget about the rest? I think there is a limit to human capacity here; I don’t think developers remember thousands of versions and regularly say “aha, I need those pieces of code from the 9997th branch”, can they?

  26. > >For now I’ve given up. Moby Dick is still out there
    >
    > Zygo, find me on the #reposurgeon channel at freenode. We should discuss this.

    It warms my heart to see a hacker respond to a problem like, to quote Scott Adams, a starving chihuahua setting upon a pork chop :)

  27. @Zygo: What size is the svn dump that you feed to reposurgeon? Note that the reposurgeon tool isn’t really an SVN-to-Git converter; it is an SVN and git-fast-import stream parser/rewriter. That has profound implications on what strategies and optimizations are available to us.

    For instance, if the only goal was to produce a git repository, all blobs could be fed to git right away and referred to with their hashes (which would alleviate the need for a seekable dump and/or a manual file store of blobs). Also, the PathMap store (our shadow filemap) could be implemented with git trees, which would replace the copy-on-write mechanism quite nicely (it is not completely straightforward because we store more than a pointer to the file itself, but you get the point).

    Another point is that unfortunately Python is not that memory-efficient, especially since we memoize some operations to make them less prohibitive (we store both parent and child links even though the latter are computable from the former, we store “SVN revision”->commit maps, etc…)

    reposurgeon aims to apply heuristics to fix histories while converting from SVN, and to salvage as much branch/merge information as possible even when the SVN commits were not made correctly (e.g. it even tries to cater for straight cp’s followed by svn adds instead of svn copy, copying a dir by copying its contents instead of asking for a dir copy, etc…). If your case is simpler than that you don’t need all that complexity.

    Heck, if you don’t care about preserving merge information, you can even write a far simpler importer that dumps everything into master with branches as dirs 1:1 from SVN, then use filter-branch --subdirectory-filter.
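
    For instance (branch and directory names hypothetical), splitting one branch directory back out into its own history afterwards would look like:

        import subprocess

        # Make a ref to rewrite, then keep only the history of one branch directory,
        # with that directory promoted to the project root.
        subprocess.check_call(["git", "branch", "product-a", "master"])
        subprocess.check_call(["git", "filter-branch",
                               "--subdirectory-filter", "branches/product-a",
                               "product-a"])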

    Maybe in that case reposurgeon can even import your repository correctly (with “set svn_nobranch” before reading) and perform the subdirectory filtering itself. But the PathMap generation is mostly used to get enough info for the branching heuristics, so a less stateful ad-hoc tool might be the way to go anyway.

    Note that not so long ago the memory usage of reposurgeon was 10x higher… Imagine that! Also, a company in France offers dedicated servers with 1TB RAM, maybe you can use those :)
