A low-performance mystery: Sometimes you gotta simplify

This series of posts is increasingly misnamed, as there is not much mystery left about cvs-fast-export’s performance issues and it is now blazingly, screamingly, bat-out-of-hell fast. As in: both the threaded and unthreaded versions convert the entire history of groff (15593 CVS deltas in 1549 files) in 13 seconds flat. That would be about 10K CVS commits per minute, sustained; in practice the throughput will probably fall off a bit on very large repositories.

I achieved the latest doubling in speed by not succumbing to the temptation to overengineer – a trap that lies in wait for all clever hackers. Case study follows.

To review, for each master there’s a generation loop that runs to produce all its revision snapshots. Because CVS stores deltas in reverse (that is, the tip node contains the entire most recent revision, with the deltas composing backward to an empty file) the snapshots are emitted in reverse order.
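
A toy illustration of that shape, with invented types – the real code applies RCS edit scripts rather than storing whole revision texts, but the point is only that snapshots fall out newest-first:

    #include <stdio.h>

    /* CVS keeps the tip text whole plus a chain of backward deltas.  Here the
     * "delta" trivially carries the older text it reconstructs. */
    struct delta {
        const char   *older_text;
        struct delta *prev;          /* next-older delta, or NULL at the base */
    };

    int main(void)
    {
        struct delta r11 = { "revision 1.1\n", NULL };
        struct delta r12 = { "revision 1.2\n", &r11 };
        const char *tip = "revision 1.3\n";

        printf("snapshot: %s", tip);                 /* most recent first */
        for (struct delta *d = &r12; d; d = d->prev)
            printf("snapshot: %s", d->older_text);   /* then each older one */
        return 0;
    }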

These snapshots are then stashed in a temp directory to be picked up and copied out in the correct (canonical git-fast-export) order – forward, and just in time for the commits that reference them.

The reason for this generate-then-copy sequence (which doubles the program’s I/O traffic) was originally twofold. First, I wanted the output streams to look as much as possible like what git-fast-export would ship from an equivalent git repository. Second, if you’re going to make incremental dumping work (give me a stream of all commits since time T) you must use this canonical order. For some people this is a must-have feature.
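
For concreteness, canonical order in an export stream looks roughly like this – each blob lands just ahead of the first commit that references it (the file name, committer, and contents here are invented):

    blob
    mark :1
    data 6
    hello

    commit refs/heads/master
    mark :2
    committer Fred Example <fred@example.com> 1413400000 +0000
    data 15
    initial import
    M 100644 :1 hello.txt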

Then, when I added multithreading, the temp files took on a second role: they were a place for worker threads to drop snapshots without stepping on each other.

When I last posted, I was preparing to attempt a rather complicated change in the code. To get rid of the temp files, but preserve canonical ordering, my plan was to pick apart that generation loop and write a function that would just give me snapshot N, where N is the number of revisions from base. I could then use this to generate blobs on demand, without I/O traffic.

In preparation for this change, I separated snapshot generation from master analysis and moved it into stage 3, just before export-stream generation. When I did this, and profiled, I noticed a couple of things.

(1) The analysis phase became blisteringly fast. To the point where it could parse and analyze 15,000 CVS masters in less than a second. The output traffic to write the snapshots had been completely dominating not just the analysis computation but the input cost to read the masters as well.

(2) Snapshot writes were no longer threaded – and that didn’t make a damn bit of difference to the throughput. Snapshot generation – the most compute-intensive part of the program – was also completely dominated by I/O time. So the utility of the temp files began to look at best questionable.

(3) Threading stopped making any noticeable difference in throughput, either positive or negative.

Reality was trying to tell me something. The something was this: forget being clever about threading and incremental blob generation in core. It’s too complicated. All you need to do is cut the snapshot I/O traffic. Ditch the canonical dump order and ship the snapshots as they’re made – never do a copy.

Keep it simple, stupid!

That is what I did. I couldn’t give up on canonical order entirely; it’s still needed for incremental dumping, and it’s handy for testing. But the tool now has the following behavior:

* Below a certain size threshold (byte volume of master files) it defaults to dumping in canonical order, with temp file copies.

* Above that size, it dumps in fast order (all blobs first), no copying.

* There are -C and -F command-line options to force the dump style.

The threshold size is set so that canonical order is only defaulted to when the resulting dump will take almost no time even with the copies.
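
In sketch form the decision is nothing fancier than this (the threshold constant and the names are illustrative, not the program’s actual values):

    #define CANONICAL_THRESHOLD (256UL * 1024 * 1024)   /* assumed: bytes of masters */

    enum dump_style { DUMP_CANONICAL, DUMP_FAST };

    /* -C forces canonical order, -F forces fast order; otherwise pick by size. */
    static enum dump_style choose_dump_style(unsigned long master_bytes,
                                             int force_canonical, int force_fast)
    {
        if (force_canonical)
            return DUMP_CANONICAL;
        if (force_fast)
            return DUMP_FAST;
        return (master_bytes < CANONICAL_THRESHOLD) ? DUMP_CANONICAL : DUMP_FAST;
    }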

The groff repo is above the threshold size. The robotfindskitten repo is below it. So are my regression-test repos. Yes, I did add a regression test to check that canonical-order and fast-order conversions are equivalent!

And I think that brings this saga nearly to a close. There is one more optimization I might try – the bounded-queue-with-writer-thread thing some of my regulars suggested. I frankly doubt it’ll make a lot of difference; I’ll probably implement it, profile to show that, and then remove it to keep the code simple.

This does not, however, mean that the bucks people threw at the Help Stamp Out CVS In Your Lifetime fund were misdirected. I’m going to take the combination of cvs-fast-export and a fast engine to run it on in hand, and use it. I shall wander the net, hunting down and killing – er, converting – the last CVS repositories. Unlike hunting mammoths to extinction, this will actually be a good thing.

73 comments

  1. The last holdouts, as always, will be behind the walls of government “security”.

  2. Bravo.

    The morals I take from this story are

    1. Most computers are I/O bound, not compute bound

    2. If things are running slow, the speed bumps may not be where you think they are, and attempts to speed things up will have no effect or a negative effect because they aren’t addressing the problem.

    3. If things are running slow, analyze I/O first, and ensure you are doing only what is necessary, in the best way.

    I’m curious, though: how good is the sort of profiling normally done at telling you something is slow because it’s blocking on pending I/O, and that optimizing or eliminating I/O is where you need to focus? The biggest time sink in this effort seems to have been discovering I/O *was* the issue. Do you see possible changes in your profiling approach because of that?

    1. >The biggest time sink in this effort seems to have been discovering I/O *was* the issue.

      Actually, the biggest time sink was glibc’s thread-locking feature in stdio. It distorted everyone’s model of what was going on.

      >Do you see possible changes in your profiling approach because of that?

      Actually, I’d like to change my profiling style, but the tools I have make it difficult to measure a program’s I/O usage directly.
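
      On Linux the closest thing I know of is /proc/self/io, which reports bytes that actually hit storage, separately from what CPU profilers see. A sketch of the kind of report I mean (the field names are the kernel’s; the rest is illustrative):

        #include <stdio.h>
        #include <string.h>

        /* Print the storage-level I/O counters the kernel keeps per process. */
        static void report_io(void)
        {
            FILE *f = fopen("/proc/self/io", "r");
            char line[128];

            if (!f)
                return;                        /* not all kernels expose this */
            while (fgets(line, sizeof line, f))
                if (!strncmp(line, "read_bytes:", 11) ||
                    !strncmp(line, "write_bytes:", 12))
                    fputs(line, stderr);
            fclose(f);
        }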

    1. >Is ‘stage 2’ still considered BFM? And is it worth still trying to scry the inscrutable?

      Oh hell yes. It’s still a maintenance worry that that code is still opaque.

  3. Ohh, William, don’t get me started…

    I do kinda wonder if this is not the last we’ve heard of the temp file approach. That might be necessary FAIK when commit repos reach a certain size relative to the computer’s memory. OTOH, it beat groff, so maybe not. (Plus, I confess I don’t fully understand what all needs to be kept in memory – if the program can forget everything about a commit entry as soon as it’s converted, then we’re fine in perpetuity.)

  4. I/O speed’s enough of a big “look here first!” thing that I’m mildly surprised that it wasn’t looked at first. I’m no computer architecture guru, but “disk is orders of magnitude slower than RAM” was pounded into every lower division CS major’s head in college. I wasn’t even paying enough attention to it when I was following the previous threads; I assumed it had been looked at and discarded as a candidate. Unintentional error cascade….

  5. Paul, even if you do wind up flooding RAM, swapping I/O is not going to be any slower than file I/O, and in the usual case, you’re still winning – and you can solve the swapping I/O problem by throwing more RAM at the system.

  6. I could be wrong, but ISTR swapping I/O could be slower than file I/O, if the swapping algorithm were reasonable but the algorithm it’s swapping for just happened to be pathologically incompatible with it. Certainly agreed with the general point though.

  7. > To get rid of the temp files, but preserve canonical ordering ordering

    I don’t think this is an error, but might read a bit better with “canonical order ordering”

  8. @esr

    And I think that brings this saga nearly to a close.

    How about removing the threading? It is apparently not helping and makes the code more complex. Is there any possibility that threading will cause a problem in some cases?

    There seems to be a problem with the fund total – like you only counted $50 of the $250 donation…

    2014-10-13 at 12:14:59 +$50
    2014-10-13 at 12:18:34 $225 total
    2014-10-15 at 00:48:13 +$25
    2014-10-15 at 04:06:20 +$10
    2014-10-15 at 04:12:20 +$50 $310
    2014-10-15 at 07:46:42 +$250
    2014-10-15 at 12:35:09 +$20 $380 (+$200 uh… virtual?)
    2014-10-15 at 12:36:31 +$10
    2014-10-15 at 13:04:13 +$10 $400
    2014-10-15 at 21:53:05 +$10
    2014-10-16 at 04:58:15 +$23
    Totals $433 + $200 = $633

    In any case, you have one hell of a lot of followers on this blog and you got 11 donations. Part of this may be that many people may have stopped looking at “A low-performance mystery…” posts. A reference to your fund on a new post with a totally different name may help a lot.

  9. @Brian Marshall

    Here’s how I count the $$$:

    Name Amt Date Reported (blog timestamps)
    me $ 100 2014-10-12 at 22:19:34
    you 50 2014-10-13 at 00:14:17
    Andrew Piskorski 50 2014-10-13 at 12:14:59
    Michael Ciagala 25 2014-10-13 at 12:18:34
    James Richardson 25 2014-10-15 at 00:48:13
    Duncan Bayne 10 2014-10-15 at 04:06:20
    Phillip Kopp 50 2014-10-15 at 04:12:20
    Jason Azze 250 2014-10-15 at 07:46:42
    Robert Conley 20 2014-10-15 at 12:35:09
    Steven Wright 10 2014-10-15 at 12:36:31
    Phillip Rhodes 10 2014-10-15 at 13:04:13
    Daniel Sharpe 10 2014-10-15 at 21:53:05
    ‘A’ 23 2014-10-16 at 04:58:15
    TOTAL $ 633 2014-10-16 at 15:05 (my wall time – EDT in the USA)

    So, yeah, Eric under-reported (I’m sure unintentionally).

    P.S. – I’ve tried to format this within a “<PRE>” block – we’ll see how this actually looks. (ERIC – YOU NEED A “PREVIEW” BUTTON!)

    1. >So, yeah, Eric under-reported (I’m sure unintentionally).

      Arithmetic fail. Oh well, at least I got everyone’s name right.

      This is a happy mistake. Because (a) I had decided that my budget for this is 2 x what people threw in, and (b) a configuration I have my eye on is just about $1000.

  10. @esr: So, your success means that you no longer need to upgrade to that super-fast, multicore, multigig-cache cray-busting replacement for your present machine….Eric, why are you looking at me that way?…ERIC!….put that sword down!…ERIC! STOP!!!!

    1. >ERIC!….put that sword down!…ERIC! STOP!!!!

      I just rsynced the entire NetBSD CVS. It’s 11 gigabytes. Of CVS. Yes, I believe I will have a use for the monster machine.

  11. @ John

    Yeah – I just started counting two posts back, which didn’t include your donation or mine, but I caught esr’s first total. And, in my editing, I didn’t write the $25 that put the total to $225. In any case, our totals match.

    I formatted using tabs (I was using my email Compose as a spell-checking editor). Lemme try a little table here…

    ABC123
    DEF456

    1. >Have you considered using Brendan Gregg’s USE method?

      That looks interesting, but it’s oriented towards tuning a system for a continuous job load rather than finding the I/O or computing hot spots in individual applications.

  12. Francis Turner donates $42, bringing the corrected total to $675. Thank you, Francis; I shall buy extra RAM in your honor.

  13. > I just [rsynced] the entire NetBSD CVS. It’s 11 gigabytes. Of CVS.

    Oh. My. Sweet. Fornicating. Goddess.

    HEEELLLLLOOOOOOO, NetBSD! Welcome to the 21st Century!

    1. >HEEELLLLLOOOOOOO, NetBSD! Welcome to the 21st Century!

      Yes. When I spoke of hunting down and killing the last few giant CVS repos I wasn’t kidding.

      So, my plan is: Solve any technical problems with the conversion, then go to the NetBSD folks and say “When would you like to switch? I can push a button.”

      Part of what’s been influencing my thinking is that one of the NetBSD guys actually took an informal swing at this with me about a year ago. His cvs-fast-import instance ran out of memory. And now you know why I want a machine I can stuff unholy amounts of RAM into.

  14. > His cvs-fast-import instance ran out of memory.

    Ran out of memory or out of address space? How much swap was available? (And, I know it might not be [i]ordinarily[/i] advisable to use an SSD as swap, but…)

  15. > … and say “When would you like to switch? I can push a button.”

    “I felt a great disturbance in the ‘Net, as if millions of ,v files suddenly cried out in terror, and were suddenly silenced. I fear something wonderful has happened.”

  16. ESR: I just [rsynced] the entire NetBSD CVS. It’s 11 gigabytes. Of CVS.

    John D. Bell: Oh. My. Sweet. Fornicating. Goddess.

    HEEELLLLLOOOOOOO, NetBSD! Welcome to the 21st Century!

    And just in time for Halloween!

  17. > It’s 11 gigabytes. Of CVS.

    This might have been answered in a previous post, but just what is the memory footprint like, relative to the size of the repo you’re working on?

    > I know it might not be [i]ordinarily[/i] advisable to use an SSD as swap, but…

    Something like this would be fun if it still existed: http://www.newegg.com/Product/Product.aspx?Item=N82E16815168001

    Unfortunately that’s several generations old, and currently useless. As far as I can tell no one else has taken up the concept, either. I suppose the market for “tons and tons and *tons* of RAM-speed storage in a consumer system” is too small to support the idea.

    I wonder if this is the sort of task where you’re better off renting time on a server than trying to build something that can do it. Compute time on a 512GB monstrosity runs a couple bucks an hour at my employer.

    I would probably go the build route myself anyway. It’s more fun that way.

  18. Hi, non technical reader here. Still really enjoyed this series, so thanks. Was waiting for payday to donate to the Help Stamp Out CVS In Your Lifetime fund.

  19. @esr:
    >This is a happy mistake. Because (a) I had decided that my budget for this is 2 x what people threw in, and (b) a configuration I have my eye on is just about $1000.

    Given that my target price for my current rig was also around $1k (to see how much computer I could get for the price of my old laptop), do you mind if I ask what configuration you’re looking at?

  20. @Jon Brase, et al –

    > do you mind if I ask what configuration you’re looking at?

    I and another A&D regular are helping Eric spec out a new system. I expect he will be shortly blogging about what this is, and soliciting suggestions for improvements.

    1. >I expect he will be shortly blogging about what this is, and soliciting suggestions for improvements.

      That’s the plan, yes.

      Donations are still coming in. Most recently from Jeremiah Shepherd, who threw $10 in the kitty.

  21. > Part of what’s been influencing my thinking is that one of the NetBSD guys actually took an informal swing at this with me about a year ago. His cvs-fast-import instance ran out of memory. And now you know why I want a machine I can stuff unholy amounts of RAM into.
    One technique I’ve heard about (but never tried) is using zlib to compress data in-memory to reduce RAM footprint. You might end up having to do that eventually.
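
    Purely as an illustration of the idea – a one-shot round trip through zlib’s compress()/uncompress(), nothing cvs-fast-export actually does today:

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <zlib.h>

      int main(void)                            /* build with -lz */
      {
          const char *text = "the same boilerplate delta text compresses very well";
          uLong src_len = (uLong)strlen(text) + 1;

          uLongf comp_len = compressBound(src_len);      /* worst-case output size */
          Bytef *comp = malloc(comp_len);
          if (!comp || compress(comp, &comp_len, (const Bytef *)text, src_len) != Z_OK)
              return 1;

          uLongf out_len = src_len;
          Bytef *out = malloc(out_len);
          if (!out || uncompress(out, &out_len, comp, comp_len) != Z_OK)
              return 1;

          printf("%lu bytes -> %lu compressed, round-trips OK: %s\n",
                 (unsigned long)src_len, (unsigned long)comp_len,
                 strcmp((char *)out, text) ? "no" : "yes");
          free(comp);
          free(out);
          return 0;
      }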

  22. @Max E:
    >One technique I’ve heard about (but never tried) is using zlib to compress data in-memory to reduce RAM footprint. You might end up having to do that eventually.

    The kernel actually supports swapping to a compressed ramdisk before bumping pages to disk as an experimental feature (see Documentation/vm/zswap.txt).

  23. @esr:
    >His cvs-fast-import instance ran out of memory. And now you know why I want a machine I can stuff unholy amounts of RAM into.

    Do you know how much RAM he ran out of?

  24. For clarity – if groff takes 13 seconds now, how long did it take before these changes?

    1. >For clarity – if groff takes 13 seconds now, how long did it take before these changes?

      90 to 97 seconds, depending on the machine’s total I/O load (it got significantly slower when mlocate was running).

      So that’s about a factor of seven speedup. Not too shabby.

  25. Are the hairy dense inscrutable parts of the code also still significant performance hotspots?

    1. >Are the hairy dense inscrutable parts of the code also still significant performance hotspots?

      No. They’re opaque, but they’re so fast they don’t get above the profiler’s noise level.

      I should qualify that. Before this hacking run, there were two parts of the code I didn’t understand. One was the black-magic branch merge code. The other was the machinery for integrating delta sequences from masters into snapshots.

      The delta-assembly code is actually the worst computational hotspot in the program, consistently accounting for about 40-45% of the non-IO running time. But compared to the I/O transaction time it turns out to be almost down to noise level itself. Furthermore, having refactored the bejesus out of it in order to make all the data structures thread local, I actually pretty much understand it now, and have performed several intelligent mods on it.

      So it would be more exact to say that the one remaining opaque bit – the branch-merge black magic in stage 2 – is not a performance problem, and never has been.

  26. One William Nickless has said “Enjoy your hot new magic elf box!” and donated $25 to the Help Stamp Out CVS In Your Lifetime fund, bringing the total to $710 (unless I made another arithmetic error).

    Thank you, William. Though I’m not really thinking of it as a magic elf. “To mega therion” is more like it. What rough beast, its hour come round at last, slouches towards Malvern to be booted?

  27. > What rough beast, its hour come round at last, slouches towards Malvern to be booted?

    Malvern? Dangit, now I’m missing the Flying Pig, darn you.

    1. >Malvern? Dangit, now I’m missing the Flying Pig, darn you.

      The Flying Pig is less than a mile from my house, dude. It would be my neighborhood bar, if I drank. For the rest of you: it’s a very well-run, wholesome place, not unlike the better grade of English pub.

  28. So, did the excursion into multithreading improve speed intentionally, or just accidentally (by causing better understanding of the compose part, and the associated mods and speedups)?

    1. >So, did the excursion into multithreading improve speed intentionally, or just accidentally

      Accidentally. It turned out that the job load was so I/O-dominated that threading didn’t yield measurable performance gains on my machine – something I would have figured out much sooner if not for the mystery slowdown in stage 3 due to glibc’s thread-safety feature.

      However – I think it’s still possible that threading might be a win on the 8-core beast I’m thinking about building. I’m doing a (non-threaded) trial run now on the 11GB netbsd repo, and the part of the analysis phase that’s multithreaded actually takes significant time.

      Also, the massive elimination-of-globals thing I did to support threading turned out to be all kinds of good for the code. It was an essential prerequisite for the change I made later, separating analysis from snapshot generation.

      1. I said:

        >the part of the [NetBSD] analysis phase that’s multithreaded actually takes significant time.

        Analyzing masters…done, 2650212 total revisions (1153.038787sec)
        Find heads…303436 of 303436(100%) (153.941831sec)
        Sorting…done (87.051062sec)
        Find branch parent relationships…done (50.127518sec)
        Merge common branches…0 of 303436(0%)

        So, that’s 24 minutes for the fastest part. The branch merge analysis has been running for a bit less than an hour; it took only a couple hundredths of a second on my benchmark repos, but I have a bad feeling that something is O(n**2) or possibly O(n**3) in there and has blown up big-time.

        My machine’s a bit sluggish at the moment…

  29. > I have a bad feeling that something is O(n**2) or possibly O(n**3) in there and has blown up big-time.

    Do I understand it correctly that it is in “black magic” section, so you cannot examine code (or algorithm) to find out which is which? Perhaps you could plot the number of revisions (for a synthetic repo, and for real repos) against the time this stage takes and fit a polynomial, or use a log-log scale and do a linear fit (y = a*x^n, so log y = n*log x + log a).
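
    Something like this would do the fit – the sample numbers are made up, just to show the mechanics:

      #include <math.h>
      #include <stdio.h>

      int main(void)                            /* build with -lm */
      {
          /* (revisions, seconds) pairs -- hypothetical measurements */
          double N[] = { 15593, 60000, 250000, 1000000 };
          double t[] = { 0.02,  0.4,   12.0,   900.0 };
          int    k   = sizeof N / sizeof N[0];

          double sx = 0, sy = 0, sxx = 0, sxy = 0;
          for (int i = 0; i < k; i++) {
              double x = log(N[i]), y = log(t[i]);
              sx += x; sy += y; sxx += x * x; sxy += x * y;
          }
          double n = (k * sxy - sx * sy) / (k * sxx - sx * sx);  /* exponent */
          double a = exp((sy - n * sx) / k);                     /* constant */
          printf("t ~= %.3g * N^%.2f\n", a, n);
          return 0;
      }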

    1. >Do I understand it correctly that it is in “black magic” section, so you cannot examine code (or algorithm) to find out which is which?

      It’s spang in the middle of the black-magic part, lines 806-819 of revlist.c. I comprehend enough to be pretty sure it’s O(n**3). Could be worse depending on what rev_find_head() is doing. Yuck.

  30. > It’s spang in the middle of the black-magic part, lines 806-819 of revlist.c. I comprehend enough to be pretty sure it’s O(n**3). Could be worse depending on what rev_find_head() is doing. Yuck.

    It might be worse: rev_find_head() looks to be O(n) inside the O(n**2) loop but rev_branch_merge() looks to be more than O(n**2) inside the O(n) loop. I see what you mean about black-magic, the individual pieces look simple enough but I need a map to understand the complete picture. A blind analysis could be: for the various search methods log parameters and return values; identify whether there is a lot of repeated computation. There’s probably a dynamic programming solution lurking in the dark there.

    1. >There’s probably a dynamic programming solution lurking in the dark there.

      Maybe, but somebody has to comprehend it first. :-(

  31. >> It’s spang in the middle of the black-magic part, lines 806-819 of revlist.c. I comprehend enough to be pretty sure it’s O(n**3). Could be worse depending on what rev_find_head() is doing. Yuck.

    > It might be worse: rev_find_head() looks to be O(n) inside the O(n**2) loop but rev_branch_merge() looks to be more than O(n**2) inside the O(n) loop. I see what you mean about black-magic, the individual pieces look simple enough but I need a map to understand the complete picture. A blind analysis could be: for the various search methods log parameters and return values; identify whether there is a lot of repeated computation. There’s probably a dynamic programming solution lurking in the dark there.

    I wonder if naive/brute-force memoization (algorithm-independent) could help here, trading for example O(n) memory for O(n) time…

    1. >I wonder if naive/brute-force memoization (algorithm-independent) could help here, trading for example O(n) memory for O(n) time…

      Maybe. Meanwhile, the OOM killer ended this run while I was sleeping.

    1. >Is it known where the memory goes?

      The very question I’m asking myself, Jakub. Fortunately the program uses a set of malloc wrappers that would make it trivial to log fine-grained allocation statistics. On my to-do list.
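
      The pattern would be roughly this – a counting layer over malloc(), with invented tag names; the real wrappers differ in detail:

        #include <stdio.h>
        #include <stdlib.h>

        enum alloc_tag { TAG_REVISION, TAG_BRANCH, TAG_SNAPSHOT, TAG_MAX };

        static size_t bytes_by_tag[TAG_MAX];

        /* Allocate or die, and charge the bytes to a category. */
        static void *xmalloc_tagged(size_t size, enum alloc_tag tag)
        {
            void *p = malloc(size);
            if (!p) {
                fprintf(stderr, "out of memory\n");
                exit(1);
            }
            bytes_by_tag[tag] += size;
            return p;
        }

        static void report_allocations(void)
        {
            static const char *names[TAG_MAX] = { "revisions", "branches", "snapshots" };
            for (int i = 0; i < TAG_MAX; i++)
                fprintf(stderr, "%-10s %zu bytes\n", names[i], bytes_by_tag[i]);
        }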

  32. So now I’m reading about the NetBSD challenge and wondering if there’s a way to break it down into chunks. Last time I hit a problem like this, where memory and/or CPU ran out no matter what I did, I was able to find some good ways to partition the data into chunks that were independent. In that case it was taking some analysis of the ipv4 address space and breaking the 32 bit space down into the 220ish actual 24 bit /8s, doing the analysis on each /8 and then gluing the lot together again at the end. If /8s hadn’t been enough I could have broken it up again into /9s etc., but as it turned out /8 was enough to avoid the swap balloon.

    In your case maybe you could break it down by year? Do the converts for year N, N+1 etc. and then have a glue function that connects the years together

    1. >In your case maybe you could break it down by year? Do the converts for year N, N+1 etc. and then have a glue function that connects the years together

      CVS has some … let’s say “incoherence” … properties that would make that dicey. Besides, because of the reverse-delta representation it turns out you have to compute almost everything in year N to be able to do year N-1.

  33. > It would be my neighborhood bar, if I drank.

    And for those of you who like beer, they have a great selection in general, and Belgians in particular.

    We now return you to the code brewing…

  34. CVS has some … let’s say “incoherence” … properties that would make that dicey. Besides, because of the reverse-delta representation it turns out you have to compute almost everything in year N to be able to do year N-1.

    Oh well it was an idea. I still think divide and conquer is going to be a winning strategy, we just need to partition on something else. Can you split it up by sub-directory tree maybe? Yes I get there will be occasional commits that include files in multiple parts of the tree but I figure most of them won’t and the ones that do would leave you with multiple commits with the same metadata which should be fairly straight forward to merge at the end.

    1. >Yes I get there will be occasional commits that include files in multiple parts of the tree but I figure most of them won’t and the ones that do would leave you with multiple commits with the same metadata which should be fairly straight forward to merge at the end.

      That is an interesting idea. It might run into serious trouble near an obscure CVS feature called “vendor branches”, where merging in patches scattered across the tree is rather the point. I’ll try running some tests, though.

  35. “Do I understand it correctly that it is in “black magic” section, so you cannot examine code (or algorithm) to find out which is which?”

    esr:
    “It’s spang in the middle of the black-magic part, lines 806-819 of revlist.c. I comprehend enough to be pretty sure it’s O(n**3).”

    Actually, this may be good news. At this point, you have achieved a sevenfold increase in speed, and there is probably little more speedup available outside of the black magic section. You have already tremendously increased the scope of the repositories that can be handled in reasonable time.

    The downside, of course, is that you are running out of things to work on that don’t involve reverse-engineering the black-magic section…if that’s even worthwhile now.

    1. >The downside, of course, is that you are running out of things to work on that don’t involve reverse-engineering the black-magic section…if that’s even worthwhile now

      Well, I’m clearly not going to be able to convert NetBSD otherwise….

  36. @Eric:
    >>Do you know how much RAM he ran out of?
    >Alas, no.

    My machine has 32 GB of RAM and a bit more than a terabyte and a half of swap, with enough unused drive space that I could push it to at least 3 TB in a pinch (After having been fairly tight on space on my old laptop, I decided to provision my new machine with enough drive space to have tons of free space for the life of the machine. I prob^H^H^H^H definitely overdid it a bit. The swap is there because I have tons of unused disk space, not because I need it).

    Do you think working on the NetBSD repository would OOM in a terabyte or three, or would take impossibly long swapping in and out of 32 GB of actual core?

    1. >Do you think working on the NetBSD repository would OOM in a terabyte or three, or would take impossibly long swapping in and out of 32 GB of actual core?

      I don’t know. I’m planning to try implementing the malloc wrappers to see where the memory goes in a more fine-grained way. That may give me more insight.

  37. My machine has 32 GB of RAM and a bit more than a terabyte and a half of swap, with enough unused drive space that I could push it to at least 3 TB in a pinch (After having been fairly tight on space on my old laptop, I decided to provision my new machine with enough drive space to have tons of free space for the life of the machine. I prob^H^H^H^H definitely overdid it a bit. The swap is there because I have tons of unused disk space, not because I need it).

    BTDTGTTS

    No really you will end up in swap hell if you do this with the wrong algorithm/dataset.

    There are times when finding a server with %silly%GB of RAM dedicated to you works, but in almost all cases I’ve come across, when a task ends up in the “swap balloon” the answer is to figure out the malloc()/garbage-collect problem, not throw more memory at it.

  38. I haven’t had a chance to look at larger repositories yet, but I wrapped it with callgrind while importing the groff repository. The largest activity outside of blob handling was the sorting activity within rev_pack_files. I made heavy use of the callgrind tool of valgrind and kcachegrind for visualisation while optimising svn-dump-fast-export. I strongly recommend them.

    1. >The largest activity outside of blob handling was the sorting activity within rev_pack_files.

      Yeah, that makes sense. I’ll bear in mind this as a hot spot to be flattened if possible.

      Actually I have some doubts about that whole section of code. I’m not certain what Keith thought he was accomplishing by re-packing the directory structures after the branch merge; after all, at that point the allocation cost of the unpacked directories had already been paid.

      I’m not entirely out of optimization tricks. One thing I’m doing now is trying to eliminate malloc calls for individual objects in favor of slab allocations that make entire per-master-file arrays of them – because often we can know early in the analysis exactly how many objects of any given type there will be. Doing this is time efficient, but more importantly it may significantly reduce working set for very large repositories.
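
      In outline the pattern is this – illustrative types, not the program’s own: one allocation per master for each object type, then hand objects out of the array.

        #include <stdlib.h>

        struct node { int dummy; };        /* stand-in for a per-revision object */

        struct node_slab {
            struct node *pool;             /* one allocation covers every node  */
            size_t       next, count;      /* of this type for one master file  */
        };

        static int slab_init(struct node_slab *s, size_t count)
        {
            s->pool = calloc(count, sizeof *s->pool);
            s->next = 0;
            s->count = count;
            return s->pool != NULL;
        }

        static struct node *slab_alloc(struct node_slab *s)
        {
            return (s->next < s->count) ? &s->pool[s->next++] : NULL;
        }

        static void slab_free_all(struct node_slab *s)
        {
            free(s->pool);                 /* one free releases the whole slab */
            s->pool = NULL;
        }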

  39. I wonder if it would be possible to use OpenMP to blindly parallelize the “black box” section…

    #pragma omp parallel for
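
    For instance, something like this toy – which only parallelizes cleanly because each iteration is independent of the others:

      /* build with -fopenmp */
      #include <omp.h>
      #include <stdio.h>

      int main(void)
      {
          enum { N = 1000000 };
          static double a[N];

          #pragma omp parallel for
          for (int i = 0; i < N; i++)
              a[i] = i * 0.5;              /* each iteration touches only a[i] */

          printf("a[42] = %g\n", a[42]);
          return 0;
      }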

    1. >I wonder if it would be possible to use OpenMP to blindly parallelize the “black box” section…

      Probably not. We know that what it’s doing is a rendezvous among the trees representing the masters.

      The good news is that I think I’m really starting to understand the input data structures in a way I didn’t even a week or so ago. Figuring out what it’s actually doing still seems … unlikely, but no longer impossible.

  40. @ESR
    >CVS has some … let’s say “incoherence” … properties that would make that dicey. Besides, because of the reverse-delta representation it turns out you have to compute almost everything in year N to be able to do year N-1.

    Then can you start with 2014 and work backwards in year chunks?

    1. >Then can you start with 2014 and work backwards in year chunks?

      I’m not at all sure. CVS is full of traps for people who think they can get clever with it. I don’t know there’s one here, but my instincts are screaming “Run! Run away!”.
