Black magic and the Great Beast

Something of significance to the design discussion for the Great Beast occurred today.

I have finally – finally! – achieved significant insight into the core merge code, the “black magic” section of cvs-fast-export. If you look in merge.c in the repo head version you’ll see a bunch of detailed comments that weren’t there before. I feel rather as Speke and Burton must have when after weeks of hacking their way through the torrid jungles of darkest Africa they finally glimpsed the source of the Nile…

(And yes, that code has moved. It used to be in the revlist.c file, but that now contains revision-list utility code used by both stages 1 and 2. The black magic has moved to merge.c and is now somewhat better isolated from the rest of the code.)

I don’t grok all of it yet – there’s some pretty hairy and frightening stuff happening around branch joins, and my comprehension of edge cases is incomplete. But I have figured out enough of it to have a much better feel than I did even a few days ago for how it scales up.

In particular I’m now pretty sure that the NetBSD attempt did not fail due to an O(n**2)/O(n**3) blowup in time or space. I think it was just what it looked like: straight-up memory exhaustion, because the generated gitspace commits wouldn’t fit in 4GB. Overall scaling for the computational part (as opposed to I/O) looks to me like it’s roughly:

* O(m**2) in time, with m more related to maximum revisions per CVS master and number of branches than total repo or metadata volume.

* O(n) in space, where n is total metadata volume. The thing is, n is much larger than m!

This has implications for the design of the Great Beast. To match the implied job load, yes, serial computation speed is important, but the power to rapidly modify data structures of more than 4GB extent even more so. I think this supports the camp that’s been arguing hard for prioritizing RAM and cache performance over clock speed. (I was leaning that way anyway.)

My estimate of O(n) spatial scaling also makes me relatively optimistic about the utility of throwing a metric buttload of RAM at the problem. I think one of the next things I’m going to do is write an option that reports stats on memory usage after stages 1 and 2, run it on several repos, and see if I can curve-fit a formula that predicts the stage 2 figure given stage 1 usage.
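I’m not committing to an option name or output format yet, but the fitting step itself is trivial. Here’s a minimal sketch in C, assuming the measurements are collected by hand as one “stage1-bytes stage2-bytes” pair per line; the linear model is my guess at the right shape, not anything the tool currently promises:

    /* fitmem.c: least-squares fit of stage-2 memory use against stage-1 use.
     * Input: one "stage1_bytes stage2_bytes" pair per line on stdin, one
     * pair per repository measured.  Output: the coefficients of the linear
     * model  stage2 = a * stage1 + b,  usable to predict stage-2 memory
     * from a cheap stage-1-only run.
     */
    #include <stdio.h>

    int main(void)
    {
        double x, y, sx = 0, sy = 0, sxx = 0, sxy = 0;
        int n = 0;

        while (scanf("%lf %lf", &x, &y) == 2) {
            sx += x;  sy += y;
            sxx += x * x;  sxy += x * y;
            n++;
        }
        if (n < 2) {
            fprintf(stderr, "fitmem: need at least two measurements\n");
            return 1;
        }

        /* ordinary least squares for a straight line */
        double denom = n * sxx - sx * sx;
        if (denom == 0) {
            fprintf(stderr, "fitmem: all stage-1 values identical, cannot fit\n");
            return 1;
        }
        double a = (n * sxy - sx * sy) / denom;
        double b = (sy - a * sx) / n;

        printf("stage2 ~= %.3f * stage1 + %.0f bytes\n", a, b);
        return 0;
    }

If the fit comes back close to linear with a small intercept, that’s more evidence for the O(n) space estimate above.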

Even without that, I think we can be pretty confident that the NetBSD conversion won’t break 32GB; the entire repo content is 11GB, so the metadata has to be significantly smaller than that. If I understand the algorithms correctly (and I think I do, now, to the required degree) we basically have to be able to hold the equivalent of two copies of the metadata in memory.

(In case it’s not obvious, I’m using NetBSD as a torture test because I believe it represents a near worst case in complexity.)

I’m also going to continue working on shrinking the memory footprint. I’ve implemented a kind of slab allocation for the three most numerous object classes, cutting malloc overhead. More may be possible in that direction.
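For anyone who hasn’t met the technique: what follows is not the actual cvs-fast-export allocator, just a minimal sketch of the general slab idea with names and a chunk size invented for illustration. The point is that one malloc() call services thousands of object requests, so the per-object bookkeeping and fragmentation overhead of the C library allocator is paid only once per chunk:

    /* Sketch of slab allocation for one heavily-used object class.
     * Objects are carved out of large chunks and never freed individually;
     * the whole chain is released at end of run.  Assumes objsize is a
     * multiple of the platform alignment, as struct sizes normally are.
     */
    #include <stdlib.h>
    #include <stddef.h>

    #define SLAB_OBJECTS 4096        /* objects per malloc() call */

    typedef struct slab {
        struct slab *next;           /* chain of chunks already allocated */
        size_t used;                 /* objects handed out from this chunk */
        char objects[];              /* SLAB_OBJECTS * objsize bytes */
    } slab;

    typedef struct {
        size_t objsize;              /* size of one object of this class */
        slab *head;                  /* chunk currently being carved up */
    } slab_class;

    void *slab_alloc(slab_class *c)
    {
        if (c->head == NULL || c->head->used == SLAB_OBJECTS) {
            slab *s = malloc(sizeof(slab) + SLAB_OBJECTS * c->objsize);
            if (s == NULL)
                return NULL;
            s->next = c->head;
            s->used = 0;
            c->head = s;
        }
        return c->head->objects + c->head->used++ * c->objsize;
    }

    void slab_free_all(slab_class *c)
    {
        for (slab *s = c->head, *next; s != NULL; s = next) {
            next = s->next;
            free(s);
        }
        c->head = NULL;
    }

With, say, a 40-byte object and 16 bytes of malloc overhead per allocation, this recovers on the order of 30% of the memory those objects would otherwise cost.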

So, where this comes out is I’m now favoring a design sketch around 1.35V ECC RAM and whichever of the Xeons has the best expected RAM cache performance, even if that means sacrificing some clock speed.

50 comments

  1. Many Xeon models will dynamically “overclock” individual cores when inactivity on the rest of the die makes that thermally feasible – you may find you’re not giving up nearly as much as you might expect in the low-thread-count case.

    1. >Many Xeon models will dynamically “overclock” individual cores when inactivity on the rest of the die makes that thermally feasible – you may find you’re not giving up nearly as much as you might expect in the low-thread-count case.

      That’s good to hear. How do we find out if an individual model has this feature?

  2. Intel’s terminology for this is “Turbo Boost”, so that would be the thing to look for.

    With the caveat that I haven’t tracked this stuff closely for a couple of years: I find the detailed results at spec.org useful, along with their system configuration summaries, e.g.: http://spec.org/cpu2006/results/res2014q4/cpu2006-20140909-31376.html

    It’s often instructive to look at the difference in the (raw, unscaled) scores between cint2006 (single-threaded) and cint_rate2006 (multithreaded) for a given processor, and the differences between different sub-benchmarks. That said, this benchmark is probably a bit obsolete now, and is possibly too L3-cache-friendly on modern systems to match your workloads. Caveat emptor and all that.

    One other quick thing, about memory speed: filling all the slots will usually give the highest throughput and be the cheapest way to reach a given RAM size, but it means getting rid of old low-capacity RAM when you upgrade… and on a few of these systems it may be true that you can run RAM at the highest clock rate only if you don’t use all the slots (the furthest are too far away!)

  3. Taking a quick check… it looks like all the Intel CPUs mentioned in the previous discussion do support Turbo Boost; certainly all i7s do, and most of the E3 and E5 models.

  4. It’s pretty universal across the product line, at least as far as I’ve ever looked. Not sure if this has come up recently, but one thing I always like to do is go direct to the source – ark.Intel.com.

  5. For cooling: check the “Arctic Freezer” and “Arctic Alpine” lines.

    Additionally, always get a power supply that’s overrated by about 25% over what you think you could possibly need. This is particularly important when running low-voltage components; an overpowered PSU usually keeps borderline brownouts from other system loads from causing trouble in the low-power elements. Plus it’s less hassle later if you decide to throw some SLI cards in the system for multi-core GPU crunching. :-)

    A link to Newegg’s selection of Arctic coolers. Check reviews for quiet ratings. I use one, don’t recall the model but it’s very quiet compared to the GeForce 660 fans when they are running full blast.

    http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&DEPA=0&Order=BESTMATCH&Description=Arctic+CPU+cooler&N=-1&isNodeId=1

  6. Um, Dzmitry, ESR is running git (cf gitorious link).

    ESR: Did you have a CatB-style great ringing crash in your head when enlightenment dawned?

    1. >ESR: Did you have a CatB-style great ringing crash in your head when enlightenment dawned?

      Not this time. :-)

    1. >I’d guess the Gentoo CVS repo might be larger than NetBSD’s. And they’re in the process of migrating to Git. Maybe you can help them out?

      I’d certainly be willing to. Who would you recommend I contact with an offer?

  7. Either there is something funky with whitespace (tabs versus spaces) in merge.c and/or its rendering on Gitorious, or you have an inner if inside a for loop that isn’t indented – it’s aligned flush with the for loop it’s in…

    1. >Either there is something funky with whitespace (tabs versus spaces) in merge.c and/or its rendering on Gitorious, or you have an inner if inside a for loop that isn’t indented – it’s aligned flush with the for loop it’s in…

      Not jumping out at me in Emacs. What lines?

  8. I wonder if the EWAH bitmap that JGit implemented, and that was later ported to Git (to speed up object enumeration), would help with speed; though if it is memory-bound it might not be worth it.

    1. >Probably a Gitorious web interface problem, see e.g. lines 33 and 34:

      Looks like it’s screwing up on tabs. The file is OK in Emacs.

  9. > a metric buttload

    That made me laugh. Is a metric buttload larger or smaller than an Imperial buttload?

  10. Argh, posted too quickly. It would have been funner to ask “Is a metric buttload larger or smaller than an Imperial shit-ton?”

  11. “Additionally, always get a power supply that’s overrated by about 25% over what you think you could possibly need.”

    Yes, and I speak from experience here. The last time I had a PC custom built, I upgraded one level from the basic power supply knowing that it was good to have margin. But I really didn’t have a lot of accessory options, so why go higher?

    I ended up having a system failure because my motherboard + video card drew more than the supply could easily handle. (I vaguely remember there was also a problem with the motherboard/video interface itself; this was some years ago, details forgotten.)

  12. For the large (multi-GB) repositories, have you considered a piecemeal approach?

    For example, if the repository has 20 years’ worth of updates, process 5 years at a time.

    Seems like mostly what would be needed would be some filters in stage 1 (e.g. start and end date).

    A related question – you mentioned that some users need the –incremental option – what is their use case?

    1. >For the large (multi-GB) repositories, have you considered a piecemeal approach?

      I have. CVS has some incoherence properties that make this a rather scarifying prospect. You’d be begging for nasty artifacts around the segment joins, especially if you got unlucky and an immediately preceding or following commit was affected by client clock skew.

      >A related question – you mentioned that some users need the –incremental option – what is their use case?

      Mirroring CVS repos to git. I think this is a bad idea, but the git maintainers consider it a checklist feature.

  13. I found an rsync link to the NetBSD repository and am running it through your profiling script. I’m using the instructions Max E provided here, substituting NetBSD for groff and running without threading. The results look a bit different than what you reported for your trial run here, but assuming that they’re valid, they don’t look very encouraging.

    I ran a command of the form:

    rsync $OPTIONS $URL && cd src && time benchmark.sh

    I started it before going to bed, and as of around 9:00 central time this morning I was seeing a whole bunch of lines of the form:

    “2014-10-19T12:52:45Z: Analyzing masters… $NUMBERcvs-fast-export: warning – putting $FILE rev 1.1.0.1 on unnamed branch master-UNNAMED-BRANCH off master”

    Followed by:

    “2014-10-19T12:52:45Z: Analyzing masters…done, 2651189 total revisions (218.386122sec)
    2014-10-19T12:56:23Z: Make DAG branch heads…303505 of 303505(100%) (0.604423sec)
    2014-10-19T12:56:24Z: Sorting…done (77.909403sec)
    2014-10-19T12:57:42Z: Find branch parent relationships…done (30.134590sec)
    2014-10-19T12:58:12Z: Merge common branches…0 of 303505(0%)”

    And it seemed to be just staying there at 0.

    As of around 14:00 central time, the “Merge common branches” line was up to 10, and incrementing once every minute or so. It got up to 26, then I started getting lines of the form:

    “2014-10-19T12:58:12Z: Merge common branches…27cvs-fast-export: warning – branch point netbsd-2-0 -> master-UNNAMED-BRANCH matched by date”

    With about the same update rate (once a minute or so).

    And as I’m typing, things are starting to look much more encouraging. Working through the reported 300,000-odd branches at one per minute would have taken on the order of seven months. Around branch 54 things started moving a lot faster (around one per second), and now it even seems that it’s not going to do all 300,000 branches (which you might have expected, but I didn’t, not being familiar with the problem domain). It declared itself done with the branch merging around 14:50 central time at 178 branches, and now things seem to be going quickly.

    The process is currently using 18 gigs of RAM (hasn’t yet hit swap), and during the branch merging bit that took up most of the time I don’t think it was I/O-bound; I was seeing hardly any disk activity at all.

  14. Thanks to Mario Landgraf, who has contributed $25 to the “Help Stamp Out CVS In Your Lifetime” fund. Total now stands at $915.

  15. >now it even seems that it’s not going to do all 300,000 branches (which you might have expected, but I didn’t, not being familiar with the problem domain)

    What makes you think that? I in fact expect that it will do all the branches. You may be misled by the effects of doing the oldest branches first; those will be the longest, so they’ll take the most compute time.

    >The process is currently using 18 gigs of RAM (hasn’t yet hit swap), and during the branch merging bit that took up most of the time I don’t think it was I/O-bound; I was seeing hardly any disk activity at all.

    That sounds like it’s scaling almost exactly the way I expected it would. You’ll see disk I/O big-time when it starts to do export.

    I don’t know quite what’s up with the UNNAMED-BRANCH messages. It’s almost certainly a repository pathology rather than a code problem. I’ll look into it when I can run a conversion here.

  16. esr:

    >What makes you think that? I in fact expect that it will do all the branches. You may be misled by the effects of doing the oldest branches first; those will be the longest, so they’ll take the most compute time.

    It finished up the branch merge process with the following line:

    2014-10-19T12:58:12Z: Merge common branches…178 of 178(100%) (24698.132992sec)

    178 is a lot less than 300,000, so I thought it had finished early. Or was the 300,000 pre-merge and the 178 post-merge? If so, the status reports could be a bit clearer about what exactly they’re counting.

    Unfortunately, in copying that line from the terminal, I managed to hit Ctrl-C rather than Shift-Ctrl-C, which terminated the process. It was 40% of the way through “Saving in fast order”.

    Final runtime was:

    26715.15s user 45.44s system 98% cpu 7:32:57.23 total.

    Late in the branch merge process I was seeing warnings of the form:

    “cvs-fast-export: warning – $FILE: too late date through branch kame”

    After branch merge and before I accidentally killed the process during “Saving in fast order”, the longest item was “Generating snapshots” at 443 seconds, during which time I noticed a fair amount of disk I/O.

    “Saving in fast order” was running for about 1000 or 1500 seconds before I fat-fingered it to death. I was not seeing any disk I/O. If I understand cvs-fast-export and your profiling script right, I believe this was because the script redirects output to /dev/null.

    If you wish, I can provide a transcript of the terminal output from the run (Half a meg of text. gnome-terminal’s unlimited scrollback option comes in handy sometimes).

    1. >Or was the 300,000 pre-merge and the 178 post-merge?

      That’s it.

      >If so, the status reports could be a bit clearer about what exactly they’re counting.

      Noted. I will attempt to clarify.

      >If you wish, I can provide a transcript of the terminal output from the run

      That would be interesting, yes.

  17. Drives etc: depending on how much physical RAM you intend to use, and doubtless depending on kernel config, and _additionally_ depending on the selected motherboard’s drive bus layout, you may wish to consider something like the following:

    1. SSD in the 200G range depending on how you install anything you compile… smaller if you use stock installed stuff in /usr and compiled-custom stuff under /usr/local etc etc. Make it larger if everything is under /usr.

    2. Second bus master (I don’t think master/slave really matters on SATA) is the swap drive, in as big a partition as the kernel will support. I usually do a minimum of 4 times the physical RAM. Clearly this won’t take up a whole modern SATA drive, but don’t use that drive or bus for anything other than swap. Highest throughput & RPM of course. I don’t know if modern kernels will complain if you mix spindle RPMs.

    3. Consider external e-SATA drives for data (as opposed to libs, executables etc). If most of your operations are pretty much big reads into memory, then processing, and then big writes, you might not notice much difference, the cooling problem is then outside the main box, and archives are more easily stored on bookshelves (cat-proof ones). If you do a huge amount of smallish read-op-write you probably do want them internal to the box.

    Summary: dedicated swap SATA, an SSD that’s almost exclusively read-only files, big chunks of data on external e-SATA.

  18. Oops: forgot.
    1.5: locally compiled/developed stuff under /usr/local can just go on a USB external HDD. :’)

  19. @Mr. Non Entity:
    >2. Second bus master (I don’t think master/slave really matters on SATA) is the swap drive, in as big a partition as the kernel will support. I usually do a minimum of 4 times the physical RAM. Clearly this won’t take up a whole modern SATA drive, but don’t use that drive or bus for anything other than swap. Highest throughput & RPM of course. I don’t know if modern kernels will complain if you mix spindle RPMs.

    Really, anything more than the size of physical RAM (and that mostly to handle hibernation) is overkill these days. I very rarely touched swap on my old laptop (3 gig of RAM), I doubt I ever really will on my current 32 gig machine. I do have a 1.5 terabyte swap partition, but that’s because I have copious unused disk space, not because I have any realistic need for it (I plan to take bites out of it incrementally to set up LVM volumes for VMs).

  20. ESR: Yet another probably dumb question. You said the NetBSD repo (“Abandon Five Bucks All Who Enter Here!”) was ~11GB pre-conversion. How big did it work out post-conversion?

  21. Alex: It hasn’t been converted yet. Eric’s current hardware doesn’t have the RAM to handle it. I just carried out a feasibility test today proving that it can be done, but Eric’s benchmarking script that I used directs the output of cvs-fast-export to /dev/null, and even if it didn’t, I accidentally killed the process before it finished. Plus, I believe there’s more work that has to be done after cvs-fast-export to actually finish the conversion, and I’m quite certain that I lack the expertise to carry that work out, so the actual conversion will have to wait until Eric’s new machine gets built, or at least until someone with the necessary hardware and expertise volunteers.

    1. >Plus, I believe there’s more work that has to be done after cvs-fast-export to actually finish the conversion

      That is correct. At a minimum the resulting repository needs to be manually inspected for defects resulting from both general CVS deficiencies and malformations specific to the history of the individual repository, and those defects need to be corrected where possible. This is the job reposurgeon was built for before it semi-accidentally became the most capable repository converter in the world (mission creep is both a terrible and a wonderful thing).

      We know in the NetBSD instance that a good many commits have been dropped on synthetic unnamed branches. This is probably a result of operator errors and these things can be snipped off without loss – but someone should go look. Also it is generally a good idea to sniff around the branch joins a bit, as this is where either CVS itself or the conversion tool is most likely to have gone awry.

      In the “a bigger hammer is actually useful” department, one practical problem with this is that reposurgeon, having been written in Python, can be quite painfully slow on very large repositories. Since rewriting it is impractical, the workaround is to run it on faster hardware. This is one of the reasons I’m weighting single-threaded performance so heavily in the build.

  22. Whoops. I got the impression it had been done already.

    Can I have some salt and a bit of tomato sauce for my hat while I munch on it?

  23. Sadly I’m going away for 14 days, but I do have an old IBM blade server that could do some work (14x 2 5450 3GHz), so if you need processing power…

  24. Why in the world would you build a server for this? Any of the big “cloud” shops have enormous machines available nowadays. Why not rent some cpu time for what will likely be a one time task?

    1. >Why in the world would you build a server for this? Any of the big “cloud” shops have enormous machines available nowadays. Why not rent some cpu time for what will likely be a one time task?

      See upthread. I do these monsters regularly. And speed of turnaround – which renting remote cycles would in effect slow down – is important in order to minimize downtime during cutovers.

    1. >Have you tried compiling the Python?

      Yes. For the big jobs (notably the Emacs conversion) I use a Python-to-C compiler called Cython. I would have liked to try PyPy, but last I checked it didn’t really work on 64-bit hardware.

      Cython doesn’t help much – I get at most a 20% or so speedup. Which isn’t trivial over 10 hours, but still.

      I think I know why it’s so relatively ineffective. It’s designed to optimize out tight loops in programs with relatively static data structures. What reposurgeon mostly does (I think) is not tight loops but beating the object allocator and GC to death with incessant demands.

  25. Are you going to do a straight conversion of the entire NetBSD repository into one single git repository, or will you be using submodules for each component?

    1. >Are you going to do a straight conversion of the entire NetBSD repository into one single git repository, or will you be using submodules for each component?

      That depends on what the customer wants. Though I was working with one of the NetBSD guys some months ago, there wasn’t yet any official “let’s do this”, so requirements for a final conversion had not been set.

      What I intend to do is demonstrate the capacity to convert the whole thing, then go to the NetBSD folks and say “OK, proof of capability. Now how do you want this sliced?” With reposurgeon in hand, once the whole thing is in gitspace with the artifacts cleaned up it will be extremely easy to cut it into sections.

  26. There are times when you need a bigger hammer. If you’re moving a household’s worth of packing boxes, even if they all fit in the trunk and rear seat of your sedan, there comes a point at which it’s simply faster and more efficient to use a box truck. If you’re going to be doing it more than once, perhaps you should buy the truck instead of renting it.

  27. > What reposurgeon mostly does (I think) is not tight loops but beating the object allocator and GC to death with incessant demands.

    That sounds like something the JVM is optimized for, so you could try running under Jython (which says it aims for CPython compatibility, so it should be painless to try).

    You should try several of the garbage collector options in the (Open-/Oracle-)JDK. I assume that one of the parallel GC’s will win on the new machine.

  28. Yeah, say what you want about Java the language, but as a software execution platform, the JVM is incredible. You should get much better throughput in anything involving massive GC loads, particularly on a nice multiprocessor system.
