Proving the Great Beast concept

Wendell Wilson over at TekSyndicate had a good idea – run the NetBSD repo conversion on a machine roughly comparable to the Great Beast design. The objective was (a) to find out if it freakin’ worked, and (b) to get a handle on expected conversion time and maximum working set for a really large conversion.

The news is pretty happy on all fronts.

The test was run on a dual Xeon 2660v3 10 core, 256GB memory and a 3.0GHz clock, with conventional HDs. This is less unlike the Great Beast design than it might appear for benchmarking purposes, because the central algorithms in cvs-fast-export can’t use all those extra cores very effectively.

The conversion took 6 hours and 18 minutes of wall time. The maximum working set was about 19.2GB. To put this in perspective, the repo is about 11GB large with upwards of 286,000 CVS commits. The resulting git import stream dump is 37GB long.

The longest single phase was the branch merge, which took about 4.4 hours. I’ve never seen computation dominate I/O time (just shy of two hours) before – and don’t think I will again except on exceptionally huge repositories.

A major reason I needed to know this is to get a handle on whether I should push micro-optimizations any further. According to profiling the best gains I can get that way are about 1.5%. Which would have been about 5 minutes.

This means software speed tuning is done, unless I find an algorithmic improvement with much higher percentage gains.

There’s also no longer much point in trying to reduce maximum working set. Knowing conversions this size will fit comfortably into 32GB of RAM is good enough!

We have just learned two important things. One: for repos this size, a machine in the Great Beast class is sufficient. For various reasons, including the difficulty of finding processors with dramatically better single-thread performance than a 3GHz 2660v3, these numbers tell us that five hours is about as fast as we can expect a conversion this size to get. The I/O time could be cut to nearly nothing by doing all the work on an SSD, but that 4.4 hours will be much more difficult to reduce.

The other thing is that a machine in the Great Beast class is necessary. The 18GB maximum working set is large enough to leave me in no doubt that memory access time dominated computation time – which means yes, you actually do want a design seriously optimized for RAM speed and cache width.

This seems to prove the Great Beast’s design concept about as thoroughly as one could ask for. Which means (a) all that money people dropped on me is going to get used in a way well matched to the problem, neither underinvesting nor overinvesting, and (b) I just won big at whole-systems engineering. This makes me a happy Eric.

UPDATE: More test results. Wendell reports that the same conversion run on a slightly different Beast-class box with a 400MHz slower clock but 5MB more cache turned in very similar times – including, most notably, in the compute-intensive merge phase. This confirms that more cache compensates for less clock on this load, as expected.

63 comments

  1. Two thoughts. Run perf. Stash the data on the kernel and other run times somewhere, someone might find it interesting.

    Did you give pypy a shot during any of this process?

    I note that that box also features an enormous cache….

  2. While I definitely think that the Beast is worthwhile as specced, I was kind of wondering whether trying to push for that 64GB build wasn’t a bit like shopping for arrows after you already slew the Wumpus. How many more of these monster repos are actually still haunting the cyberearth?

    1. >How many more of these monster repos are actually still haunting the cyberearth?

      Currently we know of three conversion targets in this effort class: NetBSD, Gentoo, and Emacs – the latter in progress and not a CVS lift but would probably be significantly accelerated by the Beast. That’s enough to make me pretty sure there will be others lurking in the mist.

  3. If you’ll only need this machine for 7 hours at a time, and not all that frequently, why not just use an EC2 instance (or your cloud provider of choice) and pay only for the time the test/conversion is running?

    1. >why not just use an EC2 instance

      If I answered this question again it would be the fifth time. I’ll put a FAQ on the build page.

    1. >So … now that it *can* be done … does the NetBSD team *want* their repo converted?

      One of them did, some months ago, I shall dig up his name and ping him.

      1. >One of them did, some months ago, I shall dig up his name and ping him.

        It was about 11 months ago, as it turns out, and his name was Alan Barrett. I’ve notified him and some other interested parties.

  4. I guess you’re halfway to answering my question about post-conversion repo size. So the git fast-import stream is an unpacked version of the full-up git repo?

    1. >I guess you’re halfway to answering my question about post-conversion repo size. So the git fast-import stream is an unpacked version of the full-up git repo?

      With no delta compression on the content, yes.

  5. Eric, are you measuring the time to create the import stream, or the time to create a git repository?

    I only ask, because I tried converting Emacs last night and surprisingly it only took about 15 minutes to create the export stream. The output file was about 33GB. This was running cvs-fast-export HEAD built with FAST_OUT, and using the shared node slab allocator patch.

    I’m not done testing yet; the results may be completely invalid. I need to check that what you get when rsyncing Savannah is the full emacs cvs history, and that it can be imported into git, and looks similar to the gitorious test conversions, etc, etc.

    This was running on Cygwin, so lots of emulation layers. I have fairly nice hardware – Intel Quad Core i7-3770K @3.5GHz. SSD for the operating system. 32GB DDR3@1600MHz memory. The storage for the input and output was the same RAID-5 of four large but unspectacular hard drives.

    1. >Eric, are you measuring the time to create the import stream, or the time to create a git repository?

      Time to create the import stream.

      >I need to check that what you get when rsyincing savanah is the full emacs cvs history,

      It isn’t. They switched to bzr in 2009.

      I must grab a copy of historical Emacs CVS for my profiling set. Done.

      For those interested, this is a du -s listing of my current profiling set:

      680308    emacs
      45616     groff
      12133660  netbsd
      976       robotfindskitten
      

      Emacs is large; NetBSD is huge.

    2. >This was running cvs-fast-export HEAD built with FAST_OUT, and using the shared node slab allocator patch.

      I think I won’t take that patch into mainline. It’s a nice piece of work, but it adds a lot of complexity for relatively small gain – would have netted maybe 6 minutes out of 6 hours on the NetBSD conversion. Also I’d have to make it thread-safe, which would be additional complexity and begging for a maintenance issue.
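
      To give a concrete sense of the cost, every node allocation and free in a shared slab would have to grow locking along roughly these lines (a sketch of the general technique only, not the actual patch or the real cvs-fast-export data structures):

      #include <pthread.h>
      #include <stdlib.h>

      struct node { struct node *next; /* ...payload fields elided... */ };

      static struct node *freelist;
      static pthread_mutex_t slab_lock = PTHREAD_MUTEX_INITIALIZER;

      /* pop a node off the shared free list, falling back to the heap */
      struct node *node_alloc(void)
      {
          pthread_mutex_lock(&slab_lock);
          struct node *n = freelist;
          if (n != NULL)
              freelist = n->next;
          pthread_mutex_unlock(&slab_lock);
          return n != NULL ? n : calloc(1, sizeof(struct node));
      }

      /* push a node back onto the shared free list */
      void node_free(struct node *n)
      {
          pthread_mutex_lock(&slab_lock);
          n->next = freelist;
          freelist = n;
          pthread_mutex_unlock(&slab_lock);
      }

      Even that minimal version puts a lock acquire and release on a very frequently executed path, on top of the extra code to maintain.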

      The hot spot in current head is snapshotline(). You’ve already done good work on speeding that up – maybe the buffer-editing algorithm could be improved?

      BTW, on the benchmark groff repo the throughput is now up to 349 commits a second, or about 20K a minute.

  6. Since people keep asking about doing this on an EC2 instance, I’m now wondering about how long it actually would take. If the process is pretty easy, I’d be willing to toss together a few dollars and run a reasonable performance test just to see how well one of their compute-oriented instances would handle the case.

    If somebody can write me up the steps in a copy/paste fashion I need to run in order to set up the test cases as well as collect the desired profiling data, I’ll do it some time this week. If good data can be obtained with something smaller than the NetBSD repo, I’d be happy to use it.

    1. >If the process is pretty easy, I’d be willing to toss together a few dollars and run a reasonable performance test just to see how well one of their compute-oriented instances would handle the case.

      That sounds like a good plan. And I think I just grabbed a repo of ideal size for this: historical Emacs CVS, their repo up to 2009. Bigger than groff, not the monster netbsd is.

      1. Get and build cvs-fast-export. Make sure it and the auxiliary tool cvssync are in your executables path.

      2. Get an account on GNU Savannah, if you don’t already have one. Install your public key there. (Sorry, this is required in order for the rsync in the next step to work.)

      3. Get Emacs CVS from Savannah: cvssync cvs://cvs.savannah.gnu.org/emacs#emacs

      4. Run "find emacs -name '*,v' | cvs-fast-export -p -l errors >emacs.fi". Save the timing info emitted to stderr when it’s done.

      5. Expect to wait for about 10 minutes.

      6. Mail me the timing data and the contents of 'errors'. Or paste the timing data in a comment here.

      Here’s what a run looks like on snark:

      esr@snark:/home3/esr/export-profile$ find emacs -name '*,v' | cvs-fast-export -p -l errors >emacs.fi
      2014-10-22T14:19:27Z: Reading file list...done, 667745.362KB in 5091 files (0.023715sec)
      2014-10-22T14:19:27Z: Analyzing masters...done, 270266 revisions (17.341359sec)
      2014-10-22T14:19:45Z: Make DAG branch heads...5091 of 5091(100%)    (0.005673sec)
      2014-10-22T14:19:45Z: Sorting...done  (0.102390sec)
      2014-10-22T14:19:45Z: Find branch parent relationships...done  (0.097206sec)
      2014-10-22T14:19:45Z: Merge common branches...37 of 37(100%)    (225.703719sec)
      2014-10-22T14:23:30Z: Compute tail values...done  (0.187232sec)
      2014-10-22T14:23:31Z: Find tag locations...done  (18.999552sec)
      2014-10-22T14:23:50Z: Generating snapshots...done (246.086804sec) 
      2014-10-22T14:27:56Z: Saving in fast order: done (81.971845sec)0%)   
      cvs-fast-export: no commitids before 2007-05-12T16:59:34Z.
             after parsing:	17.365	 420292KB
        after branch merge:	262.462  627136KB
                     total:	590.554	 652912KB
      113300 commits/35281.395M text at 191 commits/sec.
      cvs-fast-export: 10 warning(s).
      

      Here’s what a run looks like on Wendell’s quasi-beast:

      2014-10-22T14:44:50Z: Reading file list...done, 667745.362KB in 5091 files (0.014004sec)
      2014-10-22T14:44:50Z: Analyzing masters...done, 270266 revisions (4.347591sec)
      2014-10-22T14:44:54Z: Make DAG branch heads...5091 of 5091(100%)    (0.003411sec)
      2014-10-22T14:44:54Z: Sorting...done  (0.015412sec)
      2014-10-22T14:44:54Z: Find branch parent relationships...done  (0.015240sec)
      2014-10-22T14:44:54Z: Merge common branches...37 of 37(100%)    (79.220978sec)
      2014-10-22T14:46:13Z: Compute tail values...done  (0.032168sec)
      2014-10-22T14:46:13Z: Find tag locations...done  (2.930606sec)
      2014-10-22T14:46:16Z: Generating snapshots...done (57.485034sec)  
      2014-10-22T14:47:14Z: Saving in fast order: done (45.343349sec)0%)   
      cvs-fast-export: no commitids before 2007-05-12T16:59:34Z.
             after parsing:	4.362	433812KB
        after branch merge:	86.580	677740KB
                     total:	189.411	703256KB
      113300 commits/35281.395M text at 598 commits/sec.
      cvs-fast-export: 10 warning(s).
      

      Note the difference in wall time: 590 seconds drops to 189.

      I think an EC2 instance will clock in somewhere between those two numbers. The interesting question is which end it will be closer to…and whether a good result, if you get one, will scale up to a conversion with a memory footprint 36 times larger.

      If you’re feeling really brave,

      rsync -v -a --delete --exclude '#cvs.lock' rsync://anoncvs.NetBSD.org/cvsroot/src .

      will get you the netbsd source repo. Be warned, that conversion will want 18GB of RAM.

      If an EC2 instance really is within shouting distance of Great-Beast-class hardware, though, what am I going to do with that $2790 that will meet my donors’ expectations?

  7. What to do with the money? I’d still build a system you can use to do these CVS conversions, and also have available for future projects.

    Plus, I speak for myself here, I would rather you used the money donated to build something that you can use on a daily basis to help speed along all the work that you do. If you don’t do a lot of multi-threaded tasks, an EC2 instance isn’t going to be of much help. The strength of systems in EC2 is found when scaling horizontally.

  8. Oh, it should be well within the capabilities of EC2. You can get a memory-optimized r3.8xlarge EC2 instance with 32 CPU cores, 244GB RAM, and 2x 320GB of SSD instance storage. Running one of those instances for 7 hours would cost roughly $20 ($2.8 per hour).

    http://aws.amazon.com/ec2/instance-types/

    Also, I don’t know if it’s possible to divide up the dataset and tackle it with multiple instances, but using cloud infrastructure makes *time* a determining factor of cost rather than just the size of the instance. For the same cost, you can run one instance for X time, or two instances for half of X time.

    1. >Running one of those instances for 7 hours would cost roughly $20 ($2.8 per hour).

      Yes, but would it actually take only 7 hours? That’s what we don’t know – what the effective clock speed even of a “compute-optimized” instance is. Garrett will probably be able to tell us.

      From first principles I am skeptical. EC2 is, in effect, a time-sharing system that runs instances in VMs. That overhead has to be paid in clocks. If they could deploy substantially faster processors than we can put in the Beast the throughput might net out faster anyway, but I don’t think they can – and we know multithreading doesn’t help in this case.

    2. >Also, I don’t know if it’s possible to divide up the dataset and tackle it with multiple instances,

      Alas, not really. The problem is that no matter how you carve up the CVS repository, you might miss some changeset builds that should cross those cuts.

      In theory, you could slice it up any way you like, generate N git streams, and do a secondary merge phase on those in gitspace, looking for matching metadata in gitspace changesets and fusing them. In practice I can think of some subtle and nasty ways that could fail. Especially in the presence of our old nemesis, client-side clock skew screwing up the CVS commit dates…
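
      To make the failure mode concrete, the fusion pass would hinge on a metadata match something like this (purely illustrative, not anything that exists in cvs-fast-export; the skew window is an arbitrary guess, and having to guess it is exactly the unsafe part):

      #include <stdbool.h>
      #include <string.h>
      #include <time.h>

      #define SKEW_WINDOW 300   /* seconds of tolerated client-side clock skew */

      struct gcommit {
          const char *author;
          const char *log;
          time_t when;
      };

      /* would these two commits, coming from different slice conversions, be fused? */
      bool same_changeset(const struct gcommit *a, const struct gcommit *b)
      {
          double dt = difftime(a->when, b->when);
          return strcmp(a->author, b->author) == 0
              && strcmp(a->log, b->log) == 0
              && dt > -SKEW_WINDOW && dt < SKEW_WINDOW;
      }

      Pick the window too small and a single logical changeset stays split across slices; pick it too large and unrelated commits with identical boilerplate log messages get welded together.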

    1. > Though it doesn’t look like they offer dedicated instances of quite that size…

      It would surprise me if they did. The Great Beast design is optimized in some unusual ways for an unusual job load.

      1. I wrote: “The Great Beast design is optimized in some unusual ways for an unusual job load.”

        I shouldn’t be so elliptical about this. The reason EC2 doesn’t sell instance types that really fit is that this job load’s working-set sizes are exceptionally large. There are very, very few application domains in which you’re going to be routinely bashing around 18GB of coherent in-core data structures – that’s more than four times the entire address range of a 32-bit processor!

        Making it more unusual is that in most of the few applications where the data sets do get this large the tasks can be structured for heavy parallelism; this is why dual-socket Xeon boards with ten compute units per processor exist. This one can’t be carved up that way.

        The only application class I can think of with similar demands is some kinds of really high-end database deployments where the query/update frequencies are so high that you need to keep the authoritative copy of the database in RAM and dynamically mirror it to disk.

  9. Oh, prepare to be surprised! I was using my phone to look at the instance types and I missed it. The r3.8xlarge instance type actually *is* available as a dedicated instance. It’s an extra $2 per hour per region to run a dedicated instance. Technically, you can run as many dedicated instances in the region as you like and only be charged $2/hour for all of them, plus the normal rate for the instance (which is $2.800/hour for the r3.8xlarge instance).

    If you don’t need 244GB RAM, you could go with the c3.8xlarge for $1.680 per hour. It has 32 cores, 60GB RAM, and 2x 320GB of SSD instance storage.

  10. Out of curiosity for those of us wanting to do this actual conversion (NetBSD -> git, or possibly eventually NetBSD -> hg through git) and having a machine of that size available, are the necessary steps documented? Last time I tried cvs-fast-export still wasn’t up to the task, but I’d be interested to try it again with an eye towards a regular workflow.

  11. esr> If an EC2 instance really is within shouting distance of Great-Beast-class hardware, though, what am I going to do with that $2790 that will meet my donors’ expectations?

    Speaking as one of your donors, and only for myself, I expect you to do whatever you think is best for the whole of your projects. Building the Great Beast provided a good chance to show my appreciation for your work, but I never thought of it as a string attached.

    Independently, EC2 would surprise me greatly if it actually did come out within shouting distance of the Quasi-Beast, or of the Great Beast should you choose to build it.

  12. >Making it more unusual is that in most of the few applications where the data sets do get this large the tasks can be structured for heavy parallelism; this is why dual-socket Xeon boards with ten compute units per processor exist. This one can’t be carved up that way.

    Am I the only one that finds it cool that no one, not Google or the NSA or anyone else (except maybe Donald Knuth), could possibly get this job done in less than about 7 hours?

    1. >Am I the only one that finds it cool that no one, not Google or the NSA or anyone else (except maybe Donald Knuth), could possibly get this job done in less than about 7 hours?

      It is possible to get faster single-thread speeds than 4GHz, but tends to take heroic effort. If I had three times the budget I do, I could go out and find the fastest DDR4 RAM in the world, then I’d build a water-cooled super-Beast and overclock the shit out of an i7 variant with an unlocked clock. I gather you can push some of them to around 5GHz.

      Here’s how that’d go. Yeah, I’d cut the compute time by about 4:5 on a good day, but performance would be unstable. The super-cryo-Beast would live hard, die young, and leave a pretty corpse.

  13. Also speaking as one of your donors, I agree with Thomas Blankenhorn – spend the money to benefit your work. It does sound like you really could benefit from a new system, and the Beast will serve you well.

    Personally, I feel that you should not strain to match more money than you expected; people gave you the money to build the Beast – it shouldn’t hurt you financially. I say: match as little or as much as you feel is appropriate.

    (I also tend to update to newer hardware every 4 years or so.)

  14. A conversion of NetBSD’s CVS repository to git has been done multiple times.
    There is even a public git mirror available.
    However there are enough NetBSD developers that don’t want to use git that the chances of converting the main repository to git are slim to none.
    These two items mean that you may have found your experiment interesting, but it had no practical purpose.

    1. >A conversion of NetBSD’s CVS repository to git has been done multiple times.

      Two questions:

      1. How was it done?

      2. Do you have any idea why I was approached to do a full conversion last December?

      I detect different agendas at work. The politics around big conversions are often murky and nasty.

  15. > There are very, very few application domains in which you’re going to be
    > routinely bashing around 18GB of coherent in-core data structures –
    > that’s more than four times the entire address range of a 32-bit processor!

    These days?

    I think you’d be surprised how common datasets of that size are. Two jobs ago we had processes that would write ~400MiB/s to disk for as long as we could.

    The Post Office has an SGI (well, 3 years ago they did) with something like 21 TB of memory and ~2048 cores that they used to look for postal meter fraud. They’d store *every* metered stamp number in memory and look for it to get reused. They could do this in real time. I think they had a 30 or 60 day window.

    At this job (a content distribution network), one machine logs about 1.1 GiB per hour at peak. We’ve got 370 machines in the CDN. We log all the transactions and errors for billing and analysis. I can search ~30 days of logs with little difficulty using Splunk.

    Oh, and those front end servers have 196GiB of ram, 32 cores and 25TiB of hard drive per box.

    Last week we had 6 servers in San Francisco moving about 8.5GiB/s out of each server. Admittedly only for about an hour or so. And yes, these are hooked right up to routers. We’re trying to get a second port on each machine hooked up, then our limit moves to the RAID controller–about 18GiB/s. Note, that wasn’t just the logging traffic, that was stuff served to clients.

    We’re not a *huge* CDN, but we’re not small either.

    So yeah, workloads like you’re talking about aren’t exactly *common*, but they’re not rare, and they’re getting more common.

    There are laptops and desktops out there with 16 and 32 GiB of ram. Heck, my 2008 vintage iMac has 4 cores and 12 gig of ram, and the 2006/7 Dell 690 I have has 8 cores and 8 gig – only because I’ve never bothered to bump it up, since most of the high-horsepower requirements I have are for work.

    Moore’s law isn’t working any more, at least not per-core (the number of transistors on a die is still going up, but they’re going into multiple cores). You guys are going to have to learn the sorts of programming tricks that make use of that sort of stuff :)

    Erlang and Go sort of build that in. In C it’s harder to do.

    1. >They’d store *every* metered stamp number in memory and look for it to get reused. They could do this in real time. I think they had a 30 or 60 day window.

      Yeah, that’s exactly the kind of high-end database job I was pointing at.

      Your CDN stuff isn’t quite the same use case. You’re mostly pumping bits, not doing rendezvous analyses on very large blobs of them. Your workload is much more susceptible to parallelization.

  16. >It is possible to get faster single-thread speeds than 4GHz, but tends to take heroic effort. If I had three times the budget I do, I could go out and find the fastest DDR4 RAM in the world, then I’d build a water-cooled super-Beast and overclock the shit out of an i7 variant with an unlocked clock. I gather you can push some of them to around 5GHz.

    >Here’s how that’d go. Yeah, I’d cut the compute time by about 4:5 on a good day, but performance would be unstable. The super-cryo-Beast would live hard, die young, and leave a pretty corpse.

    If you’re going to go that hardcore, might as well skip the water-cooling, and go straight to a vapor-compression cycle heat exchanger like the kind used in refrigerators and air-conditioning. You thought water-cooling was fiddly… ;P

  17. @loren if you haven’t seen this article yet it might be interesting:

    http://www.infoworld.com/article/2610403/cloud-computing/ultimate-cloud-speed-tests–amazon-vs–google-vs–windows-azure.html

    The takeaway is to benchmark different instance types, as this task may perform better on a smaller instance than a larger one. As long as the instance has enough RAM I would test it.

    In theory the C3 XL and 2XL instances with 2.8GHz Ivy Bridges should put in the best performances.

    For me the bottom line would be that using cloud compute services for one-off (or low-count) tasks is going to be more cost-effective than a machine that ages under Moore’s law, if I don’t think I’ll utilize it effectively after the task is done.

    1. >In theory the C3 XL and 2XL instances with 2.8Ghz Ivy Bridges should put in the best performances.

      Well, geez. That’s barely any faster than my 2.66GHz Intel Core 2 Duo (though at least an EC2 instance wouldn’t OOM the job when it got near 4GB).

      If that’s the best the cloud can do, the Great Beast will slouch from Toledo to Malvern with no fear it’ll be obsolete before it’s born.

  18. Not really a good comparison, as one is a server chipset and the other a desktop chipset, and you’re also talking about different generations. Clock speed isn’t everything. Even when comparing two server chipsets of approximately the same clock speed but different generations, the Ivy Bridge processors are often benchmarked to be twice as fast as processors from two generations prior.

    I’m honestly surprised at the lack of awareness of cloud options in this thread. Just try it. It’s ridiculously easy, and highly amenable to variant testing. Focus your time on things more important than maintaining a cobbled-together system that will lose value faster than a new car.

    1. >Focus your time on things more important than maintaining a cobbled-together system that will lose value faster than a new car.

      I do not “cobble together” systems, sirrah! My builds are creatures of beauty and power. Hmmmph.

  19. >I do not “cobble together” systems, sirrah! My builds are creatures of beauty and power. Hmmmph.

    Lol, I meant no disrespect! I’ve built many, many systems myself from individual components, and I always did my best to know my requirements, and make the system reliable and quiet, and perform well. But nothing ever goes according to plan, and there are always failures and maintenance I didn’t plan for. It just seems to be a fact of life that hardware fails and will cause much frustration and heartache and pain, and I’m much happier letting someone else deal with that aspect of things wherever I can. I now use a Mac Air for desktop and coding purposes (they truly are things of beauty; after two years, I still occasionally turn mine over and over in my hands and marvel at the quality), and spin up cloud instances to test workloads. It’s much less stressful, and I’m more productive.

    1. >And besides, it’s not your build, it’s John’s. :-)

      And a bit of Wendell’s before we’re done – I think he wants to throw in some parts.

  20. I spent a bit of time with EC2 today. In some ways I’m impressed, in others, not so much.
    Based on the scale described, I decided to go with a c3.4xlarge instance. This would have 16 cores, 30 GB of RAM and 2 SSDs I could use for temporary storage. I also decided to go with a shared instance for cost reduction purposes, and because I suspect that most other people would do the same. Also, EC2 uses a token-bucket model for computational resources, so 16 cores’ worth of tokens going into a mostly single-threaded application should be more than adequate.

    I suffered a number of false starts. However, I managed to get 2.5 reliable runs of repo conversion. It actually went slower as time went on. In any case, the most representative run has the following output:

    ubuntu@ip-172-31-7-100:/mnt$ time find emacs -name '*,v' | cvs-fast-export -p -l emacs.errors >emacs.fi
    2014-10-23T01:15:02Z: Reading file list...done, 667745.362KB in 5091 files (0.027054sec)
    2014-10-23T01:15:02Z: Analyzing masters...done, 270266 revisions (5.818163sec)
    2014-10-23T01:15:07Z: Make DAG branch heads...5091 of 5091(100%) (0.004387sec)
    2014-10-23T01:15:07Z: Sorting...done (0.028241sec)
    2014-10-23T01:15:07Z: Find branch parent relationships...done (0.019138sec)
    2014-10-23T01:15:07Z: Merge common branches...37 of 37(100%) (134.948487sec)
    2014-10-23T01:17:22Z: Compute tail values...done (0.036586sec)
    2014-10-23T01:17:22Z: Find tag locations...done (3.489763sec)
    2014-10-23T01:17:26Z: Generating snapshots...done (369.659708sec)
    2014-10-23T01:23:36Z: Saving in fast order: done (72.317381sec)0%)
    cvs-fast-export: no commitids before 2007-05-12T16:59:34Z.
           after parsing:    5.845      431384KB
      after branch merge:    144.373    673596KB
                   total:    586.354    701476KB
    113300 commits/35281.395M text at 193 commits/sec.
    cvs-fast-export: 10 warning(s).

    real 9m46.460s
    user 4m53.747s
    sys 0m17.904s

    Overall, I was pleasantly surprised at how nice EC2 was. However, based on the numbers above, the CPU power certainly doesn’t beat out the Great Beast.

    1. >However, based on the numbers above, the CPU power certainly doesn’t beat out the Great Beast.

      …or even Wendell Wilson’s Quasi-Beast. Hell no. It’s actually pretty close to snark’s throughput without the 4GB hard memory limit. Cue John D. Bell, snarking.

      Thank you, Garrett. From now on I shall reply to cloud enthusiasts with your numbers.

  21. I’m not sure how cloud got back on topic, but since nobody’s mentioned it yet: Rackspace does a lot of open-source stuff, and they have bare metal servers for rent. If you can find the right person to ask, they *might* be willing to donate server time to the project.

    Specs here: http://www.rackspace.com/cloud/servers/onmetal/.

    (Disclaimer: I work for them. Also, one of my perks is a dollar amount of free server time per month. I’m not sure if I can use that on bare metal instances, but I think I can. If it would be useful, I might be able to get you a day or two of free server time even if the company isn’t interested.)

    > If an EC2 instance really is within shouting distance of Great-Beast-class hardware,
    > though, what am I going to do with that $2790 that will meet my donors’ expectations?

    Speaking as a donor: I don’t know, but that shouldn’t enter into how you solve *this* problem. If a better solution than the Beast is available, use it. I feel like I should say something about sunk costs here but I’m pretty sure I would be misusing the concept.

    That said, I wouldn’t complain if you built the machine anyway. It’s what I expected my money to be used for. As long as the problem gets solved and any surplus goes towards some form of open source production, I think most of your donors will be satisfied.

    Besides, monster builds are *cool*.

    1. >would HOPE be likely to help?

      No. See previous comment about tight loops and beating the allocator to death.

  22. >2. Get an account on GNU Savannah, if you don’t already have one. Install your public key there. (Sorry, this is required in order for the rsync in the next step to work.)

    >3. Get Emacs CVS from Savannah: cvssync cvs://cvs.savannah.gnu.org/emacs#emacs

    It should be noted that instead of requiring a Savannah account, you can just point rsync to rsync://vcs.savannah.gnu.org/sources/emacs/emacs/ (any other Savannah project’s VCS tree can likewise be rsync’d — just view the root of the rsync server to see what’s available. The interesting ones, of course, are CVS and SVN, since it hardly matters to rsync a bzr or git repo).

    1. >you can just point rsync to rsync://vcs.savannah.gnu.org/sources/emacs/emacs/ (

      I didn’t know this. I think I’ll add that as a cvssync magic rule; it will eliminate the need for Savannah credentials when using that tool.

      UPDATE: Done and working. It’ll be in the upcoming release.

  23. I’d love to see if running on HP’s “The Machine” would eclipse your “Great Beast”. The memory pathway on such exotic hardware should help quite a bit with that large of a working set. Photonics and memristors sound so promising for this kind of work. Now, if only they can actually get it to market some day.

  24. Two comments here.

    1. If you are doing cost estimation for EC2, use the spot price. The more powerful instances might normally price at $2/hr, but the spot prices on the same instances are almost always around 25 cents per hour. That obviously adds up fast. They do retain the right to terminate a spot instance if the going spot price rises above your bid, but you can always bid the $2/hr figure and that isn’t likely to happen (and you pay only the actual spot price, not your bid price). I wouldn’t run a 24×7 application component this way, but spot instances are perfectly suited to short-duration tasks like this.

    2. Regarding the algorithm, a datapoint to consider for comparison is the Gentoo repository, which we have some experience with converting now (using tools based on cvs2svn, not cvs-fast-export). The number of commits there is significantly larger – 750k condensed in git, but in cvs there are about 3.1M commits. One thing that is atypical with Gentoo is that there is only one branch (well, there is a second one that was created by mistake and is one commit deep, and we discard that during conversion). Our scripts convert that in about 3 hours on a gimped Phenom II x4 running at about 2.7GHz (heat issues limit the clock) – I’ve heard that it can run in as little as 30min on something beefy – it does benefit from parallelization for portions of the conversion. Memory use is far more modest in our scripts – I don’t think it even hits 1GB at any point – the system only has about 8GB of RAM free so it definitely doesn’t hit that. With four cores it is CPU-bound – I imagine that with enough cores it might become IO-bound, but our entire CVS repository fits in a 700M squashfs so it gets cached in RAM (the intermediate outputs are considerably larger).

    So, definitely not an apples-to-apples comparison, and I’m not familiar enough with the Gentoo migration code to vouch for whether it would work on a repository with more than one branch. It wouldn’t surprise me if that constraint makes a big difference given that you mention that this phase of the process consumes so much time.

    Latest version of the gentoo scripts are at https://github.com/gentoo/git-migration-scripts-rich0 – I am hosting this since I’ve done some final cleanup, but the real work was done by Brian Harring (ferringb) over the last few years. Some of the code is Gentoo-specific, but much of it would be applicable to any repository.

  25. Off topic, but there’s no other place to put this — your search bar will not return any hits if the contributor’s name is the only search argument. I wished to pull up my several years of comments to see how they had evolved, but no joy.

  26. @Conrad

    I’ve noticed that the built-in search engine only searches the OPs. That’s why I usually search this blog using DuckDuckGo (or, more rarely, Google), appending site:esr.ibiblio.org to the search term(s).

  27. >esr on 2014-10-22 at 17:18:14 said:
    >
    >>A conversion of NetBSD’s CVS repository to git has been done multiple times.
    >
    >Two questions:
    >
    >1. How was it done?

    I don’t know, I wasn’t involved, but check out https://github.com/jsonn/pkgsrc and https://github.com/jsonn/src . The user could probably give you more info.

    >2. Do you have any idea why I was approached to do a full conversion last December?

    I didn’t say that nobody was interested. There are definitely developers interested in a git conversion, but it is rather controversial.

    >I detect different agendas at work. The politics around big conversions are often murky and nasty.

    Do note that in the interest of not getting into a flame war, I didn’t state my opinion on the matter, just that I think a conversion to git of the central repository is highly unlikely.

    BTW, on a couple of other items, I will note that core i7s come in speeds of up to 4GHz now. They come with four cores plus hyperthreading. I don’t know how that compares to a Xeon for performance, but since you have a single-threaded job, the extra speed would probably help. Also, for NetBSD commits, the timestamp is based on the server time, which is set to UTC (the server is located in California).

    1. >Do note that in the interest of not getting into a flame war, I didn’t state my opinion on the matter

      Your restraint is appreciated. I don’t really want to get involved in the politics either. See my next blog post.

  28. I’m sure I’m not telling a hacker anything new, but have you (or others) tried the process with CFLAGS optimised for the processor – Haswell in the case of test runs and your proposed Beast? Compiling with the latest GCC may also give a further (if lesser) boost.

    1. >have you (or others) tried the process with CFLAGS optimised for the processor

      You know, that actually slipped my mind. I’ll add -march=native and see what shakes out.

  29. I know there was a bug in a recent GCC where it didn’t identify some of the latest Intel processors with -march=native, although -march=actualarch worked.

    GCC 4.7 is the earliest to support Haswell, with -march=core-avx2. 4.8 is likewise, but GCC 4.9 changed it to -march=haswell.

    If your GCC is older than 4.7, then try the most recent Intel architecture it supports, or else roll your own up-to-date GCC.

    It’s not just setting the specific architecture that would bring improvements but the newer instructions that come with that.

    For instance, running the Blake2 b2sum hash program (with one run done and discarded to get the file into memory) with my native arch but selectively excluding/allowing instruction sets gives the following gains on my machine:

    (base – with/requires sse2)

    (sse3 identical in performance)

    with ssse3: +8.6% faster

    with sse4: +3.8% faster still (+12.43% over base)

    with avx: +5.8% faster again (+18.2% over base)

    Haswell, of course, has avx2 on top.

  30. Dear all,

    I might come late to the debate, but have you investigated stdout buffering policy for the last phase?

    By default, stdout is set to be line buffered with a small buffer designed for displaying text to humans.

    This seems inadequate for megabytes of git-fast-import stream, as this causes a lot more context switching than necessary between user and kernel land.

    So maybe calling setvbuf with a large buffer size and disabling line oriented output could be a net win for speeding up phase 3 of the conversion process. Perhaps the disk block allocation algorithms would work better too with bigger chunks to write at a time.

    On the other hand, line oriented output is absolutely necessary for debugging purposes.

    That’s just a hypothesis, but it might be worth exploring, given how few modifications it requires; a sketch of the kind of change I mean follows.
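
    Something like this near the top of main, before anything is written to stdout (the 8MB figure is just an illustrative guess on my part, not taken from the project’s code):

    #include <stdio.h>

    #define OUTBUF_SIZE (8 * 1024 * 1024)   /* 8MB: big enough to amortize write() calls */

    int main(void)
    {
        /* switch stdout from its small default buffer to one large,
           fully buffered block, so the fast-import stream goes out
           in big writes instead of many small ones */
        static char outbuf[OUTBUF_SIZE];
        setvbuf(stdout, outbuf, _IOFBF, sizeof(outbuf));

        /* ... generate and write the git-fast-import stream here ... */

        return 0;
    }

    With a buffer that size, the fast-import stream reaches the kernel in big chunks rather than in many small writes.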

    1. >So maybe calling setvbuf with a large buffer size and disabling line oriented output could be a net win for speeding up phase 3 of the conversion process.

      Good idea. I’m already using large buffer copies for most of that phase; this may squeeze a bit more performance out of the printfs and so forth.

  31. Since a custom memory allocator is in use and the code is single-threaded, I would have one more suggestion: make sure to activate huge pages when loading the dataset into memory. It seems more reasonable to attack a 10+GB memory footprint with blocks of 2MB than 4KB.

    This follows the previous advice about cache behavior, but it’s the TLB cache I’m worried about in this case; a sketch of the kind of call I mean follows.
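
    For instance, something along these lines in the allocator (a Linux-specific sketch, not the project’s actual code; 2MB is just the common x86-64 huge-page size):

    #define _GNU_SOURCE          /* for MADV_HUGEPAGE */
    #include <stdlib.h>
    #include <sys/mman.h>

    #define HUGE_PAGE (2UL * 1024 * 1024)

    /* allocate a big slab aligned to the huge-page size and hint the kernel
       that transparent huge pages are welcome; madvise is advisory only,
       so this degrades gracefully where THP is unavailable */
    void *huge_slab(size_t bytes)
    {
        void *p = NULL;
        if (posix_memalign(&p, HUGE_PAGE, bytes) != 0)
            return NULL;
        madvise(p, bytes, MADV_HUGEPAGE);
        return p;
    }

    With 4KB pages an 18GB working set is on the order of 4.7 million page mappings; 2MB pages cut that number by a factor of 512, which is a lot less pressure on the TLB.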

Leave a comment

Your email address will not be published. Required fields are marked *