The Great Beast has met its match

When I built the Great Beast of Malvern, it was intended for surgery on large repositories. The specific aim in view was to support converting the NetBSD CVS to git, but that project is stalled because the political process around NetBSD’s decision about when to move seems to have seized up. I’ve got the hardware and software ready for whenever they’re ready to move.

Now I have another repo-conversion job in the offing – and it involves something I thought I’d never see: a working set that exceeds 32GB! For comparison, the working set of the entire NetBSD conversion tops out at about 18GB.

What, you might well ask, can possibly have a history that huge? And the answer is…GCC. Yes, they’re looking to move from Subversion to git. And this is clearly a job for the Great Beast, once the additional 32GB I just ordered from Newegg arrives.

33 comments

  1. > once the additional 32GB I just ordered from Newegg arrives.

    Oh.
    My.
    Sweet.
    Fornicating.
    Goddess….

    Please let me know if you need any help getting it configured. (Although that should just be a straight-forward BIOS “yah, got more memory” issue.)

    1. >(Although that should just be a straight-forward BIOS “yah, got more memory” issue.)

      Hm, won’t the BIOS autodetect it?

  2. @ahd –

    > So does the Great Beast have a complete parts list anywhere?

    This is probably the most complete BoM, unless you can get Wendell to disgorge further details.

    You might also want to scroll down through the rest of the comments on that post.

  3. @esr –

    > > (Although that should just be a straight-forward BIOS “yah, got more memory” issue.)

    > Hm, won’t the BIOS autodetect it?

    Yes, it will. But many BIOSes will throw an error and ask you to confirm the new memory configuration by entering setup and accepting it before actually booting the OS.

  4. It’s a shame I can’t punch you a hole into a SuperMicro server I’ve got sitting in the lab. 512G of RAM ought to be enough, right?

  5. @William:
    >512G of RAM ought to be enough, right?

    Not sure about that, but 640GiB certainly will. :-P

  6. Meh, bitcoin mining is hard enough now that it takes dedicated mining farms to do it at all effectively.

  7. @Jay:
    >>Meh, bitcoin mining is hard enough now that it takes dedicated mining farms to do it at all effectively.

    Pfft. Maybe for you guys. I do it by hand, because I want to make sure the math is right!

  8. > Too bad bitcoin mining depends largely on GPU power, you could make a killing with it.

    GPU mining has been dead for quite some time now. Killed thoroughly with ASICs.

  9. Does GCC have their original CVS/RCS files? The first versions don’t look right on the SVN web interface. It seems strange that only a single file, trunk/gcc/config/m68k/xm-3b1.h, would be checked in as the initial revision.

    1. >Does GCC have their original CVS/RCS files?

      I don’t know. The history of the repository is pretty strange; it was originally *two* CVS repositories, GCC and EGCS, merged with a custom version of CVS and later converted with cvs2svn.

  10. I talked to a company yesterday that needs to uplift a large SVN repo to git. How hard is it for someone who’s unfamiliar with your tool to do the work?

    1. >I talked to a company yesterday that needs to uplift a large SVN repo to git. How hard is it for someone who’s unfamiliar with your tool to do the work?

      Depends strongly on how grotty the repository history is. My tools make the job as easy as possible, but if the repo has CVS conversion scars or various common kinds of operator errors in it, the job can still be pretty hard.

      If they want it done fast (and cheap) they should hire me. Seriously – even at my high hourly rates, I’m a better deal than paying for the amount of time it will probably take for an in-house code monkey to come up to speed.

    1. >Did the EGCS fork happen before CVS was used?

      I don’t know. I do know that there were at some point two separate CVS repos, later merged.

  11. I’m interviewing with them again on Monday and I’ll mention it.

    > > Did the EGCS fork happen before CVS was used?

    > I don’t know. I do know that there were at some point two separate CVS repos, later merged.

    I hope you made proper offerings to the Prophet Shannon such that Prophet Murphy will forget your name for a while.

  12. >James Noyes on 2015-08-26 at 09:43:59 said:
    >
    >@Jay:
    >>>Meh, bitcoin mining is hard enough now that it takes dedicated mining farms to do it at all effectively.
    >
    >Pfft. Maybe for you guys. I do it by hand, because I want to make sure the math is right!

    Artisanally produced bitcoin… now I’ve seen it all.

  13. https://gcc.gnu.org/ml/gcc/2015-08/msg00245.html has my description of the two CVS repositories – gcc/config/m68k/xm-3b1.h was presumably the first file someone decided to start using RCS for, years before most files were put in RCS. The gcc2 RCS/CVS ,v files are available from ftp://gcc.gnu.org/pub/gcc/old-releases/old-cvs/gcc-cvsroot-1999-05-06.tar.bz2, while the 1997-2005 CVS repository is available by rsync (gcc.gnu.org::gcc-cvs), as is the current SVN repository, 32 GB in size (gcc.gnu.org::gcc-svn).
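
    If anyone wants to mirror those sources locally, here is a minimal sketch of the fetch step (assuming rsync and Python are installed and the gcc.gnu.org modules are still being served; the helper and local directory names are just illustration, not part of the GCC infrastructure):

        import subprocess

        # rsync modules named above; local directory names are placeholders
        MIRRORS = {
            "gcc-cvs": "gcc.gnu.org::gcc-cvs",  # 1997-2005 CVS repository
            "gcc-svn": "gcc.gnu.org::gcc-svn",  # current SVN repository (~32 GB)
        }

        def mirror(name, module, dest="."):
            """Pull one rsync module into dest/name; re-running resumes the copy."""
            subprocess.run(
                ["rsync", "-av", "--partial", module + "/", dest + "/" + name + "/"],
                check=True,
            )

        for name, module in MIRRORS.items():
            mirror(name, module)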

  14. Some folks have actually tried to run reposurgeon on the Gentoo repository on fairly large hardware and ran into RAM exhaustion. We already had a working migration script at that point, so we didn’t pursue it further.

    If you’re looking to test your code/hardware against another large repository you can fetch the final Gentoo cvsroot from rsync://anonvcs.gentoo.org/vcs-public-cvsroot/

    The gentoo-x86 repository is the big one we had to migrate. Our current draft conversion of the full history is posted at https://github.com/gentoo/gentoo-gitmig-20150809-draft

    Be warned: a git bundle of that weighs in at 1.7G, and a compressed tarball of the gentoo-x86 CVS repository is 700M. The conversion took me a few hours on a Phenom II x4 using a few GB of RAM – it does not attempt to hold everything in RAM at once, and our conversion routine is somewhat optimized thanks to assumptions we can make about the nature of our commit history and layout.

    I’d be curious as to how the Beast handles it if you do give it a shot.

    1. >I’d be curious as to how the Beast handles it if you do give it a shot.

      I might, but not soon. NTP, the GCC conversion, and related work on reposurgeon will eat my bandwidth pretty effectively in the near future.

  15. Julian, if you read his earlier posts you’ll find that his conversion process tends to be limited by single-threaded performance, so a cluster would be counter-productive, unless it were merely to work on multiple independent conversions at the same time (in which case operator time becomes rate-limiting).

    For the Gentoo conversion single-threaded performance was fairly important, though the structure of our repository did lend itself to a multi-threaded conversion, which might have been possible on a cluster. Honestly, though, I had run some tests on EC2 with highly parallel instance types, and even in our case, which was probably close to the best case for a parallel conversion, having dozens of cores didn’t speed things up all that much versus just having 4 or maybe 8, both of which are well within the range of a typical single machine.

  16. @ Rich Freeman

    Actually, whether practical or not, this is a good use case for an old relic of supercomputing: distributed shared-memory machines. In particular, I used to be a fan of academic work that used software libraries and/or good networking hardware to emulate a NUMA machine on a cluster of cheap PCs. Example:

    http://discolab.rutgers.edu/dsm/

    One could use several for large memory even if the work was sequential. They’d probably also benefit from a clustered filesystem for fault tolerance or a performance boost while they’re at it. Too bad there isn’t much work on DSMs anymore. The last cool thing I saw was the “NUMAscale” product that turned AMD servers into NUMA machines with a card. Never looked at the price because I’ve been too budget-limited for anything above a good server/desktop, haha. Good concept, though.

  17. @Nick P

    My understanding is that amd64 is already a NUMA architecture, though I have no idea whether the Linux kernel actually treats it as such. I imagine the latencies when you access the wrong bank of RAM from the wrong core aren’t much higher than on a traditional machine.

    Of course the difference was more pronounced on traditional NUMA hardware.
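
    For what it’s worth, checking what the kernel thinks is easy from sysfs. A minimal, Linux-only sketch (purely illustrative; a single node listed means the kernel sees uniform memory, several nodes mean it is tracking locality):

        import glob, os

        # List the NUMA nodes the kernel exposes via sysfs, with their CPUs
        # and memory. One node: treated as uniform; more: NUMA-aware.
        for path in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
            node = os.path.basename(path)
            with open(os.path.join(path, "cpulist")) as f:
                cpus = f.read().strip()
            mem_kb = "?"
            with open(os.path.join(path, "meminfo")) as f:
                for line in f:
                    if "MemTotal" in line:
                        mem_kb = line.split()[-2]
                        break
            print(node, "cpus:", cpus, "MemTotal:", mem_kb, "kB")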
