Spending the “Help Stamp Out CVS In Your Lifetime” fund

I just shipped cvs-fast-export 1.21, much improved and immensely faster than it was two weeks ago. Thus ends one of the most intense sieges of down-and-dirty frenzied hacking that I’ve enjoyed in years.

Now it comes time to think about what to do with the Help Stamp Out CVS In Your Lifetime fund, which started with John D. Bell snarking epically about my (admittedly) rather antiquated desktop machine and mushroomed into an unexpected pile of donations.

I said I intend to use this machine to wander around the net hunting CVS repositories to extinction, and I meant it. If not for the demands of the large data sets this involves (like the 11 gigabytes of NetBSD CVS I just rsynced) I could have poked along with my existing machine for a good while longer.

For several reasons, including wanting those who generously donated to be in on the fun, I’m now going to open a discussion on how to best spend that money. A&D regular Susan Sons (aka HedgeMage) built herself a super-powerful machine this last February, and I think her hardware configuration is sound in essentials, so that build (“Tyro”) will be a starting point. But that was eight months ago – it may be that some of the choices could be improved now, and if so I trust the regulars here will have clues to that.

I’ll start by talking about design goals and budget.

First I’ll point at some of my priorities:

* Serious crunching power for surgery on large repositories. The full Emacs conversion runs I’ve been doing take eight hours – goal #1 is to reduce that kind of friction.

* High reliability for a long time. I’d rather have stable than showy.

* Minimized noise and vibration.

Now some anti-priorities: Not interested in overclocking, not interested in fancy gamer cases with superfluous LEDs and Lambo vents, fuck all that noise. I’m not even particularly interested in 3D graphics. Don’t need to buy a keyboard or mouse or speakers and I have a dual-port graphics card I intend to keep using.

Budget: There’s $710 in the Help Stamp Out CVS In Your Lifetime fund. I’m willing to match that, so the ceiling is $1420. The objective here isn’t really economy, it’s power and buying parts that will last a long time. It’d be nice to go four or five years without another upgrade.

OK, with those points clear, let’s look at some hardware.

First, this case from NZXT. How do I love thee? Let me count the ways: 200mm low-velocity case fans for minimal noise, toolless assembly/disassembly, no sharp edges on the insides (oh boy do my too-frequently-skinned knuckles like that idea). USB and speaker ports mounted near the top right corner so they’ll be convenient to reach when it sits on the floor on the left side of my desk. Removable cleanable filters in the air vents.

To anyone who’s ever tinkered with PCs and cursed the thoughtless, ugly design of most cases, the interior images of this thing are sheer porn. Over on G+ someone pointed me at a boutique case design from Sweden called a Define R4 that moves in the same direction, but this goes further. And I want those 200mm fans badly – larger diameter means they can move enough air with a lower turning rate, which means less noise generated at the rotor tips.

Doubtless some of you are going to want to talk up Antec and Lian Li cases. Not without reason; I’ve built systems into Antecs and know Lian Li by reputation. But the NZXT (and the Define R4) go to a level of thoughtfulness in design that I’ve never seen before. (In truth, the way they’re marketed suggests that this is what happens when people who design gamer cases grow up and get serious.) Suggest alternatives if you like, but be aware that I will almost certainly consider not being able to mount those 200mm fans a dealbreaker.

Processor: AMD FX-8350 8-Core 4.0GHz. The main goal here is raw serial-processing power. Repository surgery generally doesn’t parallelize well; it turned out that multithreading wasn’t a significant win for cvs-fast-export (though the code changes I made to support it turned out to be a very good thing).

So high clock speed is a big deal, but I want stable performance and reliability. That means I’d much rather pay extra for a higher rated speed on a chip with a locked clock than go anywhere near the overclocking thing. I would consider an Intel chip of similar or greater rated clock speed, like one of the new Haswells. Of course that would require a change in motherboard.

Speaking of motherboards: Tyro uses an MSI 990FXA-GD80. Susan says this is actually a gamer board but (a) that’s OK, the superfluous blinkenlights are hidden by the case walls, and (b) having it designed for overclocking is good because it means the power management and performance at its rated speed are rock solid. OK, so maybe market pressure from the gamers isn’t so bad in this instance.

RAM: DDR3 2133. 2133 is high speed even today; I think the job load I’m going to put on this thing, which involves massive data shuffling, well justifies a premium buy here.

Susan recommends the Seagate SV35 as a main (spinning) drive – 3TB, 8.5msec seek time. It’s an interesting call, selected for high long-term reliability rather than bleeding-edge speed on the assumption that an SSD will be handling the fast traffic. I approve of that choice of priorities but wonder if going for something in the Constellation line might be a way to push them further.

Susan recommends an Intel 530 120GB SSD, commenting “only buy Intel SSDs, they don’t suck”. I’m thinking its 480GB big brother might be a better choice.

Susan says “Cheap, reliable optical drive”; these days they’re all pretty good.

The PSU Tyro used has been discontinued; open to suggestions on that one.

Here’s how it prices out as described: NZXT = $191.97, mobo $169.09, CPU $179.99, 32GB RAM = 2x $169.99, SSD = $79.99, HDD = $130.00. Total system cost $1092.02 without PSU. Well under my ceiling, so there’s room for an upgrade of the SSD or more RAM.

Let the optimization begin…

UPDATE: The SeaSonic SS-750KM3 is looking good as a PSU candidate – I’m told it doesn’t even turn on its fan at under 30% load. At $139.99 that brings the bill to $1232.01.

180 comments

  1. Eric,

    I urge you to locate and carefully read the Software Optimization Guide for any AMD processor you’re considering before you believe the claim of n-gazillion cores. I bought a “16-core” Opteron last year which turned out, upon more careful reading and testing, to really be an 8-core machine with two threads per core. What the rest of the world calls a “thread” is what AMD calls a “core”, and what the rest of the world calls a “core” is what AMD calls a “compute unit”.

    1. >I bought a “16-core” Opteron last year which turned out, upon more careful reading and testing, to really be an 8-core machine with two threads per core.

      Bletch. Pretty good argument for a Haswell i7.

  2. > Now it comes time to think about what to do with the Help Stamp Out CVS In Your Lifetime fund, which started with John D. Bell snarking epically about my (admittedly) rather antiquated desktop machine and mushroomed into an unexpected pile of donations.

    > I said I intend to use this machine wandering around the net and hunting CVS repositories to extinction, and I meant it. If not for the demands of the large data sets this involves (like the 11 gigabytes of NetBSD CVS I just rsynced) I could have poked along with my existing machine for a good while longer.

    At this point, I was like, “I’m looking forward to seeing this thing at http://www.top500.org”

    > Budget: There’s $710 in the Help Stamp Out CVS In Your Lifetime fund. I’m willing to match that, so the ceiling is $1420.

    I should have turned on my webcam :)

  3. I have no quibbles with anything you’ve specified.

    I’ll make two recommendations about power supplies, though:
    1) Modular cabling. It’s really nice to only have the cables you need physically in the box, instead of having the others tie-wrapped up somewhere.
    2) Buy a name brand. Antec, Cooler Master, Thermaltake, Corsair, FSP.

    Generally, with power supplies, you get what you pay for. Don’t break the bank, but don’t buy the cheap stuff either.

    1. >Generally, with power supplies, you get what you pay for. Don’t break the bank, but don’t buy the cheap stuff either.

      Agreed. I’ve given this same advice often enough.

  4. PSU – go with a SeaSonic X or SS series, larger than necessary, because they don’t even turn on their fans when under 30% load. Under 50% load, they run their fans in low speed mode. Silence is gold.

    CPU cooler – I went with a Noctua CPU cooler, running a 120mm fan at low speeds. You really want a good aftermarket cooler, like a Noctua, as the stock CPU fan tends to be the loudest source of noise.

    SSD – the 3 reliable brands are Samsung, Crucial, and Intel, in order of their popularity by gamers. I would highly recommend getting a SSD for your OS drive. It’s nuts how fast your machine feels with the OS on the SSD.

    Hard drive – go with a mirrored raid 1 configuration for your data drives. They don’t even need to be that fast. Basically, bite the bullet, assume drives will fail, and protect against that with a raid 1 plus network backup. Raid 1, because it’s trivial to recover from a drive failure in a raid 1 scenario.

    Gamers (like me) are forced to go with a SSD/HD mixed setup, with the OS on the SSD, and data on the hard drives, because it’s just too cost prohibitive to put everything on the SSD.

    1. >PSU – go with a SeaSonic X or SS series, larger than necessary, because they don’t even turn on their fans when under 30% load. Under 50% load, they run their fans in low speed mode. Silence is gold.

      Ooooh. I especially like this advice.

    2. >PSU – go with a SeaSonic […] Under 50% load, they run their fans in low speed mode. Silence is gold.

      I like the specs on the SeaSonic SS-750KM3. At 750W, driving a single low-power HD and an SSD ought to be well inside its envelope, and there’s $139 in the budget for it. Done, unless someone can point me at something unequivocally better.

  5. > before you believe the claim of n-gazillion cores

    Ditto on that. Thus my “don’t remember what AMD calls it” quote in the original thread.

    Not sure about your mass storage needs, but do recommend 2.5″ drives over their 3.5″ brethren, especially given the latency sensitivity your code is displaying. Much faster response times, such that a 2.5″ 7200 is equivalent to a 3.5″ 10k.

    The SSD front is still evolving very quickly from mass interest. If you want enterprise class reliability, look at Samsung as well as Intel. If you want better value, Tom’s Hardware SSD review is getting long in the tooth, but is a good starting point: http://www.tomshardware.com/reviews/ssd-recommendation-benchmark,3269.html

    Enjoy the hunt.

  6. I’m still using an Athlon 64 dual-core (*coughs* also 4GB of RAM, and not enjoying it anymore), so these choices inspire me to upgrade too. I might be going the Intel route instead of AMD, and maybe not with such insanely fast RAM… but that case… however you found it, that is most excellent. I do happen to do quite a bit of gaming, but I care not for excessively pointless lights or transparent lights. Still, the whole deal with Linux graphics drivers and gaming-performance cards is a sad one; I think it’s the final frontier of hardware support Linux has yet to conquer, and the blame lies mainly with AMD and NVIDIA keeping their drivers proprietary. I’ve been semi-loyal to NVIDIA only because of stable drivers, but I do rather wish someone would come out with the combination of a high-end graphics card and open source drivers (AMD releasing specs is certainly better than NVIDIA, but the open source drivers lag a few generations behind in cards and GL support). Intel once made hints of doing just that, but they pulled the plug on it…

  7. > SSD – the 3 reliable brands are Samsung, Crucial, and Intel, in order of their popularity by gamers. I would highly recommend getting a SSD for your OS drive. It’s nuts how fast your machine feels with the OS on the SSD.

    > Hard drive – go with a mirrored raid 1 configuration for your data drives. They don’t even need to be that fast. Basically, bite the bullet, assume drives will fail, and protect against that with a raid 1 plus network backup. Raid 1, because it’s trivial to recover from a drive failure in a raid 1 scenario.

    > Gamers (like me) are forced to go with a SSD/HD mixed setup, with the OS on the SSD, and data on the hard drives, because it’s just too cost prohibitive to put everything on the SSD.

    This dives into software configuration choices, but … I’m a huge fan of ZFS, and the Linux port is pretty rock-solid stable these days (it runs the LLNL supercomputer, even!). For a desktop config, I’d go with two main spinning-rust disks in a mirror (like RAID1, only better because of ZFS magic), and then an SSD for L2ARC (you actually don’t want to get TOO BIG of one, because all of the metadata for L2ARC is stored in main RAM, so a ~64GB one should do it). Boot times will be just as fast as the spinning-disks, but after a while in main operation, it should get a healthy amount of data cached in both RAM and the L2ARC, providing near-raw-SSD seek times.
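
    For concreteness, here is roughly what that layout looks like with the ZFS-on-Linux tools – a sketch only; the pool name “tank” and the device names are placeholders:

      # mirrored pool from the two spinning disks
      sudo zpool create tank mirror /dev/sdb /dev/sdc
      # add the SSD (or an SSD partition) as L2ARC read cache
      sudo zpool add tank cache /dev/sdd
      # verify the layout
      sudo zpool status tank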

  8. Check that comment about “cores” carefully – after all, it was Intel that started the whole confusion with “hyperthreading”, which meant that *sometimes* a so-called “core” could work on two different streams of execution at once. As for “Rest of the world” – as far as desktops go, you only have Intel and AMD. Perhaps we should have someone with less of a stake in the game define this stuff. AMD probably has a good point with regards to their terminology of a “compute unit”, as they’re the ones working on mixed CPUs/GPUs/PPUs/etc on a single chip. Heck – how do you measure the basic graphics card in terms of how many “CPUs” it has?

  9. Another small note about ZFS: It’d probably be tempting to go for btrfs because of being in the mainline kernel (ZFS probably won’t be for as long as it’s CDDL… which might never change), but in my experience, btrfs is still a steaming pile of unstable crap… and its design is really lacking in comparison to ZFS. The reliability will probably be totally fixed but because of the design choices I don’t really consider it a serious competitor. That being said, it is still a checksumming CoW file system, and that’s still better than ext4/xfs/etc.

    1. >Were you considering two sockets?

      Not yet. And there seems to be a lot of confusion abroad over what counts as a “core”. The AM3 in Tyro may only be a quad-core by most people’s reckoning.

  10. The NZXT H2 Silent Mid-Sized Tower was a good find for me. I got it cheap from newegg (.ca). Fair notes here:

    http://blog.spiralofhope.com/4372-nzxt-h2-silent-mid-sized-tower.html

    Regarding a hard drive. If you want reliability, consider something with a 5 year manufacturer’s warranty. Western Digital “black” have that.

    I highly HIGHLY recommend you don’t use SSDs unless you raid them to a hard drive, in which case you may as well get a hybrid. Reliability is complete fucking shit from this may-as-well-be untested technology. If you don’t care about reliability, then go for it.. they’re great!

  11. “The modem market has stabilized and standardized. If you can spend $59, get a U.S. Robotics V.92 USB external. You can then know that you’ve got the best and skip the rest of this section.”

    s/modem/optical drive

    If you are looking for super-reliability Intel has “server class” SSDs which they claim:

    “Intel promises that the entire contents of these drives can be written ten times a day for five years without wearing out the flash memory!”

    Whether that is true or not, they cost a metric fuckton of money.

  12. Let me recommend a popular, cheap CPU cooler, as your AMD stock CPU cooler is among the loudest of all stock fans: the Cooler Master Hyper 212 EVO CPU cooler, which goes for about $35, so well within budget.

  13. Eric –

    My 2¢ worth –

    Unless you already have a spare of each, don’t forget to budget for a (low end) video card and monitor for your old (current) box. I still recommend keeping it on as a backup destination and “fire extinguisher”.

    More RAM => More Better!

  14. Have you considered a closed loop liquid cooler for the CPU? I have yet to build a PC (grain of salt etc…), but it sounds like a simpler setup than a big air cooler.

  15. As John says, “More RAM => More Better!” But I would add the warning to read the documentation for the processor and motherboard you ultimately choose–especially the fine print. The supported memory speed might go down if and when you populate all of the slots.

  16. I went through an upgrade recently, so this is of interest.

    My old desktop machine was built from components and upgraded in place over more than a decade till the mid-tower case was the only remaining original part. It had a dual-core Intel Core Duo CPU and 4GB RAM, and multi-booted Win XP Pro and Ubuntu. It finally developed issues I was having problems curing. My SO said “You really need a new machine. You have $500 to spend on it.”

    By happy coincidence, I’d gotten email from computer retailer MicroCenter, who has a store in Brooklyn, offering deals. One was a refurb Dell Optiplex 755 Small Form Factor box with a 2.4GHz quad core Xeon processor, 4GB RAM, a 250GB SATA HD, and Win 7 Pro, for $250.

    That was a decent base. I added enough RAM to take it to 8GB (the max the Intel chipset on the Dell mobo supported), a 240GB Crucial MX-100 SSD, and later added an ASUS HD 5450 Silent PCI-e video card with a GB of video RAM (an OEM AMD/ATI design.) (I’m *not* a gamer, but do enough stuff like Google Earth that uses 3D that the poor 3D performance of the onboard Intel graphics was an annoyance.) I didn’t need keyboard or mouse, but the machine came with them, so I have spares that will be useful. My final price was about $550, and the extra $50 was mostly the after the fact video card upgrade.

    The big win in performance was the SSD. While Intel is the gold standard, I had no qualms about the Crucial product. Crucial is a unit of DRAM manufacturer Micron, and I trust their products. Migrating Win7 from HD to SSD was straightforward. Getting the 755 to boot from it was more complicated, due to a BIOS limitation on the 755. Installing Ubuntu to dual boot once the SSD was in place as the boot drive was painless.

    The result boots from powered off to a Win7 desktop in about 45 seconds, and somewhat less for Ubuntu. Large programs generally load in about two seconds. On the Win7 side, I found an open source 64 bit ramdisk, so the machine booted into Windows sees a 512MB drive Z:, and that’s where I put things like browser cache. You plan to add gobs of DDR3 RAM. You can allocate RAM to a ramdisk in Ubuntu, and that’s where I’d put things like temp files created by cvs-fast-export.

    I’d think about the exact size of SSD required. In my case, the intent was to store OS and programs on it in a basically “read-only” configuration, but have data reside elsewhere. SSDs are wicked fast, but there is an inherent NAND flash write limit of about 10,000 writes per memory cell. The SSD firmware attempts to spread the writes evenly, and transparently remaps failing cells and migrates the data, so degradation is graceful. On a consumer home machine, the machine is likely to be replaced with something newer/bigger/faster before degradation is even noticeable. On a server machine, that’s not necessarily the case, and what you’re building will essentially be a server machine. I’m not sure what you’re doing *needs* a 480 GB SSD. (And you need to make sure you have TRIM support for the drive enabled in Ubuntu.)
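
    If it helps, the two Ubuntu-side pieces I’m describing look roughly like this – the mount point and size are only examples:

      # /etc/fstab entry for a RAM-backed scratch area (browser cache, temp files, etc.)
      tmpfs   /mnt/scratch   tmpfs   defaults,size=2G   0  0

      # one-shot TRIM of the SSD's root filesystem; schedule it weekly via cron
      # or a systemd timer, whichever your Ubuntu release provides
      sudo fstrim -v /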

    In any case, I think you’ll find the new system a joy to use once it’s built and configured. I still have a happy smile on my face about my system, and it’s nowhere near what you are planning.
    ______
    Dennis

  17. http://en.wikipedia.org/wiki/Bulldozer_%28microarchitecture%29 This explains some of the issues. I grew up in the 80s, when the floating point operations were done on a co-processor, so I can see the argument of only counting the integer cores. This is what we get for being fuzzy in our terminologies – what was once the operating system is now the kernel, and Microsoft is (nearly) free to kill entire sectors of the applications fields by including some minimalist version in “the operating system” and watching people ask “Why should I buy your X when it’s already installed as part of Windows?” How many people remember Spinrite or PC-Tools and their defragging tools, filesystem checkers, and undelete tools?

  18. I will second the recommendation for the Cooler Master Hyper 212 EVO CPU cooler. It can mount two 120mm fans (gotta buy the second) in a push-pull configuration. Using the stock fans, it supplies plenty of cooling to my AMD Phenom II X6. It seldom spins fast enough to be audible unless I’m compiling a large application like OpenOffice or transcoding video, and never at full tilt unless I’m booting and the fancontrol daemon hasn’t started yet.

    The only catch with that particular CPU cooler and (I assume) similar, is that depending on the board layout, it may interfere with the RAM modules if they are using large heat spreaders. My personal solution to that problem was to simply remove the heat spreaders and I haven’t had any reliability problems. Last time I was haunting overclocking circles, RAM cooling was seldom deemed necessary.

    I would love to recommend better fans than the stock ones, but my former favorite 120mm fan, the Scythe Gentle Typhoon, has been discontinued and I don’t have a new favorite yet. Noctua is well regarded for silence vs performance, but they are also expensive, and so I don’t have personal experience with them.

    Foo Quuxman’s suggestion for a closed loop liquid cooling system is also worth considering. Sealed units with varying cooling capacity can be had at retail, which means no fiddling with pumps, tubing, fittings, waterblocks or any other such considerations. Plus the CPU’s water block would mount without interfering with the RAM’s heat spreaders.

    Is it too late to contribute to the fund?

    1. >Sealed units with varying cooling capacity can be had at retail, which means no fiddling with pumps, tubing, fittings, waterblocks or any other such considerations.

      That makes the possibility much more interesting. I’ve been avoiding water cooling partly because of build complexity and partly because all those joints seem like begging for a leak to get sprung sometime down the road. Can you recommend any particular sealed water cooler?

      >Is it too late to contribute to the fund?

      Not at all. I’d actually like to see some more bucks come in, because the switch to a Haswell i7 might get pricy.

    1. >The FX-9590 8 core goes to 4.7/5GHz – that is NOT overclocking.

      Oh, now that is very interesting and might get ahead of the Haswell i7 in the speed competition. Looks like it doesn’t have the Bulldozer shared-core hack, either. But this comparison says the i7 is faster single-core, which is the figure of merit probably most important for my job load.

      Still…looking at the PartPicker FX-9590 page, my eyes are drawn to the ASRock 990FX Extreme9 mobo because it’s the only one that’ll handle 64GB.

  19. We’re very happy with Samsung SSDs at work. Have bought many of them and they work. Admittedly the ones in our Nexenta blow up periodically but that’s because we’re abusing them with some simply insane amount of r/w and we expect it.

    The L2ARC comment makes sense. However I think with some cunning trickery you could split a 256GB SSD into 64G L2ARC for ZFS with your HD RAID array* + 192G OS boot. If you do try this and get ZFS to work on an Ubuntu/Debian flavor of Linux, I hope you will write up the steps, because it’s something I’m toying with on a test machine.
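
    To make the split concrete, something like this is what I have in mind – purely illustrative, assuming the SSD shows up as /dev/sdd and the pool is called tank:

      # carve the 256GB SSD: ~192G for the OS, the remainder (~64G) for L2ARC
      sudo sgdisk -n 1:0:+192G -c 1:os    /dev/sdd
      sudo sgdisk -n 2:0:0     -c 2:l2arc /dev/sdd
      # after installing the OS on /dev/sdd1, hand the second partition to the
      # spinning-disk pool as read cache
      sudo zpool add tank cache /dev/sdd2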

    *you are going to have more than one disk, aren’t you? With ZFS you can do funky things to match up drives of different capacities in the same pool, and I recommend that: different sizes mean different MTBFs, so you are unlikely to see what I call the “cascading matched Seagate” problem, where one drive in the array fails and the recovery/restriping work then causes other drives to fail too, defeating the whole premise of RAID. (In case you hadn’t guessed, I strongly recommend against Seagate – we’ve had a bunch of problems with them in the last couple of years – but WD works well.)

  20. I’m truly surprised no one has mentioned Silent PC Review http://www.silentpcreview.com for information on parts to build an extremely quiet PC (and links to system builders who can do the same thing for you)…

    Mike Chin is the owner and lead poster, and he is committed enough to the work to build an 11dB anechoic chamber in his house to better test sound levels precisely.

  21. Eric, Intel’s specs say the processor can only address 32GB of RAM and it only uses DDR3-1333/1600, so your faster DIMMs will be wasted.

    If you want the larger memory footprint try looking at the new E5-1630 v3 4c 3.7 GHz Haswell processor (server-grade). It uses the new DDR4 2133 MHz RAM and can use up to 768GB of RAM.

  22. A few thoughts which I haven’t seen covered yet:

    The manufacturing of flash is pretty reliable these days. The main failure point seems to be in the flash controllers, rather than the flash chips. Intel is known to have good flash controllers. Other brands are undoubtedly improving, but have certainly had problems in the past.

    The performance of RAM can be tricky. Faster RAM (not just more of it) is frequently the best bang-for-the-buck performance boost you can get as your processor is much faster than your memory interface. However, be aware of DIMMs which crank up the clock speed, but then increase the CAS/RAS time. This is the amount of time it takes for a given chunk of memory to become available when you want to read it.

  23. I built a new box a few months ago, upgrading my personal desktop from a 2009 Core 2. Every time I do a custom build, I have to go re-familiarize myself with the market. Here’s what I learned:

    – Despite being a fan of AMD’s efforts since Opteron, I keep ending up with an Intel core. They’re just better bang/buck in my target range. (And my target is usually the fastest single-socket, non-server core I can get with no overclocking.)
    – Historically, I’ve always just bought the latest Intel SSD. You certainly won’t go wrong there. However, the Samsung 840 Pro line is the performance champ right now, and seems to have good reliability.
    – If you need a spindle drive at all (and it’s worth asking that question, if only for noise reasons), the Western Digital Black series is one of the few that still has a 5-year warranty. (Not that I ever return a drive, but I figure their actuaries know more than I do about reliability.)
    – Antec has a nice range of super-quiet PSUs.

  24. I have been looking at going to ZFS, but I have been discouraged because of the cost of memory required to safely support ZFS. Most of my development machines are running at least several VMs (VirtualBox) and thus a minimum of 32GB RAM is needed. My next system will want 64GB for future expansion. The cost of 64GB DDR3 ECC is substantial (~$700 for 4x16GB) and not well supported by some chipsets.

    Are you planning to use ECC memory in your new development system?

    1. >Are you planning to use ECC memory in your new development system?

      Now that I have an offer to help me spec a good mobo/ECC-RAM combo, maybe.

  25. So no raid? I’m surprised it wasn’t even mentioned in your build. I agree that using the SSD for the OS is great, but for pure unadulterated speed on the hard drive front, why not a raid 0 or a raid 5? I have a 5-disk raid 5 system in my computer and it’s blazingly fast. I had to do something to keep up with the gigabit internet that was laid down in my neighborhood by the state government. My original hard drive couldn’t keep up with the internet throughput.

    1. >So no raid?

      I want to listen to the pro- and con- arguments some more. It’s not something I have an opinion about.

  26. AMD vs. Intel rundown, as I see it:

    All high-end AMD chips these days are in the Bulldozer family, as far as I know. I don’t think cvs-fast-export does any floating-point math, so the shared-core hack isn’t going to be a big problem for you. You do get separate integer cores and ALUs, and contention is only an issue when the FPU gets used. And the design has its strengths, especially in parallel applications, where AMD is beating Intel right now.

    But the fact is that in terms of serial execution speed, Intel is winning right now. AMD’s Bulldozer cores haven’t even reached the same serial instructions per clock (IPC) as the old Phenom 2 designs from 2-6 years ago, although they are clocked higher. Current Intel parts also run cooler and more efficiently than equivalent AMD parts, which means you have to worry less about heat dissipation and your electricity bill will be a bit less.

    On the other hand, AMD is attempting (sometimes successfully) to undercut Intel on price, so you might be able to go further upmarket than you otherwise could for the same money and get more bang for your buck anyway. Look at specific benchmarks for specific parts before you make that decision, and also be sure you know what types of workloads each benchmark is simulating.

    Then there’s the question of integrated graphics. Both AMD and Intel are now integrating graphics cores into some of their CPUs. AMD calls these “APUs,” Intel is calling them “SOCs,” but they’re the same thing. These GPUs are probably going to be sufficient for your needs (able to drive two monitors.) The advantage of going this route is that you no longer need to install your discrete GPU, which will again be good for heat dissipation and electricity use. It’s not typical for extreme high-end processors to include integrated GPUs (it’s assumed that gamers will want discrete GPUs,) but I’d suggest giving serious consideration to this option.

    In terms of actual graphics technology, AMD beats Intel hands down (the gap is narrowing, but it’s still very real.) However, in terms of the quality of the open-source drivers on Linux, Intel is winning at the moment. AMD has robust open-source support for their older GPU designs, but these new designs may compel you to use their horrid Catalyst proprietary drivers.

    Intel SSDs are and always have been the best, but the gap is narrowing and you might be able to save yourself some money by buying another brand. There were a series of Anandtech articles a few years ago about serious performance problems in the SSDs of the day, demonstrating with benchmarks that Intel’s SSDs were the only ones with satisfactory random I/O performance. Those articles helped promote the notion that Intel SSDs were the only ones worth buying, which was true at the time. However, that was years ago, and the situation is different now.

  27. > Can you recommend any particular sealed water cooler?

    Taking a look at the case, the pictures show an NZXT Kraken X60 mounted up top, so that should fit with no trouble. However, the reviews I’m finding show that the Corsair H110 cools just as well, and is quieter when comparing both units with their stock fans. Another review I find on youtube shows that holding true with high quality aftermarket fans as well. Both use 2x140mm fan radiators, so if the X60 fits, the H110 should fit as well.

    As you’ve stated you don’t intend to do overclocking, you may be able to get away with an NZXT Kraken X40 or Corsair H90. Both are the respective little brothers of the aforementioned models, being 1x140mm units. If it fits, mounting an X40 or H90 in the rear 140mm fan slot would free up the top for two more 200mm fans.

  28. Relating to MaxE’s comments about video, if you decide you need discrete graphics, look into the “silent” video card options on e.g. NewEgg.

    (They have big-ass whole-slot-wide heat sinks with lots of fins, and no fan; not only silent, but nothing to break. I once had my home server fail because the video card fan had worn out and killed the card.

    One less moving part is better in and of itself.

    I don’t know what the Linux X driver situation is these days, so I can’t suggest a specific card or maker, but there were lots of options last time I looked.)

  29. (Addendum to the above: That might be a durability win even if the on-MB video is sufficient in itself.

    Moving the video heat generation to a way-overcooled but still passive card that isn’t doing anything more hardcore than X compositing would reduce load and thus heat generation on the CPU die, and thus let your CPU cooling fan spin slower – or put less load on your liquid system, which IIRC in most cases has its own radiator fan anyway.)

    1. >I believe the Intel Core i7 4790K at 4GHz is the best fit for fast single-threadedness:

      Yeah, my web research is pointing in the same direction.

  30. > SeaSonic SS-750KM3 is looking good as a PSU candidate

    This looks like the next step up from the model running in my machine right now. I can confirm they’re good stuff.

    > I bought a “16-core” Opteron last year which turned out, upon more careful reading and testing, to really be an 8-core machine with two threads per core. What the rest of the world calls a “thread” is what AMD calls a “core”, and what the rest of the world calls a “core” is what AMD calls a “compute unit”.

    What model was it? A friend brought up something similar recently. When I went digging, I found that the compute units on the chip he’d bought had two integer processing units and one shared floating-point each. It’s questionable to me what constitutes a “core” in that case. Advertising the number of compute units as “cores” seems like it undersells its capacity to process integers by half. Advertising the number of integer units seems like it oversells its capacity for floats and whatever else was shared. Going with the literal truth, “Number of cores is an ambiguous measurement, so here’s the technical details of what we did,” invites all the problems involved with trying to get the public to understand any technical specification more complicated than We Big Numbers Have.

    I’m not surprised marketing went with the biggest number they could defend, of course, because Marketing. How annoyed one should be probably depends on whether one needs integer or float processing more.

    (the above assumes you had the same or functionally similar kind of chip. I don’t remember exactly what my friend had, and last I checked Intel is on top of the high-performance end of things right now anyway)

  31. > I will second the recommendation for the Cooler Master Hyper 212 EVO CPU cooler.

    Thirded. I’ve used the Hyper in at least five systems over the last few years, for myself and others. I agree with the caution about it possibly blocking RAM slots; the heatsink is a monster. It may be worth trying to dig up reviews — or at least user reports — that have tried it on your specific board.

    1. >Thirded [for the Cooler Master Hyper 212 EVO CPU cooler].

      OK, I’m willing to consider that choice made unless we go for the exotic water-cooling option, which I consider still a live possibility.

  32. I strongly recommend trying for an ECC build. Support from desktop motherboards is erratic, and you’ll need to ensure that your processor supports it, as the memory controllers are integrated on all of them these days. IME, it’s worth trading off a bit of memory latency to prevent cosmic rays from borking your conversion in a spectacular manner.

    Look at Crucial’s MX100 line of SSDs. They perform well, use quality chips, and are very aggressively priced. You could do a lot worse than to grab a 256GB unit (on sale at Newegg recently for $99) and consider upgrading that component at some point in the medium-term future. Set up LVM (and remember to turn on TRIM), and adding or replacing the drive will be quite simple.
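
    A minimal sketch of that LVM setup, with the device name and sizes as placeholders:

      # put the SSD under LVM so growing or swapping it later is painless
      sudo pvcreate /dev/sdb
      sudo vgcreate vg_ssd /dev/sdb
      sudo lvcreate -n root -L 200G vg_ssd
      # for TRIM: set issue_discards = 1 in /etc/lvm/lvm.conf, and either mount
      # filesystems with the discard option or run fstrim on a schedule
      # replacing the drive later is just:
      #   vgextend vg_ssd /dev/sdc && pvmove /dev/sdb && vgreduce vg_ssd /dev/sdb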

    > Whether that is true or not, they cost a metric fuckton of money.

    That’s because “server” or “enterprise” SSDs are almost all using SLC. Much more reliability, but drastically lower bit density per transistor.

  33. > So no raid? […] why not a raid 0 or a raid 5?

    I must strongly disagree. At home, only go with mirrored raid 1 (or I guess ZFS in raid 1). All other versions of raid are too expensive, too hard to upgrade, too much of a pain in the ass, when something goes wrong.

  34. As for APUs/SOCs: Not candidates for what Eric wants. I have an A10-5800K, and it reports as having 4 cores in Linux. (Dunno if that’s two FPUs/four integer units, or four and eight.) It’s fine for the use case I built it for, running a Second Life viewer, but it would be grossly underpowered for Eric’s use case. (And, FWIW, though the chip’s AMD 7660D graphics wasn’t bad, the system only got seriously fast when I stuffed an Nvidia GTX 570 in it.)

    I think Eric’s going to want 3D graphics hardware, but mainly for X/Wayland/flavor-of-the-week compositing, and Intel’s crappy graphics will be fine for that.

  35. You need more memory, and, as mentioned, ECC. I would even say that you should skip the consumer board entirely and get a supermicro server board. The only catch there is that figuring out which board/cpu/memory set you need can be maddening.

    If you want to go that route, let me know and I’ll help find the right parts. I’ll even kick in a bitcoin or two if it comes up over budget.

    1. >If you want to go that route, let me know and I’ll help find the right parts. I’ll even kick in a bitcoin or two if it comes up over budget.

      Can’t turn down that offer, especially since it was in the back of my brain that I wanted to investigate the possibility of ECC. I’m serious about wanting high reliability above all.

      So, here’s what I request: Please spec out a mobo/RAM combo based around the 4GHz i7 and ECC. It’ll be easier to make a buy decision when we have an actual BOM to look at.

  36. Eric –

    You mentioned that Dave Taht had given you a SSD. Would you be willing to tell us what kind and size it is? And would you be using this in addition to, or as a replacement for, the Intel one spec’ed in the “Tyro” config?

    1. >Would you be willing to tell us what kind and size it is? And would you be using this in addition to, or as a replacement for, the Intel one spec’ed in the “Tyro” config?

      Sure, but I have to find it first. I’ll dig through the relevant piles when I’m more awake.

  37. Mike – this is drifting a bit off-topic, but what about the design of btrfs do you find problematic? Its not-doneness is readily apparent, but as far as the design goes it seems pretty comparable to zfs, with each filesystem potentially supporting features not found in the other.

    zfs is definitely aimed more at the SAN market – it leaves out features like being able to expand a raid-z vdev, but has prioritized things like having the zil on a separate device, etc. Btrfs actually seems a bit more flexible for smaller installations – it can switch raid modes on the fly, add/remove individual disks from arrays (at the equivalent of the vdev level, not the zpool level), etc.

    But, I could be completely ignorant of some design flaw, so I’m interested in hearing your opinion.

  38. As Brinton said, check http://www.tomshardware.com/reviews/ssd-recommendation-benchmark,3269.html for SSDs. SSDs are lovely, and much better than they were a couple of years ago.

    Beyond that, Tomshardware does quarterly box builds at 3 or 4 budget levels, explains the bad and the good, and debugs the hardware impedance/physical mismatches. Those recent build components are (in my experience) available on Amazon and Newegg (which sometimes sells the build as a kit with some discount).

    Even with Tom’s recommended kit, you will have to cope with hi-tech prototypes from Silicon Valley/Japan built in China, documented by Japanese engineers and translated by native speakers of *. Beware of power supply multi-pin connectors which butt together. My Corsair power supply’s instructions told me to socket them in the wrong order. One month of really bad instability almost caused me to take the whole box out back and use it for target practice. With new or changed hardware, it is almost always the cables/connectors!!!

    Tom’s current recommended kit for this quarter: http://www.tomshardware.com/reviews/build-high-end-performance-pc,3942.html.
    Swap out the gaming GPU card for a <$200 dual monitor (for normal stuff) card, and you are in budget.

    Run far away from RAID anything, unless it’s hardware RAID and you have a drawer full of spare drives of the same build. I use an underpowered and cheap 1 cubic foot box (HP N40L) with BIOS hacked to support 6 HDD and manually robocopy (Win7 version of rsync) to make periodic one-way backups. No RAID, no automatic propagation of a virus, but I might lose a month or so of media, which is backed up on the net. As is your stuff?

    Your motherboard/case should support USB 3 – external drive docks for USB 3 are very fast, practical and reliable, and are very good for backups.

    If you wish, I can send you a couple of lightly used 2TB drives for backup. They are 7200 RPM Hitachi, and should be good for backups for a few more years. I have recently swapped into 4TB.

    Conrad

  39. > Run far away from RAID anything, unless it’s hardware RAID

    Hardware RAID is nearly always crap. Unless you’re running Adaptec or 3Ware, the “hardware” is about as much hardware as a Winmodem anyway, and even a Pentium wouldn’t notice the load of software RAID on a competent OS (i.e., not Windows NT). Linux MD is perfectly fine *if* RAID is really wanted, which doesn’t seem to be worth much in this case.

  40. Eric:

    Regarding your choice of case, the black version looks like mine on the exterior, though the manufacturer is different, and in that vein I’d like to offer a warning:

    If at all possible, try to find photos of the *inside* of the case, and consider finding another (weighted according to how much ease of maintenance is important to you) if the case has a black interior. I’m still young and of fairly good ocular health (8 diopters of myopia notwithstanding), and I find that the all-black interior of my new machine’s case causes a fair bit of eye strain during maintenance and slows down work inside the machine. Given that you’re around three decades older than me, it may be a more significant issue for you. On the other hand, given that there’s a white version of the case, there may be different internal color schemes between the two versions, so this may not be an issue.

    Fortunately, in my case (no pun intended), I don’t anticipate having to open the machine up a lot more.

    1. > Given that you’re around three decades older than me, it may be a more significant issue for you.

      Nope. Eyesight showing no sign of degrading, thank goodness.

  41. Christopher, by hardware, I meant enterprise Adaptec, which is the only RAID I know which even pretends to increase reliability. But $$$. And the contractors still manage to destroy during recovery.

    “Run far away from RAID anything, unless it’s hardware RAID

    Hardware RAID is nearly always crap. Unless you’re running Adaptec or 3Ware, the “hardware” is about as much hardware as a Winmodem”

  42. > Run far away from RAID anything, unless it’s hardware RAID

    Totally disagree. Hardware RAID uses proprietary disk layouts that make recovery insanely difficult (if not impossible) if the controller goes kaput. At least with Linux MD, not only is MD the same on any Linux system, but the disks are labelled and assembled by UUIDs, making it trivial to access data even if all the disks are shuffled into a different order. That being said, Linux MD still suffers from some of the failings of any other standard RAID, namely being totally inadequate at handling errors short of a whole disk dying. The new generation of file systems (ZFS, HAMMER, btrfs, some others) implement their own RAID-like schemes that can recover from bit errors and such (primarily because everything is checksummed; regular RAID has no checksums at all).
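
    To make the UUID point concrete, a sketch with placeholder device names:

      # build a two-disk RAID1 out of partitions
      sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
      # record the array by UUID so it assembles no matter how the disks are cabled
      sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
      # each member carries the array UUID in its superblock
      sudo mdadm --examine /dev/sda1 | grep -i uuid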

  43. You can go to places like McMaster-Carr and order industrial-grade Rotron fans that are rated by CFM and sound output in sones. Many yahren ago I went through a sound reduction phase and went for two 110VAC low-noise 5″ fans; one to pull air through and out the power supply (modified case) and an internal one and some cardboard ducting just for circulation. I later went to just the power supply fan to move fresh air and kept the internal circulator fan. It’s not so much a matter of how much air you pull through the case, as getting air to the hot stuff.

    I once lucked onto a big bag full of 1/8″ foam discs with peel-off sticky backs. They were scrap from some kind of gasket punch. Lining the case with a couple of layers of that helped a lot; the big flat panels of the case are good at passing sound out. You can buy adhesive foam, cork tiles, truck bed liner, etc. at your local big box hardware store.

    Back when 5-1/4 hard disks were the norm, I tried rubber-mounting some in the old slide rail mounts. I’m not sure it did anything useful, though.

    Video cards that have their own fans: generally they’re the loudest fan in the machine. (scientific test with rolled paper tube to listen through)

    Since my first PC in 1986, the #1 failure mode is the power supply. I seem to have one die every two or three years. It doesn’t seem to matter how much they cost or how many watts they’re supposed to deliver, or even if I add an oversized or extra fan. A couple of client sites I have, same thing. If there’s anyone making a long-term-reliable ATX power supply I haven’t found them. I finally wound up keeping a spare in the closet.

    If you put the machine on the floor you don’t hear as much noise as when it’s up on the desk. But I hate crawling on the floor to play with cables.

  44. > Since my first PC in 1986, the #1 failure mode is the power supply. I seem to have one die every two or three years. It doesn’t seem to matter how much they cost or how many watts they’re supposed to deliver, or even if I add an oversized or extra fan. A couple of client sites I have, same thing. If there’s anyone making a long-term-reliable ATX power supply I haven’t found them. I finally wound up keeping a spare in the closet.

    Have you ever tested your AC quality?

  45. So Christopher, Mike and Conrad agree that RAID sucks, and we don’t use it! Why does it suck?

    1. Stuff is more reliable, so periodic backups do the job. If you need atomic backup, use a database.
    2. Rabbits trying to recover a disk array failure are likely to destroy the whole array.
    3. The math for RAID 6 is so pretty, why oh why doesn’t it work? Have you tried to read the documentation for recovery for those products? Making it work requires persons to make it work.
    4. Software (robocopy, rsync, etc.) is much more flexible, and while it can destroy what you are trying to save or back up, you can try it out in a folder, where “scare quotes” hardware can destroy the whole drive/array.

    Conrad

  46. My heartfelt thanks go to Charles Barker and Thomas Blankenhorn, who have donated $50 and $100 respectively to the Help Stamp Out CVS In Your Lifetime fund. That brings the total to $860.

  47. @esr:
    >Nope. Eyesight showing no sign of degrading, thank goodness.

    Even so, I find working in a black-interior case, in a well lit room, stressful even for mid-to-late 20’s eyesight. If your eyesight is as good as it was 30 years ago, interior reflectivity should be a concern. For anyone whose eyesight *is* degrading, I’d say it’s a critical factor in case choice.

    > It’ll be easier to make a buy decision when we have an actual BOM to look at.

    Does this help? :-P

    0xEF, 0xBB, 0xBF

  48. If quiet is what you’re after, there’s a video demo on how case design can make a big difference:

    https://www.youtube.com/watch?v=CB-_JIjhAyA

    I don’t think the wood is necessary. He also put a huge CPU cooler on his video card in another vid, but that probably won’t make any difference here. Just in case it gets missed: check your replacement heatsink’s thermal resistance in deg/W or conductivity in W/deg. And don’t forget the Arctic MX-4 heat sink goop, even if you don’t replace the heat sink. The stuff is almost certainly better than the factory gunk the CPU comes with and helped me resurrect a Lenovo T61 laptop (specs TL;DR version: envies the system we are replacing here.)

  49. Another vote against RAID.

    RAID is *not* for preservation of data. Anyone trying to use it for that purpose deserves what they get when they misdirect an rm -rf. I’m sure you know that but it’s a peeve of mine. :-(

    RAID *is* for continuity of service — i.e. if you’re running a server and don’t want whatever it’s providing interrupted when a disk inevitably dies. You’re not doing that, as far as I can tell. Assuming /home is backed up, you can get from zero to wherever you were in an hour (for OS + app reinstall) plus restore time. What you’re getting from RAID is the convenience of not having to wait for a replacement drive and do a restore if a drive fails, plus whatever work you’ve done since the last nightly backup. I’m guessing most of what you do gets pushed to an online repo regularly, so the latter is probably already covered.

    There is a performance benefit from RAID5, 10, or similar. I’m pretty sure it’s dwarfed by SSDs, but haven’t measured. It probably depends on one’s use profile — I would expect lots of random access to be faster on SSDs and lots of sequential access to be faster on spinning RAID5/10. But I’m not sure, and that’s 3+ drives worth of moving parts to maybe match 0 drives worth of moving parts.

    (you could, of course, RAID some SSDs, but that could get very expensive very fast)

    RAID 0 is just asking for trouble.

    If the convenience of not having to order a drive and do a restore in the case of hardware failure seems worth the complexity/moving-parts cost, then do RAID1. Otherwise don’t bother. Complexity is a cost for system builds, just as with software.

    (for my part, I have an SSD operating system drive in each machine in my home, which I treat as expendable. All personal files are on an NFS/SMB fileserver with massive spinning platters that gets backed up every night. Backups and restores only have to be managed on one machine. Any given PC could get hit by a meteor and not inconvenience me much.)

    1. >If the convenience of not having to order a drive and do a restore in the case of hardware failure seems worth the complexity/moving-parts cost, then do RAID1. Otherwise don’t bother. Complexity is a cost for system builds, just as with software.

      That make sense to me, and moves me towards no RAID.

  50. > I want to listen to the pro- and con- arguments some more. It’s not something I have an opinion about.

    How long does it take you to install a working machine from bare metal?

    1. >How long does it take you to install a working machine from bare metal?

      Couple of hours. Why, are you suggesting I experiment both ways?

  51. Take a look at these:

    http://supermicro.com/products/motherboard/Xeon/C220/X10SAE.cfm
    http://supermicro.com/products/motherboard/Xeon/C220/X10SAT.cfm

    Notice the different slot layouts. I don’t know what other cards, if any, you need to plug in.

    Remember where I said that finding sets can be maddening? Supermicro lists those as working with “4th generation Core i7/i5/i3” processors, but doesn’t provide a list. Also, things like “4th generation” seem to me to be marketing terms more than hard specs. Still, the 4790K is listed on Intel’s website on a page titled “4th Generation Intel Core i7 Processors”, so it seems likely to work.

    http://ark.intel.com/products/family/75023/4th-Generation-Intel-Core-i7-Processors#@All

    The final madness is memory. Crucial’s memory finder tool gives part numbers specific to the motherboard, which are generally impossible to find at retail, or carry insane prices. The good news is that you can make a note of the specs and then find their generic memory that matches. This setup has 3 options:

    CT2KIT102472BA160B is the generic option for these boards. 16GB ECC in kit form as a pair of 8s.
    CT2KIT102472BD160B is the 1.35 volt version (instead of 1.5v)
    CT2K8G3W186DM is the 1866 MHz version (board and CPU are rated for 1600)

    I don’t have strong opinions on the details of the memory, so I can’t endorse any of them above the others. Regardless of your memory preference, you’ll need 2 kits, for a total of 4 UDIMMs, or 32 GB.

    I’m coming up with $270+$340+2x$213 = $1036 for the motherboard (assuming X10SAT), CPU and memory (assuming 1.5V, 1600 MHz). I skipped a bunch of posts, but I bet that the difference in cost between that and the consumer board/non-ECC version is less than 2 BTC.

    There are some other motherboard options available, too. And for something like this, I like to double check my selections, and maybe even run it by my CDW rep (who will, in turn, run it past his supermicro rep). My biggest concern is ECC support in that i7 CPU. The board says it supports ECC or non-ECC, and the CPU says it does not support ECC. I’m not sure who wins that battle.

    Oh, and I just realized that I didn’t look closely at your chosen case. Looked like a mid or full tower to me, so I’m assuming it can take a full sized ATX board instead of needing micro. Something else to double check…

    (Also, consider the Xeon E3-1281 v3 or E3-1283 v3 instead of the i7-4790K.)

    1. >Still, the 4790K is listed on Intel’s website on a page titled “4th Generation Intel Core i7 Processors”, so it seems likely to work.

      And an explanation by a later commenter supports this.

      >I don’t have strong opinions on the details of the memory, so I can’t endorse any of them above the others

      OK, I’m going to call this one for the CT2KIT102472BD160B (1.35v version). I see no price difference from the generic, and there’s no point in paying for 1866 speed if the memory controller can’t drive it. The idea of reducing power dissipation and generated heat is appealing.

      >I’m coming up with $270+$340+2x$213 = $1036 for the motherboard (assuming X10SAT), CPU and memory (assuming 1.5V, 1600 MHz). I skipped a bunch of posts, but I bet that the difference in cost between that and the consumer board/non-ECC version is less than 2 BTC.

      What’s a BTC?

      Anyway, with $860 in the kitty, $1036 doesn’t seem out of reach for the core hardware. With the other components under discussion (case, SSD, HDD) I see the whole thing costing out at $1578 – which, considering the non-ECC build is $1232, is pretty convincing. Just $350 seems like a reasonable premium for ECC.

      >And for something like this, I like to double check my selections, and maybe even run it by my CDW rep (who will, in turn, run it past his supermicro rep). My biggest concern is ECC support in that i7 CPU. The board says it supports ECC or non-ECC, and the CPU says it does not support ECC. I’m not sure who wins that battle.

      Yes, we absolutely need to know this. There’s a plausible scenario in which the i7 has no on-board ECC support but the mobo memory controller does.

      >I’m assuming it can take a full sized ATX board instead of needing micro.

      You assume correctly.

      >(Also, consider the Xeon E3-1281 v3 or E3-1283 v3 instead of the i7-4970K.)

      What do you consider the potential win here? Is it lower power dissipation? I see that Intel claims ECC support, so if nothing else this is a viable fallback should we get an unhappy answer about the i7.

      I’m going to call this the leading proposal for the core hardware, so far. I like the ECC, I like the low power dissipation, and I like the i7’s benchmarks on single-thread computing.

  52. > Supermicro lists those as working with “4th generation Core i7/i5/i3” processors, but doesn’t provide a list. Also, things like “4th generation” seem to me to be marketing terms more than hard specs.

    The Core i? “generations” actually correspond to feature sets and/or process ticks; Wikipedia has good comparisons on the “list of Core i? processors” pages. The generations are identified by the digit in the thousands place, so anything that’s an i7-4xxx is 4th generation.

  53. Regarding RAID, I’m a big fan of Linux MD. I’ve been using it forever, and it works great. One thing to keep in mind is that if something goes badly wrong, you will lose data if you panic. Your experience with shooting will be a big help here. Remember to stop and ask for help if you even suspect that you are starting to go beyond your experience.

    Proper hardware RAID isn’t so terrible any more, but still pretty bad. Being tied to one brand/model/firmware version of a controller card sucks. If things go wrong, your recovery will be either simple and painless, a test of your sanity, or impossible. And you don’t get to choose in advance, but planning can influence it, to some extent.

    Fake hardware RAID is pointless, particularly for Linux users. The performance losses are slight, probably even trivial (just like MD), but you couple that with all of the disadvantages of hardware RAID’s proprietary data structures and lousy user interfaces.

    I don’t like btrfs. It seems like the systemd of the disk world. Not that it tries to spread tentacles throughout everything, but it mixes a few genuine improvements (checksums for data and metadata) with a tendency to take too much responsibility into one place (RAID, LVM and filesystem, all in one). I prefer layers, and clean-ish interfaces between the layers.

    It sounds like you intend to load this box approximately once and use it for a long time. If so, you’ll be kicking yourself eventually if you don’t use some sort of RAID, at the very least on the system disk.
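
    For concreteness, a minimal Linux MD sketch of the “mirror at least the system disk” idea (device names are hypothetical, and in practice you’d usually set this up from the installer or a rescue environment):

    # create a two-disk mirror from matching partitions, put a filesystem on it
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    mkfs.ext4 /dev/md0
    # record the array so it assembles at boot (conf path varies by distro), then watch the initial sync
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    cat /proc/mdstat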

  54. Christopher Smith on 2014-10-18 at 03:25:34 said:

    > The Core i? “generations” actually correspond to feature sets and/or process ticks; Wikipedia has good comparisons on the “list of Core i? processors” pages. The generations are identified by the digit in the thousands place, so anything that’s an i7-4xxx is 4th generation.

    That actually makes sense.

    I don’t follow the industry like I did 15-20 years ago, so I no longer have the familiarity to know things like that. Most recently, I’ve been looking at Xeons, and if you know some trick to make sense of those numbers, I’d be very appreciative. It looks to me like the family numbers were assigned by lottery.

    At least ATI/AMD had the decency to publish guides on how to interpret their Radeon model numbers.

  55. So Christopher, Mike and Conrad agree that RAID sucks, and we don’t use it! Why does it suck?
    <snip>

    Don’t misread me. I think traditional RAID does have its shortcomings (especially compared to ZFS…), but it doesn’t quite “suck” in concept:

    1. Avoid hardware RAID, avoid proprietary disk formats. Using Linux MD is the easy solution here (I believe it’s even supported on *BSDs, but they have their own software RAID formats, too). I still recommend ZFS due to its huge reliability gains that MD cannot provide. (Say you’re running MD RAID1 and you get a bit flip. How does it detect and correct the error? The answer is “It cannot and does not.”)
    2. On to ZFS: it checksums all data and never overwrites data in place (of course, previously-freed blocks in the pool can eventually be overwritten). This allows it to detect bit flips and try to correct them with the redundancy available. The copy-on-write semantics mean its RAID5/6-like implementation (called RAIDZ1/2) completely avoids the infamous write hole, whereby data corruption can seep undetected into a RAID5/6 array when a crash or power failure hits at just the wrong time.

    I do have a fair warning if you do go with ZFS: at present there is no reshaping of top-level vdevs. This basically means that if you set up a RAIDZ1, you cannot add more disks to that particular RAIDZ1 (although you can add more vdevs to a pool). For a home desktop, two disks in a mirror is probably plenty, and definitely has the best performance characteristics. Single-disk vdevs can be converted to mirrors at any time, and the reverse direction is also true. (In strict RAID1, only two disks are allowed; many RAID1 implementations allow more than two disks, and ZFS’s mirror vdev allows any number as well.) One of the ZFS-on-Linux developers is actually tackling the problem of reshaping vdevs and other tricks that have long been impossible in the current implementations… no promises!
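
    For the two-disk mirror case, a rough sketch (assuming ZFS on Linux is installed; the pool name and device names are made up):

    # create a mirrored pool, then have ZFS walk every block and verify it against its checksums
    zpool create tank mirror /dev/sdb /dev/sdc
    zpool scrub tank
    zpool status tank
    # a single-disk vdev (or an existing mirror) can gain another leg at any time
    zpool attach tank /dev/sdb /dev/sdd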

  56. @kjj
    > My biggest concern is ECC support in that i7 CPU. The board says it supports ECC or non-ECC, and the CPU says it does not support ECC. I’m not sure who wins that battle.

    All three components (CPU, board, memory) must support ECC.

    There are no fast i7 4xxx processors with ECC:

    If you want ECC, you have to go with an i3 or a Xeon (oh the joys of market segmentation):

    (The Xeons also support more memory.)

    1. >If you want ECC, you have to go with an i3 or a Xeon (oh the joys of market segmentation):

      Damn. How much single-threaded clock speed am I giving up if I do that?

  57. @ESR

    I know that in at least some cases low-voltage RAM has faster timings.

    “What’s a BTC?”

    Bitcoin.

  58. @esr
    > How much single-threaded clock speed am I giving up if I do that?

    The v3 Xeons are based on Haswell, so it should scale linearly with frequency.
    i7-4790K: 4.4 GHz (no ECC, max. 32 GB, $339)
    E3-1281 v3: 4.1 GHz (ECC, max. 32 GB, $612)
    E5-1630 v3: 3.8 GHz (ECC, max. 768 GB, $372)
    The E5 has a little bit more cache, but probably not enough to offset the clock difference. The E3 would be able to use the same board as the i7.

    1. >E3-1281 v3: 4.1 GHz (ECC, max. 32 GB, $612)

      Yeah, that’s the one I was using in my last cost projection. Nice idea but $100 and change over budget.

      >E5-1630 v3: 3.8 GHz (ECC, max. 768 GB, $372)

      Plausible fallback position if the fund doesn’t make $918.

  59. I’ll reiterate my comment about memory speeds being reduced when you add more DIMMs by quoting scripture from a random HP server doc that states the situation nicely. You might have to do some minimax magic if you’re paying a premium for fast memory.

    “The number of DIMMs attached to a memory controller also affects the loading and signal integrity of the controller’s circuits. In order to maintain signal integrity, the memory controller may operate DIMMs at lower than their rated speed. In general, the more DIMMs that are populated, the lower the operational speed for the DIMMs.”

    1. “In general, the more DIMMs that are populated, the lower the operational speed for the DIMMs.”

      Yeah. Hard to know what to do with that information, though. To minimax the design we’d have to know what the falloff curve looks like.

      In the present instance, given my experience with the NetBSD repo, I’m pretty much figuring I’m going to have to max out RAM even if it costs me some speed.

  60. >How long does it take you to install a working machine from bare metal?

    >Couple of hours. Why, are you suggesting I experiment both ways?

    No, trying to get an idea of the amortized cost of disk failure. If you have decent backups of your stuff (rsync, replicated git, whatever), it’s not worth the effort to do RAID.
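
    For reference, “decent backups” can be as simple as a periodic mirror along these lines (hostname and paths are placeholders):

    # nightly one-way mirror of the working tree to another box; run it from cron so it actually happens
    rsync -a --delete ~/src/ backupbox:/backups/src/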

    1. >No, trying to get an idea of the amortized cost of disk failure. If you have decent backups of your stuff (rsync, replicated git, whatever), it’s not worth the effort to do RAID.

      Ah, I see. I rsync relatively frequently, and a lot of the most critical stuff is on forge sites. So I guess that answers the question.

  61. Having been on the sharp end of Intel’s bargaining back when we had to buy Intel EPROMs in order to get allocation of the Intel 80186 CPUs, I would always buy AMD on principle. Unfortunately, as others have noted, when it comes to the performance end of the curve, AMD has fallen badly behind.

    I alternate between doing the research and buying pre-built machines.

    My last machine, which is around 11 months old, was a pre-built Lenovo Ideacentre K450, with an i7-4770.

    I upgraded the RAM (and the OS, of course). Overall, I’m very pleased with it. It’s a medium on the noise spectrum — nowhere near as quiet as a silent PC I built a few years ago, but **much** quieter than the Asus machine that I bought and almost immediately returned right before buying the Lenovo.

  62. >In the present instance, given my experience with the NetBSD repo, I’m pretty much figuring I’m going to have to max out RAM even if it costs me some speed.

    With very limited exceptions, more RAM always trumps faster RAM.

    How much RAM are you looking at? Because the LGA1150 options, like the i7 and E3 you’re looking at, seem to top out at 32GB. Otherwise you’re looking at LGA2011-3, which is likely to involve (I haven’t looked at it lately) a much more expensive motherboard.

    1. >How much RAM are you looking at?

      I’m assuming 32MB. I’m aware there’s a helluva jump in the price curve above that level, and the system cost is pushing the budget limit now.

  63. Ok, so if we go with the E5-1630 v3, we need a different motherboard. Check this guy out:

    http://supermicro.com/products/motherboard/Xeon/C600/X10SRL-F.cfm

    My 128GB monster data cruncher is running the previous version of that board, the X9SRL-F, and I love it. But I really needed all of the slots on it. If you aren’t planning to run 32 SAS/SATA and 4 10GBE ports on it, it may be possible to find something more modest, and possibly a bit cheaper.

    It takes RDIMMs instead of UDIMMs. CT2K16G4RFD4213 is a 2x16GB kit. (DDR4 17000 instead of DDR3 12800)

    Newegg has the board for $280, and the memory for $434. Anyone have a good source for the CPU? The best I can find is random internet stores under google shopping for ~$400. Total cost for MB/cpu/memory is ~$1114.

    If you are going this route, note that all of the manuals for the supermicro boards mentioned so far include this caution: • If you buy a CPU separately, make sure that you use an Intel-certified multidirectional heatsink only.

    Between the E5-1630 v3 and the E3-1281 v3, my biggest complaint is the TDP, but that is a personal preference. Whether you’d be better off with the 10 MB cache and 3.8GHz or 8 MB cache and 4.1GHz is a tough, maybe impossible, question to answer. I don’t think it’ll be a huge difference either way.

    1. >Between the E5-1630 v3 and the E3-1281 v3, my biggest complaint is the TDP, but that is a personal preference. Whether you’d be better off with the 10 MB cache and 3.8GHz or 8 MB cache and 4.1GHz is a tough, maybe impossible, question to answer.

      Sorry, which is which? For this job, I think faster cache may actually be the win. If that’s also the lower TDP I think we have a winner.

  64. >In general, the more DIMMs that are populated, the lower the operational speed for the DIMMs.
    My understanding is that this effect is somewhat counteracted by buying all your DIMMs in a single kit (and not two of the same kit). That way the memory controller doesn’t have to least-common-denominator anything.

  65. @kjj
    > If you aren’t planning to run 32 SAS/SATA and 4 10GBE ports on it

    The X10SRL-F has 10 SATA, no SAS, and 2 1GbE ports.

    > it may be possible to find something more modest, and possibly a bit cheaper.

    Other server boards don’t really have fewer features, and workstation boards like the Asrock X99 WS are more expensive.
    Something like the X10SRL-F appears to be the cheapest board for the E5-1630 v3.

  66. “That makes the possibility much more interesting. I’ve been avoiding water cooling partly because of build complexity and partly because all those joints seem like begging for a leak to get sprung sometime down the road. Can you recommend any particular sealed water cooler?”

    I grabbed an Antec Kuhler 120 when I did a build-up a couple of years ago. It’s in the Antec P-180 case beside my credenza. ABSOLUTELY SILENT at this distance. I can hear the other box when it’s running, but not the big box. Sealed, no muss, no fuss, 2×120 (or maybe 140mm) fans on the radiator, easy mounting. And you are NOT cantilevering a kilo of weight off the mb.
    I recommend it greatly.

    I second the Seasonic X or SS lines. Have one in the mythtv box at home. The case fan makes all the noise.

  67. Wait, the X10SRL-F does not even have an audio chip.
    If sound is needed, the cost of a sound card (like the Xonar DGX) would bring it into the same price region as the Asrock X99 WS.

    1. >Wait, the X10SRL-F does not even have an audio chip. If sound is needed, the cost of a sound card (like the Xonar DGX) would bring it into the same price region as the Asrock X99 WS.

      That tips it for the Asrock, then.

  68. When working with large (out-of-cache) memory-intensive tasks, memory bandwidth is the most important performance issue (assuming enough space for the problem). In a given CPU socket a faster CPU usually will not make any difference when the performance is limited by memory bandwidth. Go for the fastest ECC memory you can get/afford, and the minimum CPU that will run the memory bus at full speed. If more budget is still available, upgrade the CPU.

    Sorry I’m not up on current offerings.

  69. Cervisia on 2014-10-18 at 14:36:02 said:

    > The X10SRL-F has 10 SATA, no SAS, and 2 1GbE ports.

    Which is why I needed six PCIe slots to add four SAS/SATA cards and two 10GBE cards…

    1. >You’re showing your age, there. =)

      I meant GB, of course. What’s a factor of a thousand between friends? :-)

  70. As long as we’re tossing out recommendations, I’m going to chime in for Arch Linux with Yaourt. I’ve just switched over to it from Gentoo for the machine I got a couple of weeks ago, and I think it strikes a very good balance between customizability and effort required to install and keep up to date. It does use systemd, but that’s becoming an unfortunate de-facto standard, and it does make boot significantly faster.

    > What’s a factor of a thousand between friends? :-)

    You should run for office!

    1. >As long as we’re tossing out recommendations, I’m going to chime in for Arch Linux with Yaourt

      No, not gonna change distros at the same time as a major hardware upgrade. That would be just begging for unanticipated interactions. The dread god Finagle and his mad prophet Murphy lie in wait for fools who attempt such as this.

  71. > How long does it take you to install a working machine from bare metal?

    Most operating systems, 15 to 30 minutes from sticking the install disc in.

    A fully-configured system, with all drivers and updates installed, networking, disc shares, printers, user software… depending on complexity, anywhere from half to a full day… if I don’t run into any intractable video, printer, or wireless networking problems.

  72. The E5-1630v3 has more cache and a lower clock, with a 140 Watt TDP. It also supports more and faster memory, and is cheaper.

    The E3-1281v3 has less cache, but a faster clock and higher turbo, with an 82 Watt TDP.

    1. >The E5-1630v3 has more cache and a lower clock, with a 140 Watt TDP. It also supports more and faster memory, and is cheaper.

      >The E3-1281v3 has less cache, but a faster clock and higher turbo, with a 82 Watt TDP.

      I think we have to go with the E5-1630v3 for this job load, then, and max out the RAM speed. Shame about the higher TDP – I too would have preferred to minimize that. Asrock X99 WS mobo, because I’m going to have to pay for sound one way or the other and I’ll always choose pre-integrated over a card for that (unlike, for example, graphics).

      The Asrock page for the X99 WS raises some interesting questions. I gather from reading the spec that it’s designed to support a particularly fast class of SSD that mounts to the mobo directly; ASRock’s implementation claims to cut latency by not going through the conventional SATA bus. It’s called an “M.2”. Has anyone worked with these? Are they known good for Linux?

      I’m also unclear about the native graphics capability of the board. The phrase “Supports AMD 4-Way CrossFireX™ and NVIDIA® 4-Way SLI” is opaque to me.

      Can anyone explain what “12-phase power” means?

      The NZXT case will take EATX, so that’s one blocker out of the way. I note, however, that the Amazon reviews say the stock 200mm fan in the case isn’t actually very quiet; anyone have a recommendation to replace it?

      We need to match ECC memory to the board.

      Then, make sure we have good processor cooling. Two proposals are currently live: one conventional air-fan-based design around the Cooler Master Hyper 212 EVO CPU cooler, one sealed-unit water-cooled suggestion around the NZXT Kraken X40 or Corsair H90.

      Everybody who’s used one praises the SeaSonic SS-750KM3, so that choice looks pretty solid.

  73. Backblaze stats aren’t universally applicable.

    For sane people (those running RAID) the failure rate isn’t that important. It shouldn’t be too high, of course, or you’ll be very annoyed at the time you spend RMAing drives. RAID users want enterprise drives (including WD Reds) because their firmware times out quickly on failed reads.

    A drive that can’t read a sector in 2 or 3 tries has a problem, and you need to know about it ASAP. A drive that takes 30 or 60 seconds to time out on a failed read makes degraded operation annoying. A drive that manages to read that sector on the 42nd try, and silently congratulates itself on the job well done is going to cost you data, probably sooner than later.

  74. > ASRock’s implementation claims to cut latency by not going through the conventional SATA bus. It’s called an “M.2”.

    These directly use either 2 or 4 PCIe lanes, still a rather young tech but expanding fast. Couldn’t say about compatibility…

    > I’m also unclear about the native graphics capability of the board. The phrase “Supports AMD 4-Way CrossFireX™ and NVIDIA® 4-Way SLI” is opaque to me.

    This means that the board has a bridge chip to split up the PCIe lanes more than is normally possible, so that it can support 4 graphics cards simultaneously. Somehow I doubt that you are interested in that.

    Here is a review of the board you are looking at: http://www.anandtech.com/show/8557/x99-motherboard-roundup-asus-x99-deluxe-gigabyte-x99-ud7-ud5-asrock-x99-ws-msi-x99s-sli-plus-intel-haswell-e/6

    Looking at the port list I don’t think it has built-in graphics support, though I can’t be certain.

  75. > “M.2”. Are they known good for Linux?

    In the worst case, these cards would have a ‘legacy’ SATA connection, but when in SATA Express mode, the controller’s interface to the driver is either AHCI or NVMe, both of which are supported by Linux.
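
    A quick way to check support on a given box (the kernel config path follows the usual distro convention; ahci is often built in, so it may not show up in lsmod):

    grep -E 'CONFIG_SATA_AHCI|CONFIG_BLK_DEV_NVME' /boot/config-$(uname -r)
    lsmod | grep -E 'ahci|nvme'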

    > The phrase “Supports AMD 4-Way CrossFireX™ and NVIDIA® 4-Way SLI” is opaque to me.

    ‘Normal’ graphics cards are supported anyway; this stuff is for connecting and synchronizing multiple cards.

    > Can anyone explain what “12-phase power” means?

    It means the power is spread over more MOSFETs. But it’s essentially just a marketing phrase for “lots of power for overclocking”.

  76. For my workloads (compilation, mostly) big cache helps, and having a hexacore to do the work helps also. So I stick with Intel: 12M cache, 6 cores.

    http://www.newegg.com/Product/Product.aspx?Item=N82E16819116939

    I have a case with a simply enormous 12 inch fan, spinning slowly. Can’t remember the brand right now.

    I move /tmp to a tmpfs filesystem, saving on writes during the aforementioned compiles.

    I find Intel’s graphics drivers to be very stable.

    Lastly, you can get a box in the cloud with similar specs and merely fire it up for those massive conversions, and not have to update at all.

    1. >Lastly, you can get a box in the cloud with similar specs and merely fire it up for those massive conversions, and not have to update at all.

      Couple issues with that:

      (a) What else would I spend the $915 people have generously contributed on? I didn’t ask for the donations to begin with, but they’ve piled up to a height where I now feel like I owe it to the audience to build this monster and deliver a performance report.

      (b) Cloud = timesharing = fewer cycles per second. Part of the point here is to be able to do conversions fast enough that a major project’s downtime during the switchover is tolerable.

      (c) It really was getting to be time for an upgrade.

  77. Thank you, Phil Sutherland, for contributing $20 to the Help Stamp Out CVS In Your Lifetime fund. Total stands at $880.

  78. And thank you, John Dougan, for contributing $10 to the Help Stamp Out CVS In Your Lifetime fund. Total stands at $890.

  79. Most cloud services aren’t particularly great at single-threaded performance. I’ve run the Gentoo git migration scripts (which are somewhat parallel) on a Phenom II x4 (real hardware) and on a 32-core EC2 instance. The latter was definitely faster, but it basically burned through most of the jobs in 20 minutes and then I got to watch one or two threads drag on for another half hour. The single-thread performance of that machine isn’t great.

    On the other hand, one thing you can get a lot of on EC2 is RAM. It is MUCH cheaper to fire off a spot instance for 25 cents/hr with 130GB of RAM than to put together a system with that much memory. They also offer SSD storage, but I suspect the performance of that may not be spectacular (I didn’t test it).

    I’d say the biggest reason not to use the cloud is your downtime concerns. Otherwise, the cloud is perfect for these kinds of once-and-done projects. To run the Gentoo migration on EC2 I just got everything running in a container and then uploaded the tarball to S3, extracted it on EC2 and chrooted into it after setting up mounts. If I were doing it enough I’d just set up an ebs snapshot for it.

  80. > (b) Cloud = timesharing = fewer cycles per second. Part of the point here is to be able to do conversions fast enough that a major project’s downtime during the switchover is tolerable.

    How reasonable would it be to do the full conversion once, and have the project still active while it’s running, and then do an incremental conversion afterward, to minimize downtime?

    1. >How reasonable would it be to do the full conversion once, and have the project still active while it’s running, and then do an incremental conversion afterward, to minimize downtime?

      That’s roughly how I do it now. Actually I tend to do a couple of trial conversions per project just to make sure I’ve spotted all the artifacts. Then show a beta of the converted repository. Then have the maintainers call a freeze for the time it takes to re-run the full conversion and drop it in place.

      The time to re-run the conversion can be a significant irritant, but incremental conversions (especially from CVS) aren’t really safe. This really is a case where having a bigger hammer will help.

  81. >> Can anyone explain what “12-phase power” means?

    >It means the power is spread over more MOSFETs. But It’s essentially just a marketing phrase for “lots of power for overclocking”.

    Incorrect, even if that is the desired end result.

    The mobo must downconvert the 12V supply to the CPU’s required voltage. Instead of the usual single-stage MOSFET switch and a monster LRC circuit to try and smooth out the voltage/current ripples, you have 12 of them phased in 30-degree increments before going into the LRC ripple filters.

    This creates a highly stable voltage and current supply for the CPU, and it is this stability that is required for over-clocking. This stability is also something the CPU adores even if it isn’t over-clocked. Add this to a high frequency, high efficiency switching P/S and you have enough filtering to counter all but the worst household line noise.

    As a side-note to this, I am often asked why I buy the best rated/tested over-clocking components for my machines even though I never over-clock. It is all these small things required for stable over-clocking, such as the above, that in a non-OC rig all come together as a super-stable machine that just plain doesn’t fail.

  82. And a big thank you to Joseph Presley, with whom I have not gamed in far too long, for contributing $50 to the Help Stamp Out CVS In Your Lifetime fund. Total now at $965.

    As a matter of potential interest, the CVS conversions I’ve already done with this tool include robotfindskitten, groff, and nethack. So the Stamp Out CVS project is already well underway.

  83. NetHack? Is that, or will that, repo be publicly available? I would love to have that :)

    For what it’s worth, I’ve converted two projects that are long dead from CVS using this tool. ZSNES (it basically has no value today besides historical interest), and ex-vi (this one could be resurrected; it’s an enhanced port of the original vi editor!).

    1. >NetHack? Is that, or will that, repo be publicly available? I would love to have that :)

      That’s…complicated. One of the nethack devs gave me a tarball of the CVS which I converted as a test. I then gave the git conversion to some nethack variant developers, because they asked and it’s open source, not realizing that the nethack devs had developed a culture of secrecy around that code. Which, I must say as an original devteam member myself, I consider a pathological development.

      One result is that some present members of the devteam are very pissed at me – though not the person who gave me the CVS snapshot, who seems to tacitly approve of the leak. I am pissed at them for being secretive and doing fuck-all with the codebase I contributed to for 11 years (no release since ’93), then treating me as a bad guy when I in effect (though accidentally) called them on their bullshit.

      Another effect is that the people who do have the git conversion are keeping it under wraps until they can graft the 11 years of changes in it (which don’t amount to a lot) onto one of the public variant repos. They’re also negotiating with the devteam about doing a 4.3.4 release. I’m not part of that negotiation.

      So the code will become available, but some politics has to be worked through yet.

  84. That sounds rather… awful. I do hope it’s resolved soon. I’m inferring from some of your post that NetHack development is kinda-sorta dead already. I have UnNetHack installed and I love it, but always wondered why the upstream project seemed so quiet.

    1. >I’m inferring from some of your post that NetHack development is kinda-sorta dead already.

      Has been for over a decade. Though I now have some hope that the disclosure will shame them into either issuing a release or allowing the variant devs who now have the code to do so.

  85. I really wish it were possible to delete or edit posts on here. In addition to the formatting error I keep making (too much time spent on forums that use those pseudo-HTML tags), it turns out that your mistake was actually that it was 2003, not 1993, so 11 years is correct – and incidentally the version is 3.4.3, not 4.3.

  86. Here is my 2 cents worth on RAID. With the newer motherboards I can set up two hard drives to mirror one another so that if one fails all I have to do is pull the bad one out and put a new one in.

    Today’s motherboard RAID is more forgiving of disparate hard drive types. It’s much easier to use various Windows-based disk management utilities. And when I pull out both HDs I have two copies of the same data. This is in comparison to a series of PCI-card-based RAID setups that my company had a few years back.

    This doesn’t negate the need for backups on a wide variety of media. Like others mention, if you delete stuff then you are done without a traditional backup. But it saves a ton of time when a hard drive goes bad.

    I realize there are other reasons for using RAID and other configurations of RAID to address specific needs. But the case I am concerned about is how fast I can get going when an HD goes down. Since multi-terabyte HDs are cheap these days, what makes the most sense in my mind is to use RAID 1 and pair two of them so they have the same data.

    The main thing I check is whether I can yank one of those HDs out, stick it into another computer with another motherboard, and see if it boots like a normal single HD. If it does then I am good with the setup.

  87. Thank you to Jeremy Banks, who has just added $25 to the Help Stamp Out CVS In Your Lifetime fund, bringing it to $990. I think this is his second donation, which is above and beyond the call…

  88. Robert, if anybody is going to use RAID on a linux-only box I’d STRONGLY recommend using mdadm software RAID over anything supplied in the motherboard/PCI/etc unless you’re talking about a serious RAID card with battery backup (not something that sells for $30).

    Really the future is more around things like zfs or btrfs, but both of those are fairly experimental on linux (the former is significantly less so). If you want traditional ext4+lvm+raid, then you REALLY should make that raid mdadm. It is hardware-agnostic and much more flexible. That said, I’ve run into kernel bugs with ext4+lvm+mdadm in the past.

    I do agree with the comment that RAID is more about uptime than data security and isn’t a true substitute for backups. However, a more nuanced way of looking at it is that backups and RAID are different solutions for different but overlapping sets of failure modes. For an outright disk failure a RAID will let you get by without any downtime or loss of data, while a backup will probably cost you downtime and will likely cost you hours of data loss as well (not much of an issue in this application). Backups also can be expensive to get right – adding RAID5/6 to a 5-drive array costs 20-40% of the cost of the drives to be protected, while a series of backups could easily cost more than 100% of the cost of the drives to be protected. Many backup schemes have gaps.

    Of course, in this case neither is really that critical. Presumably Eric is making use of purely open-source software/configurations, so ideally he should be able to quickly re-create the system from repositories mirrored all over the Internet, and from standard distribution package managers. A one-time snapshot of /etc and such might be all he needs to rebuild the system with little effort. The repositories he is migrating are likewise already hosted elsewhere, so they can be re-requested if they are lost. One of the big advantages of FOSS and modern linux distros is that the amount of stuff you really need to back up is minimized. I personally tend to use duplicity to do automatic cloud backups of the important stuff, and then just rely on RAID for the rest (cheaper, and I’m fully prepared to lose everything not covered by duplicity). It is important that your backups be automated – a backup that you don’t perform is useless in a failure.
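
    As a sketch of what “automated” can look like in practice (the target URL and paths are placeholders; for unattended runs you would either export PASSPHRASE or pass --no-encryption):

    # crontab entry: nightly backup of the stuff that actually matters, with the full chain refreshed monthly
    30 3 * * * duplicity --full-if-older-than 1M /home/user/important file:///mnt/backup/important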

  89. “Then, make sure we have good processor cooling. Two proposals are currently live: one conventional air-fan-based design around the Cooler Master Hyper 212 EVO CPU cooler, one sealed-unit water-cooled suggestion around the NZXT Kraken X40 or Corsair H90.”

    You are choosing the higher TDP CPU. Maybe consider the Kraken X61 instead of the X40/X41. Silent PC Review quite liked the X61 ( http://www.silentpcreview.com/NZXT_Kraken_X61 ). When tuned it turned in an amazing cooling level with very low noise levels:
    “Compared to previous coolers we’ve tested with dual fans, the X61 takes the top spot, just squeaking by the Prolimatech Genesis for the title. When paired with our reference fans, it drops a few rungs but it’s still within a degree or decibel of the very best.”
    And:
    “With the promising results of the NZXT Kraken X41, we were optimistic about the larger X61’s chances. Frankly, this is the sort of performance we’ve been awaiting since AIO liquid coolers debuted. After repeated disappointments, the X61 is the first to truly impress us, doing so in stunning fashion. Not only did it shoot past all the closed-loop water coolers we’ve tested so far, it climbed to the very top of our leaderboard. Even so, we need to note that if you want a truly silent cooling system, the X61 is not it. We were able to get the SPL down to 16~17 dBA, but no better. This is very quiet by any standard but not truly silent. In contrast, the top air-only coolers on our leaderboard can go all the way down to the 11 dBA ambient level of our anechoic chamber, albeit with higher temperature rise. ”

    I really like the Antec Kuhler 90 I have in the desktop box, but the Kraken X61 just jumped to the top of my ‘in-the-next-build’ parts list. And if I cannot find a case which easily handles the radiator, then I guess I will actually buy a case with a plexiglass side window, since that is a lot easier to cut than a metal case side!

    You may wish to revisit the case decision to ensure that you can fit the 310×140 fan footprint. The SilverstoneTek Raven line with the “rotated” motherboard design, or any of the cases with a top fan, also have advantages. I dropped the internal temp in the mythtv box at home by 8-10 degrees C by changing the Thermaltake Bach case from a stereo-receiver-style horizontal setup to a vertical mount with the face down and the PCI card rack up. Cool air in at the bottom, hot air naturally (or fan-assisted) out the top. I was surprised at the difference. And it’s a lot easier to fiddle with the wiring!

    I strongly second the suggestion to mount /tmp as a tmpfs in RAM. When I crunch files into ebooks, it takes about one-third the time if I rsync the entire working set of files into a tmpfs folder and work on them there. The fastest RAM available will help. I expected a good time reduction compared to thrashing files to/from the spinning hard drive, but I got more than expected even relative to the SSD. RAM’s bandwidth advantage beats even an SSD’s speed.
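
    For anyone who wants to try the /tmp-in-RAM trick, it is one fstab line (the size cap here is an arbitrary example; pick something your RAM can spare):

    tmpfs   /tmp   tmpfs   defaults,noatime,size=8G   0 0

    followed by a remount or reboot; a working set rsynced into /tmp then lives entirely in RAM.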

  90. To my humble astonishment, Douglas Kilpatrick has just donated $100 – only he designated it to the “Wipe out SVN in my lifetime” fund.

    :-)

    Yeah, I’m on that job, I am.

    Current amount $1090.

  91. *boggle*

    OK, I think the Xeon build just turned into a lock.

    Daniel Brooks donated $1,000 – yes, that’s three zeros – towards the build of the Great Beast.

  92. I did the same analysis and came up with the price per bogoMIP of the AMD-8350 as being matchless. So I’m typing this on one of my new “Personal Hadoop Cluster” nodes using that proc. ECC is a must for my application and a pretty good idea for yours too. So I did some hunting, and the 8350 supports ECC in hardware and under Linux quite nicely. Unbuffered ECC takes a bit of a performance hit if you’ve got lots of sticks per channel, but if it’s just two sticks per channel it only pops things down one memory-speed increment, which is quite tolerable compared with the step up in platform class needed to get the benefits of registered ECC DRAM.

    Short version: these sticks are only ~20% more money than their non-ECC equivalents.
    http://www.newegg.com/Product/Product.aspx?Item=N82E16820239724

    My chosen MB didn’t support ECC in the BIOS, but it doesn’t have to. This command says all the right things back:
    modprobe -v amd64_edac_mod ecc_enable_override=1
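
    (If you want to confirm EDAC is actually doing its job, the counters it exposes under sysfs are the thing to watch; these paths are the standard edac layout:

    grep . /sys/devices/system/edac/mc/mc*/ce_count
    grep . /sys/devices/system/edac/mc/mc*/ue_count

    A nonzero ce_count means ECC caught and corrected something; ue_count is the scary one.)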

    You can buy two complete 8350 systems for the cost of a Xeon one, and you have to go Xeon to get ECC with Intel (current i7s don’t do ECC). I’m guessing 2x 8350s will prove a vastly more versatile solution as well. If you had enough money to do a dual E5-2667, it might be a different picture. I’m not knocking the total crunch per dollar of the Xeons, but those $1700 dies will bring you to $3400 before you have purchased any of the rest of the system.

    1. >those $1700 dies will bring you to $3400 before you have purchased any of the rest of the system.

      Given the avalanche of money that has landed on this, “cheap” is no longer a major concern. If more bucks will buy faster single-threaded performance it’s a good call, and Intel is ahead there.

      I still want to avoid overbuying, but mainly so I don’t wind up with a system that’s noisier and hotter than it needs to be.

  93. There’s no match for Intel’s single-threaded performance. Nor for how emotionally satisfying it is. I purchased 288 of those E5-2667s for my last Hadoop cluster and have zero regrets.

    But for the home system, space and remote maintainability were not serious concerns, so my precious DL380 G8s were not required. I can literally buy 15 of the AMD-8350 systems for the cost of the dual-die DL380 G8s. So the pivotal question for fitting the app to the architecture was one of parallelizability. Hadoop 2.0 has tools to make that happen. I’ll have numbers to compare them in Hadoop races soon, but there’s no getting around 5x the bogoMIPS and 3.2 times the memory bandwidth at the same money. In the case of two systems, you can put in pulled 10GbE cards (HP 671798-001), connected by a fixed-length twinax SFP+ cable (eBay), and have 10G networking between them for $160.

    So how much do you need a spare World of Warcraft machine, and how parallelizable is your app? :)

  94. Please use RAID 5. You get increased performance and reliability. You just need 3 or more drives.

  95. Don’t get one SSD. Get two and put them in a Linux software RAID10 configuration. They will *fly*. RAID0 read speeds, RAID1 write speeds, and RAID1 data safety. Benchmark them first to see which offset layout setting (near/far/offset) gets the best results for read and/or write and choose accordingly. You’d think it wouldn’t matter with truly random access addressing on an SSD, but it varied from 10-25% on my two Samsung 840 Pros. I’m now seeing consistent ~1GB/s read transfers using -p o2.
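
    A minimal sketch of that setup, with hypothetical device names (o2/f2/n2 are the layout values to benchmark against each other):

    mdadm --create /dev/md0 --level=10 --layout=o2 --raid-devices=2 /dev/sda /dev/sdb
    mkfs.ext4 /dev/md0
    # quick-and-dirty sequential read check; repeat with --layout=f2 and --layout=n2 to compare
    hdparm -t /dev/md0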

  96. Hello. If you’re building a special-purpose box that only really needs single-threaded performance, I don’t quite understand why you’d consider relatively high-thread-count machines (AMD FX, i7, and the i7-like Xeons).

    A Core i3 would be fast for the job; there’s the i3-4370 at 3.8GHz. “Exceptionally”, it supports ECC too, where i5 and i7 do not (but Xeon E3 is kind of a special branding of i5/i7, with ECC support). Weird market segmentation games.
    To get ECC memory on the 1150 socket, it seems you need a motherboard with a “pro” chipset, i.e. Intel C222 or C226 or similar. That would be more costly than an $80 motherboard; on the other hand, they look more “servery”, with a COM port, dual Intel LAN, etc. (micro-ATX form factor).

    Example motherboards : Gigabyte GA-6LASH and Gigabyte GA-6LAS. Disadvantage is the 32GB max RAM.

    You need a 300W PSU more than a 750W one, I think. Peak efficiency is very roughly at half of nominal power. Say, a Seasonic G360 (360-watt, “80 Plus Gold”) will be very good for any single-CPU machine with no significant power use by the GPU.

    One nitpick: the i3 has 4MB of L3 while a higher-end Xeon has more, which will be used even if most of the Xeon’s threads are idling.

    1. >Hello, if you’re building a special purpose box that only really needs single-threaded performance I don’t understand much why consider relatively high-threaded machines

      The appeal of the Xeon build isn’t the number of cores, it’s the ECC RAM.

  97. I have read a few ignorant comments regarding the AMD Bulldozer architecture. I measured the performance hit caused by the shared elements of the two cores of a single module in a 12-core Opteron. In the worst case I got the performance of 1.7 cores instead of 2. In the best case I got 2.03. Yes, the latter was repeatable too; I guess the shared cache helps sometimes. And yes, this was a floating-point test.

    Comparing AMD modules to Intel Hyper-Threading is bullshit, sorry. In the best case Hyper-Threading brings the performance of 1.3 cores. In the worst case it is well below 1.

    Referring to the shared FPU is bullshit too. A module indeed contains one FPU. But we rarely see a program which does nothing but floating-point operations. Most importantly, the FPU is 256 bits wide, and the module divides it into 2 or even 4 parallel parts for the usual floating-point operations. The cores do floating point in parallel!

    Nowadays the general rule is that for the same money you get more multithreaded performance but less single-threaded performance from AMD, compared to Intel. Because single-threaded performance is preferred in this case, Intel is a good choice. At least for those who accept the disgusting moves of Intel against AMD and free competition.

  98. Wow, what a read. I wish you all the best in your quest for the perfect new machine, ESR!

    Noise isn’t too much of a consideration if you run the machine as a server away from your desk, like in your basement, or an extra room. Then you don’t limit the system’s performance due to making it fit into your living/work space. I run up to 20 systems at a time at home out of the basement, doesn’t affect anyone living upstairs.

    Maybe take the money, and build two systems. An updated desktop that is quiet, and a compute node. Looks like your tip jar is getting noticed, so hold off on buying something!

  99. Well, if you want to actually buy a machine (vs. borrowing colocated FOSS resources), my recommendation would be to get one based on a Xeon E3-1240 v3 (which is Haswell), or later. Anything from Haswell forwards in the Xeon class is just absurd in terms of crunching performance, and they run cool at the same time. Destroys any AMD offering outright. Seasonic power supply for sure… 400W fanless is my suggestion (you aren’t building a high-end gaming system after all; 400W is plenty plenty). I am not particular about cases but my tendency is to get ones with one or two 80mm+ intake fans and one 80mm+ output fan (low rpm, NOT variable), plus the cpu fan (which is variable), and splurge for the largest copper-fin cooler you can fit. That’s the quietest combination I know of without going nuts on the cooling. I do *not* recommend the more complex cooling systems, and they aren’t needed with newer Intel CPUs anyhow. Stay away from small fans… that’s where the noise comes from. The secret to low noise is to use slower, larger fans, period.

    For cvs->git crunching you will want reasonable RAM and a big SSD and a big HDD. No need to go absolutely nuts, but 32GB is baseline for RAM these days. The SSD should be at least 512G if you are going to work on big repos, plus a 2 TB HDD for target data. Put swap and the base system on the SSD at least, and I also recommend putting the originating repos there if they fit. That will handle most of the paging and I/O needs. You will want to try to avoid significant actual VM paging since that can wear the SSD out with large jobs, which means putting target repos on the HDD instead of the SSD (lots of writing/rewriting/removing/rerunning). For this sort of crunch work anyway.
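
    In fstab terms, that split might look something like this (device names and mount points are invented for illustration):

    /dev/sda1   /       ext4   noatime   0 1   # base system on the SSD
    /dev/sda2   none    swap   sw        0 0   # swap on the SSD
    /dev/sdb1   /work   ext4   noatime   0 2   # target repos on the 2 TB HDD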

    Really the most important aspect of a crunch system for this sort of work is ram and SSD, you can make do skimping on other things, but not ram or SSD.

    For the SSD brand I recommend ONLY Crucial or Intel. They maintain their own firmware on the chipsets and fix problems that arise and are the only ones I trust. I stopped trusting Seagate long long ago. For the HDD… I personally use Western Digitals these days because Seagate started monkeying around with their firmware too much. Whatever you do, do NOT buy Seagate hybrid drives. Not under any circumstances. I probably bought my last 3.5″ drive earlier this year… all drives moving forwards, including HDDs, are 2.5″.

    For a personal machine I don’t use RAID at all… it’s a waste of money. What I do instead is spend the money that the RAID would have cost on a very low-spec box with a big backup disk in it, and make sure my system is backed up daily to that, as well as to offsite. I don’t see very many HDD failures… they happen, in which case the system goes down once every year or two, which is not a big deal.

    -Matt

  100. ‘Tis a noble goal. I may be able to help.

    Denhac (denhac.org – check us out!) has a decent sized compute cluster and I can offer some resources to aid this project. I’m the Vice Chair and Keeper of the Infrastructure there, so I don’t think anyone will complain if I spawn a few VMs. In about two weeks, I can create 4 instances of whatever you like, each with 4 threads (2 cores) of either Xeon 5620 or Opteron 2378 (1/2 and 1/2), 4GB memory, and 64GB storage. Storage backend is Fibre Channel, and can pull 4Gb/s fairly consistently.

    Internally, they’d have at least a gigabit between each other, and up to 50Mb/s to the net. I can probably keep it up for at least three months, and there is node redundancy so only a very major maintenance or outage would take machines down or keep them down.

    Send me a message, svn must die!

  101. I needed a (mostly) single-threaded race horse for Simulink simulation work recently. I got JNCS to build an overclocked, cherry-picked, delidded i7 4960k system. They got it running at low temps (24×7 stability) at 5.1 GHz with 2 active cores. A Corsair sealed watercooling system was used. The last benchmark I ran compared it against an i7-3770k machine: 376 sec vs 250 sec for the same simulation.

  102. Silence, reliability and speed are contradictory requirements. Speed means power, and this requires moving the heat out. Every degree of temperature reduces reliability, so the cooler you run, the better. But this usually involves some noise.

    Xeons are not only doing ECC, they are also selected after the (highly variable) manufacturing process to run cooler and be more reliable. Water cooling reduces noise, but it only cools some of the chips (for example, only the processor). So, for best results, overspecify the fans and take care with the air flow inside the case (push air inside through filters, take it out through multiple fans). You must give up one of the requirements, and I would suggest noise. Accept some level of noise and move the server some distance away, or into a different room (not a small cabinet that you cannot cool), as somebody was suggesting. You can afford a 10Gb connection. Filters are important, as dust would also reduce reliability. Also, you need to overspecify fans for reliability: if one breaks there still must be sufficient airflow to keep everything cool. Choose ball-bearing fans, preferably.

    Get a fan-less `terminal’, and run the compute-intensive job remotely (in the next room). The fanless box will be totally quiet. You can avoid all fans: PSU, processor and case. I got one with a mini-ITX board in a small case, with an Intel Atom; it _does_ make a difference even compared to very low-noise PCs. BUT, it ran at about 50C and, while well within specifications, it broke within a year. My error was, I think, the small case. It should have been a larger one, with large radiators and good (if passive) air flow.

    Regarding the server, consider ready made ones. They are designed to run 24/7 for years, something that desktop machines are not. Desktop machines are supposed to run some 12 hours, then stop, and so on. There are very many issues to consider about reliability in a server, besides the number of fans.

    I ran an S7000FC4UR from Intel, with 4 Xeons and 6 HDDs. I think I installed it in 2007. It has run permanently since then (except for a few reboots that were not due to its hardware, but to long blackouts of our dear electricity company).

    1. >Get a fan-less `terminal’

      Suppose I wanted to do this. What would you use, and (assuming CAT5 between terminal and server box) how would you build and configure the terminal?

  103. And thank you to Ove Nipen, who has contributed $50 to (as he says) “stamp out cvs and svn”. Brings the total to $2575.

  104. There are some ready made, for example:
    http://www.endpcnoise.com/cgi-bin/e/std/category=Fanless_PCs.html
    (or search for ‘fanless pc’)

    The above seem pretty expensive for me, but are well within your budget. The one I bought a year ago, from another vendor, was about 300$ without a disk, but was not specified as a fanless PC. It has an Intel Atom D2550 and 2 GB, and I asked for the spinning drive to be replaced by an SSD (to avoid the noise) as well as for the fan to be removed. It also has an external PSU that would start a (noisy) fan if I don’t take care to place it with enough space around it to cool through convection, but that happens only occasionally.

    It ran for a year day and night without making any noise (I have a 3-year warranty and I’ll have it repaired). It doesn’t have an external heat-sink or heatpipes like the ones in the list above, so perhaps those cool even better. I monitored the temperature (using sensors). I also made a cooling tower out of cardboard and placed it on top. This increased air flow and usually reduced the CPU temperature from about 50 to about 45 (when not loaded) and from 60 to 55 when heavily loaded.
    The processor is specified at 100C maximum (but then again, it is not the only component that must keep running and stay cool).

    There is another trend in silent PCs that may be better. Some baby or full tower cases may operate fanless with up to 80W of power, mostly by convection. For example (I think):
    http://www.quietpc.com/nof-cs-80

    I have seen more of them while browsing in the past, but can’t find them right now.

    I think they must be better than the mini-ITX radiator-based ones because, again, the cooler they run, the more reliable they will be. I think it is essential that the case is placed and maintained without obstructing its air inputs and outputs.

    Regarding configuration, I prefer to keep on the ‘terminal’ a fully functional Linux installation, such as Ubuntu with Openbox, rather than some X terminal. I can browse from it. When I do work, such as running computations or anything demanding, I ssh -X to one of the servers–either the one in the other room or a remote one–and run things there. If I want to run a movie over the link, with only the window on this side of the connection, it is not smooth even over 1G [but this could be due to my slow local processor having to encrypt everything over the ssh connection], but for most work I do it is perfect–both fast and noiseless.
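
    Concretely, the remote-display arrangement amounts to something like this (the hostname is a placeholder; -C adds compression, which helps on slower links at the cost of some CPU on the terminal):

    ssh -X -C crunchbox
    # then launch whatever needs the horsepower; its windows appear on the local display
    emacs &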

    1. >The above seem pretty expensive for me, but are well within your budget.

      Well within my budget if I weren’t also trying to build a repository-crunching monster!

      Those prices are pretty shocking. I think I could build a Shuttle-based fanless PC for about a third of what they’re charging.

  105. > Those prices are pretty shocking. I think I could build a Shuttle-based fanless PC for about a
    > third of what they’re charging.

    You mean something like this:
    http://www.cnx-software.com/2014/06/27/intel-celeron-j1900-fanless-pc/

    This is more or less like what I have, for the same price (250-300$). This is possibly the cheapest available. A Raspberry Pi would be cheaper (50-100$ with accessories) and only takes 5W or so, but it is too slow for comfortable use. I don’t know of any intermediate example of a good terminal ($100+, significantly faster than an R-Pi, 10W); I would appreciate it if anybody could mention one.

  106. > Suppose I wanted to do this. What would you use, and (assuming CAT5 between terminal and server box) how would you build and configure the terminal?

    How powerful does the terminal really need to be, and do you have an HDMI monitor?

    1. >How powerful does the terminal really need to be, and do you have an HDMI monitor?

      I do, and will soon have two again. All I’d want, really, is something that would drive the monitors comparably to HDMI direct from the machine, relay my keystrokes down to it, and bring audio back up. Local storage and computing power beyond what the relaying requires would not be needed.

  107. If you want remote access without logging in, then there are two general options (I haven’t tried myself either of them).

    One is ltsp (ltsp.org). There are prebuilt terminals, for example: http://store.disklessworkstations.com/ltsp-thin-clients.html
    these also seem to be in the 250-300$ range and are atom machines.

    Another one is VNC. You run a server process on the server and access it with a thin client, such as this: http://www8.hp.com/us/en/campaigns/thin-client-solutions/t510.html which also has two video outputs.
    However, I don’t know how cheap they are (they seem hard to find under 300$), and I also don’t know about the noise.

    You could most likely also configure a shuttle-based computer either with vnc or ltsp quite easily. The problem is, it doesn’t get much under 300$.

  108. > All I’d want, really, is something that would drive the monitors comparably to HDMI direct from the machine, relay my keystrokes down to it, and bring audio back up. Local storage and computing power other than required for the relaying would not be required.

    An unqualified “comparably to HDMI direct from the machine” makes this a tall order compared to using ssh or x11 (and doing low-intensity tasks such as web browsing locally). Are you planning on doing anything with video / 3d / etc?

    Of course, the other alternative would be to put HDMI and USB through the walls, to the room where the computer (and the noise) is.

  109. Earlier this year, I moved the guts of my computer into a Corsair Carbide 300R:

    http://www.amazon.com/Corsair-Carbide-Mid-Tower-Computer-CC-9011014-WW/dp/B006I2H0YS

    I was doing GPU mining at the time and needed better ventilation. I equipped the case with three 140mm fans: two in the front, one on top. It’s not absolutely silent, but I picked fans (plain black fans, not riced-out LED fans) that still keep it quiet. More importantly, everything stays much cooler than it did with the older case. Build quality is pretty good, and you only need a screwdriver for the fans. At about half the cost of the case you’re considering, it’d free up some money for a faster processor or more RAM.

    For power supplies, I’ve had good luck with single-rail modular power supplies from Seasonic and Corsair. I’ve had some of them driving ASIC mining rigs out in the garage in Las Vegas, and they’ve not missed a beat. They’ll have no problem powering a computer kept indoors.

    As for things like processors and motherboards, I don’t have much current guidance to offer. The last time I was in the market for those, I needed something cheap for my media server, and ended up buying an AMD A4-3300 and an Asus F1A55-M LE…set me back a whopping $105 about a year ago for both. For serving files, it’s far more than adequate. I suspect you’re looking for something with a bit more grunt, however. (Besides, these have been discontinued AFAICT.)

    (The rest of my desktop configuration is kinda long in the tooth: a Core 2 Quad Q6600 (now on its third motherboard since I bought it in 2008), 6 GB RAM (had 2 until maybe a year or two ago), and a Radeon HD 6870 purchased a couple of years ago for mining. It’s still surprisingly responsive for most of what I run on it. I’m thinking an SSD could squeeze some more life out of it, still.)

  110. Cloud options might be off the table, but what about private “cloud”? Build/buy a desktop box that’s really just an appropriate monitor/mouse/keyboard setup with no meaningful resources in the local system and stuff a several year old server with a crazy fast processor and at least 64GB RAM into a soundproof corner of the house’s not-so-used space? Out of service servers are fairly cheap and often have a lot of life left in them, and a hardware RAID setup with a good cache layer makes SSD almost unimportant. Starting around 2011 the big manufacturers started making quiet systems that didn’t rely on Windows drivers for fan control, so a server box might be OK in your main space — I forget they’re on when I have one on my back desk for testing at work, and I’m sitting 3 feet away from the thing.

  111. I’ve been happily running RAID1 with MD (standard Linux software RAID) for years on my home box, and the hassle it saved me when one of my drives went flaky paid for itself. I was able to continue running for a few months before replacing the faulted drive, and total downtime was about half an hour to replace the drive and let it rebuild the mirror automatically.

    Do be aware that MD is compatible with ‘BIOS RAID’. You configure the RAID set in the bios before you install, and the MD driver will automatically raid things up for you and keep it that way. You get all the performance of Linux kernel RAID with perfect integration with the bios, the ability to boot with either drive failed, etc.

    Also, if you use RAID 1 to mirror your drives, you get the benefit of the kernel automatically balancing reads across the two drives, giving you a nice performance benefit. Writes go to both drives so there’s no performance benefit on that side, of course.

  112. Thank you, Jonathan Abbey, for contributing $100 “For improving reposurgeon and the world in general. Also for a nice dinner for Cathy”. It shall be done.

    Total now at $2665.

  113. To clarify: the half hour of downtime didn’t get the replacement drive fully mirrored; with modern-sized hard drives, that takes *forever*. But since it was rebuilding the mirror in the background, I didn’t have to care about that.

  114. > Raspberry Pi would be cheaper (50-100$ with accessories), it only takes 5W or so, but it is too slow for comfortable use. I don’t know of any intermediate example of a good terminal (100+ $, significantly faster than R-Pi, 10W),

    http://www.cubietruck.com/

    I have one, and I love it. It also has a SATA connector, which the Pi sorely lacks.

  115. Eric,

    My recommendations:
    * Please use two SSDs in RAID1 for your OS and swap.
    * Look for “embedded” versions of your Xeon processors; those versions are guaranteed to be available for seven years (e.g. E5-2640v3). Ditto for the server-grade motherboards that support them. You’re looking for a long-lived system, and I doubt you’ll want to be searching eBay for replacement parts in two years.
    * Motherboards like the SuperMicro X10SLR-F can support up to 512GB of RAM and have a dedicated IPMI port for remote hardware management and a remote graphics console via VNC.
    * I’d highly recommend using Linux software RAID. It makes it easy to recover your data on another system in the event of a motherboard failure.
    * Given what you’re doing, though, I’d recommend building your filesystems with an “external journal” on the SSD. Software RAID can bottleneck a bit when you’re doing operations on lots of small files, and using the (mirrored!) SSD for the journal will significantly improve performance while staying reasonably safe, since the SSD is itself RAID1. Mounting with “noatime,nodiratime” will also speed things up. (A rough sketch of this setup follows the list.)
    * I wouldn’t worry about the lack of an audio controller on the motherboard. Pretty much any graphics card with HDMI out will have an integrated audio controller.
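
    To make the external-journal suggestion concrete, here is roughly what the setup looks like, expressed as a small Python wrapper around the usual mke2fs/mount invocations. The device names and mount point are hypothetical placeholders, and the commands are destructive, so take this as a sketch of the sequence rather than something to run blind:

      #!/usr/bin/env python3
      # Sketch of an ext4 filesystem with an external journal on the SSD mirror.
      # DEVICE NAMES ARE HYPOTHETICAL: /dev/md0p2 stands in for a small partition
      # on the mirrored SSDs, /dev/md1 for the big mirrored data disks.
      # (The journal device and the filesystem need matching block sizes; the
      # 4k default is fine for both.)
      import subprocess

      SSD_JOURNAL = "/dev/md0p2"   # hypothetical journal partition on the SSD RAID1
      DATA_ARRAY = "/dev/md1"      # hypothetical data array on the spinning disks
      MOUNTPOINT = "/srv/repos"    # hypothetical mount point

      def run(cmd):
          print("+", " ".join(cmd))
          subprocess.run(cmd, check=True)

      # 1. Turn the SSD partition into a dedicated external journal device.
      run(["mke2fs", "-O", "journal_dev", SSD_JOURNAL])

      # 2. Make the data filesystem and point its journal at the SSD device.
      run(["mke2fs", "-t", "ext4", "-J", "device=" + SSD_JOURNAL, DATA_ARRAY])

      # 3. Mount with atime updates off, as suggested above.
      run(["mount", "-o", "noatime,nodiratime", DATA_ARRAY, MOUNTPOINT])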

  116. Thank you, Praveen Bhamidipati, for contributing $15 to the Help Stamp Out CVS In Your Lifetime fund. Fund now at $2590.

  117. Eric,

    This project you’ve undertaken is no less than a monumental task, which clearly shows not only your brilliance and passion (not that this wasn’t obvious before) but also that you are uniquely able to handle any technical anomaly that future repos will throw at you. Because of this, my suggestion is that you not build a single, massive, repo-crunching machine that you run centrally. Well, do build that system (and remember: pics or it never happened), but I propose that you focus your unique skills and knowledge on maintaining reposurgeon and cvs-fast-export, on squashing the unique anomalies that arise as more repos are processed, and (at least initially) on staging repos for processing…and let us peons (and our computers) do the menial work for you.

    What I propose is that, in addition to your physical repo-crunching machine, we (the community) build a virtual monster: a distributed grid computing system, managed by a simple repo-staging and task scheduler, to which volunteers can connect clients. As described in your cvs-fast-export README, further in the cvs-fast-export HTML docs, and also in your post about “How reposurgeon wins”, there is still a lot of manual work that goes into a conversion, both before and after the crunching, and that work takes a certain skill level. As you explain here, there is also a lot of work that goes into the crunching itself, which takes comparatively little (no) skill but sheer processing power. Further, if you look at the various stages of the process (discovery of repos (www crawlers?), contacting owners, pre-processing committer ids to create user maps, additional repo staging, repo download (or remote) conversion, conversion sanity checking, bug squashing, etc.) there is a lot of opportunity for the community to contribute to this project, from running a small grid client to chunk through repositories to (for more trusted users) helping manually build user mappings, and so on. That leaves you free to focus on the problems the distributed clients find while processing repos, on approving new repos for inclusion, and so forth…and allows us to offer our cycles and energy (skills and compute) for the common good.
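
    Just to make the shape of the thing concrete, a volunteer client could be little more than a loop that asks the scheduler for a repo, converts it, and reports back. In the sketch below the scheduler URL, the task format, and the conversion step are all hypothetical placeholders; it only illustrates the poll/convert/report cycle, not a real protocol:

      #!/usr/bin/env python3
      # Sketch of a volunteer grid client.  The scheduler URL, task format, and
      # conversion step are hypothetical placeholders for illustration only.
      import json
      import subprocess
      import time
      import urllib.request

      SCHEDULER = "https://example.org/mammoth/api"   # hypothetical scheduler

      def fetch_task():
          """Ask the scheduler for the next repo to crunch (None if the queue is empty)."""
          with urllib.request.urlopen(SCHEDULER + "/next-task") as resp:
              task = json.load(resp)
          return task or None

      def convert(task):
          """Placeholder conversion step: fetch the repo, then run the conversion tools."""
          subprocess.run(["rsync", "-az", task["rsync_url"], task["workdir"]], check=True)
          # ...run cvs-fast-export/reposurgeon here and collect whatever needs human review...
          return {"task_id": task["id"], "status": "ok"}

      def report(result):
          """Post the outcome back so anomalies can be queued for a human."""
          data = json.dumps(result).encode("utf-8")
          req = urllib.request.Request(SCHEDULER + "/report", data=data,
                                       headers={"Content-Type": "application/json"})
          urllib.request.urlopen(req)

      if __name__ == "__main__":
          while True:
              task = fetch_task()
              if task is None:
                  time.sleep(600)      # nothing queued; check back later
                  continue
              report(convert(task))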

    I’ve been unsuccessful in finding crawler statistics, but I can guess that there are an astronomical number of CVS and SVN repos out there. Even the projects that pass whatever eligibility threshold you set, on SourceForge alone, are probably more than you could convert by yourself. Let us help.

    In addition to my monetary donation, I offer my services to help design and build an (open source) grid orchestration system and client based on whatever tasking/business process you define. If you’re interested, we (you, me, and any community that materializes around this) can flesh out the details…somewhere public other than the comment section of your blog =).

    Regardless of your interest, good luck! You continue to inspire.

    Steve

    1. >we (community) build a virtual monster: a distributed grid computing system managed by a simple repo staging and task scheduler

      First, thank you very much for the $100 donation; it brings the fund to $2690. I may have to engrave the three-figure contributors’ names on a plaque or something.

      Hm. It’s an interesting idea. Perhaps not even so much for scheduling the compute time as scheduling the human time. Fixing up CVS or Subversion artifacts in a big repository is a lot of work even with reposurgeon in hand.

      Probably the logical first step would be to create a mammoth-hunters mailing list, invite interested parties to sign up, and then see who actually joins.

  118. Thank you, Zig Zichterman, for the $100 donation. And for the supportive words: “A craftsman can see beauty in good tools wielded properly.” There speaks the hacker ethic.

    With Steve Siebert’s previous $100 this brings the fund to $2790. We’re now about $50 over the nominal build cost, which is a nice place to be – prices fluctuate, there might be sales taxes to pay, and even competent people sometimes make mistakes that can be fixed with a bit of extra budget.

  119. @esr Starting a mailing list sounds good to me. Do you have a preferred provider? I could set up a Google Group otherwise…

  120. >I have applied.

    Sorry – didn’t notice you had email notifications turned off on your account. I’ve made you an owner.
