Building the perfect beast

I’ve attempted to summarize the discussion of build options for the repository-surgery machine. You should see a link at the top of the page; if not, it’s here.

I invite all the commenters who have shown an interest to critique these build proposals. Naturally, I’d like to make sure we have a solid parts list with no spec conflicts before we start spending money and time to build this thing.

As the Help Stamp Out CVS In Your Lifetime fund has received $965 and I said I’d match that, even the Xeon proposal is within reach. Though I don’t mind admitting that I wasn’t expecting to have to match quite so much generosity and the thought of spending $900 on the machine makes me swallow a bit hard. If the gentleman who instigated the Xeon proposal is still willing to toss a couple of bitcoins at it, I won’t be too proud to accept.

Plans are not yet final, but John Bell (who started this party with his $100 “Get a real computer, kid!” donation) says he’s eager to do the build at his place in Toledo. Then he’ll haul it out here and we’ll do final installation and system qualification, probably sometime in mid-November.

I’ve already had one nomination for the next CVS mammoth to get speared: Gentoo. I’ve sent an offer but seen no response yet. NetBSD is definitely on my list. I’ll cheerfully accept suggestions of other deserving targets.

Not to forget that I do Subversion repositories too. I’ve actually converted more of those than CVS ones, I think – Battle For Wesnoth, Hercules, Roundup, and Network Utility Tools are the ones that leap to mind.

110 comments

    1. >(Not sure which one of us has the more “smug expression”)

      Not touching that with a ten-foot pole. I hope the build page is clarifying. What unanswered questions does it leave?

  1. Gentoo already has an active project underway for converting their CVS repo to git. I half expect it to hit prod by the end of the year.

    1. >I never figured you for a Don Henley fan…

      Hey, that was a brilliant album, conceptually and lyrically, even if some of the music was a bit MOR for my usual taste.

  2. A bit MOR??! A BIT??!!?!

    Given your tastes, and given just how MOR that album was, it’d be the musical equivalent of Hershey’s Cookies and Cream bars for you, I’d have thought.

  3. I’m closely involved with the Gentoo git migration. I think there might be a place for some outside help here, but we’d probably all want to think about it due to where everything currently stands. Let me start with a quick summary of just where we are now:

    It would be very fair to say that the Gentoo git migration has been languishing for years, though with meaningful bursts of activity along the way.

    We’re actually in the midst of one of those bursts at the moment, and the Gentoo Council just gave a blessing to moving forward with the migration as quickly as the infrastructure can be set up. The plan is as follows.

    On the infra side we now have a set of scripts running on test servers that can take a git repository and make it available for distribution and mirroring, with all the necessary hooks. What remains is to actually transfer it onto Gentoo-owned servers.

    On the processes side we have worked out how we intend to transition to using git, and have done minimal testing of these processes. One challenge with Gentoo is that we will end up having fairly frequent commits to master which are basically instantly deployed, and that means that it may be hard to do a git pull, rebase/merge the commits onto master, test, and do a git push before colliding with another change that prevents a fast-forward from taking place. With cvs all the commits are at the file level which means there aren’t as many collisions, but in truth it means that we never have a consistent view of the tree as a whole. We think that we’ll be able to just have everybody keep pushing into master at our peak commit rate but if that ends up not working there are alternatives.
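
    To make the collision concern concrete, the loop each committer effectively ends up running looks something like the sketch below; the remote and branch names are just the obvious defaults, not our actual tooling:

        # keep retrying until our push fast-forwards; bail out if the rebase fails
        until git push origin master; do
            git pull --rebase origin master || break
        done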

    On the actual migration side the decision has been to have two repositories. Moving forward we’ll drop all history and just migrate the current tree, cleaning up cvs-oriented details (manifests and changelogs in the repository, cvs headers, etc). Separately we’ll have a historical migration to git which can be grafted in using git replace (a rough sketch follows the rationale list below).

    Rationale for this approach:
    1. We can migrate the current tree VERY quickly and with very little risk of issues. That means minimal downtime.
    2. The migration process right now is very good, but it is very hard to make perfect for a number of reasons. Splitting it up means that we can release historical migration version 1 a few hours after the switchover, and anybody can improve on it later.
    3. The full history is fairly large – 700k commits, about 3.3M objects, with the bundles weighing in at about 1.5GB. Splitting it up is convenient.
    4. Git replace makes it far less important to have a single continuous repository.
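
    A rough sketch of the grafting step mentioned above; every identifier here is a placeholder (repository URL, commit SHAs), and newer git versions provide “git replace --graft” as a shortcut for it:

        # fetch the separately-converted history into the fresh repository
        git fetch https://example.org/gentoo-history.git master:refs/heads/history
        # splice it in under the first commit of the shallow "current tree" repo;
        # <first-new-commit> and <history-tip> stand in for real commit SHAs
        git replace --graft <first-new-commit> <history-tip>
        # from here on, git log walks straight through into the old history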

    A few things that make Gentoo challenging:
    1. We have embedded manifests and changelogs in every package directory. The desire is to consolidate commits to these (otherwise we’d have more like 3M commits).
    2. The manifest files contain hashes and gpg signatures. The desire is for these to still be valid as much as possible. There is a manifest per-package, so at least at this level of granularity commits need to be consolidated correctly and files have to be absolutely identical to what they were when the manifest was generated.
    3. We use CVS keywords all over the place, so these need to be re-generated in the way cvs did at the time. That means dropping “Attic/” from the path of now-deleted files among other things, and using the right date format (fortunately we did not shift date formats when cvs did, so it is consistent).
    4. The repository does contain errors, like manifests that weren’t correct at the time, or cvs keywords in patches that weren’t committed with -ko/-kb which are going to cause problems, and we have to live with this (they would have been problems all along).
    5. At points in time the rcs files in the repository were tampered with in ways they shouldn’t have been. For commits in the distant past that could cause issues.

    Right now getting our servers set up for git is the main barrier remaining to overcome, and we generally have a plan for the migration itself. That said, if you think you can do a better job with the historical migration than we have I’d be more than happy to send you a tarball of our cvsroot and an example of our current best git bundle and the current migration scripts. Our migration strategy basically allows for competing versions of history, so in some sense the more the merrier. If you think another project is more in need I won’t be offended.

    All that said, I’m really glad to see you trying to rid the world of cvs. It really needs to be killed with fire, and in the case of cvs keywords, preferably nuclear fire.

    1. >That said, if you think you can do a better job with the historical migration than we have I’d be more than happy to send you a tarball of our cvsroot and an example of our current best git bundle and the current migration scripts

      I think one interesting question, given the migration path you describe, is how fast I can do a historical conversion. If it’s fast enough to inflict only tolerable downtime, then some options you don’t presently have open up. So yes, I’d like a drop of your CVS, but only after the Beast is in place. My present hardware won’t cut it due to limited memory.

  4. Since reliability is so important, I would suggest picking up an extra hard drive that you rsync to nightly, maybe even two extra hard drives that get rsync’d on alternating nights. That is, spend part of your build money on local backup storage.
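
    Something along these lines, run nightly from cron, would do the alternating-night version; the mount points and paths are placeholders:

        # pick a destination drive by odd/even day count, then mirror /home
        day=$(( $(date +%s) / 86400 ))
        if [ $((day % 2)) -eq 0 ]; then dest=/mnt/backup-a; else dest=/mnt/backup-b; fi
        rsync -aHx --delete /home/ "$dest/home/"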

  5. Eric, Jay –

    You gentlemen will LOL to learn –

    In the summer of 1985, while a consultant at AT&T Information Systems in NJ, I was allowed to borrow a Unix PC 7300 to take to my apartment. Its UUCP node name was “pbeast”.

    My stupid little laptop has a hostname of “perfectbeast”. (Try pinging “perfectbeast.uc.utoledo.edu” up to about 17:00 EDT today….)

  6. I hope this isn’t too offtopic, but my curiosity’s running wild.

    >Not to forget that I do Subversion repositories too.

    Is there no use for non-distributed version control anymore?

    I understand your vc.el is intended to provide a unified–and comfortable–Emacs interface to several VCS’s. Ten years ago, somebody wrote an interesting post on it and RCS. But if Git is the only sensible choice nowadays, one might as well ignore vc.el–with all due respect–and adopt Magit, which seems fairly popular.

    Oh, and another thing: this new link at the top of the page just made me realize you could also link to “How to get banned from my blog”. I think that post is relevant to newcomers.

    1. >Is there no use for non-distributed version control anymore?

      Sometimes there is. But even if you design your git workflow to be Subversion-like (everyone pushing to one central repo) git has one massive practical advantage: it’s fast.
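
      For anyone who hasn’t set one up, the Subversion-like arrangement is just a bare repository that everyone clones from and pushes to; the host and paths below are examples only:

          git init --bare /srv/git/project.git        # the single shared repository
          git clone ssh://server/srv/git/project.git  # each developer clones it
          # day-to-day loop: commit locally, sync, publish
          git commit -a -m "change something"
          git pull --rebase origin master && git push origin master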

  7. Hello ESR! Big fans over at teksyndicate.com. I’m about to send a tweet out requesting support of your machine. If the build falls through with John Bell, I’ll be glad to do what I can to help you out.

    At the very least, I can arrange for you to get an Asus Z10 Dual Socket 2011v3 workstation board if you use it in your build. This has a ton more ram slots, and you can run two E5-26xx v3 series Xeons (easier to get, a little more expensive).

    This board DOES require DDR4 memory, however. The X99 board you’ve listed also requires DDR4, not the DDR3 in your current parts list. You may also need a beefier power supply — I may be able to help out there as well.

    If things fall through with John Bell, we’d be happy to assist with or do the build, as we’ve done many high-end builds like this for our site and youtube channel. We would like to cover the build on our channel if that’s of interest. Email me.

    1. >Your basement could use better heat in the winter, regardless.

      Dave, that term you’re fond of – “success disaster”?

      You may be inducing one here.

  8. I don’t actually like how git-svn works. Depending on how you set it up, you get a completely different repo history.

    Given Git’s ability to make any tree the root of a commit object, and its ability to guess the history of files, I think SVN repos should be converted in two stages:

    i) The entire repo is converted as the entire SVN file system (the one it uses to store your history in the repo)
    ii) You then form branch histories by “threading” the inodes of the SVN file system with commit objects marking a succession of trees in a progression of revisions.

    1. >I don’t actually like how git-svn works. Depending on how you set it up, you get a completely different repo history.

      Your misgivings are sound. git-svn is pretty broken. Usually nobody notices because it generates correct head states, but there is often mischief further back in the history.

      >I think SVN repos should be converted in two stages

      Or you can do it the way I did – write a parser for SVN dumpfiles and code that does intelligent translation to gitspace. It was a stone bitch to debug – took me two years – but I believe reposurgeon does a better lifting job than anything else.
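
      For the record, driving reposurgeon over a Subversion dump looks roughly like this; take the exact command strings as a sketch, since they vary between reposurgeon versions:

          svnadmin dump /srv/svn/project > project.svn
          reposurgeon "read <project.svn" "prefer git" "rebuild project-git"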

  9. Better you than me! But, look:

    You spent a very long time making cvs export to git work as well and as fast as you could make it work, to where only new hardware could tackle the biggest jobs left on the planet. A metric ton of people have benefited from all that work… and the Internet as a whole owes you big.

    I’m very mad at timothy’s comment on my post on slashdot: you’d need a “not drive esr bankrupt by having to match” counter-fund now…

  10. I know the solution is heading in the direction of more hardware, but…

    The more I understand the problem, particularly reading the new merge.c comments, the more I think the DAG doesn’t need to be kept in RAM. If the metadata were written to a file and sort(1) used to sort it in time order, the git change sets could be built fairly easily, in overall time approximately O(n log n). No, this isn’t a complete solution; you still need to feed generate.c.
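
    The shape of the idea, with both helpers being purely hypothetical stand-ins for the metadata dump and the changeset builder:

        # one line per CVS file delta, keyed by commit timestamp, sorted on disk
        # so the DAG never has to be resident in RAM
        extract_deltas "$CVSROOT" | sort -k1,1n -S 2G -T /var/tmp | coalesce_changesets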

    Are there enough monster CVS repositories to redesign cvs-fast-export?

    1. >The more I understand the problem, particularly reading the new merge.c comments, the more I think the DAG doesn’t need to be kept in RAM

      In principle I’m sure you’re correct. However I fear the performance hit.

      >Are there enough monster CVS repositories to redesign cvs-fast-export?

      We already have evidence that NetBSD – which is about the worst case I expect – requires a working set of about 18GB for conversion. That tells me that we do not need a drastic redesign – going to 64GB of RAM (quite feasible) would handle repos 4 times that size.

  11. > I shall do so.

    Totally OT, but I gotta ask: was this “shall” deliberately ironic? I skidded to a halt when I saw it, because it doesn’t *look* ironic, yet felt totally out of place for a US English speaker to use in the present context and company.

    Having said that, I haven’t spent much time in the Eastern US, and I’m ready to be schooled on the usage norms there.

    1. >yet felt totally out of place for a US English speaker to use in the present context and company.

      Your confusion is understandable. I lived in London for some time at a formative age; my idiolect therefore includes some Britishisms that are unusual or marked in the U.S.

  12. “I shall” is pedantically correct English, and Eric’s written English is nothing if not pedantically correct. (Except where it’s hackish instead.)

    FWIW, Eric’s basement seems adequately heated to me. I didn’t have to burn a single book to keep warm.

    The project I’m currently involved in uses a SVN-like workflow (with Mercurial, but there’s no significant difference here): one master repository that everyone commits to. However, it also keeps several others around, pulled from and pushed to the master as needed. Individual developers also clone off copies for local projects that are intended to get merged back in, in the same way that people used to use SVN branches; the advantage is that the final result is cleaner.

  13. I just added another 50 to the fund. Please feel free to spend it without matching.
    Jim

  14. Totally on topic: esr, have you considered using the money to rent a cloud service to do this processing? After all, once the dev work is done, these are one-offs. Then your expensive high-performance rig sits idle.

    You could, for example, rent an Amazon EC2 instance: https://aws.amazon.com/ec2/pricing/
    Since you only pay when you use it, the cost is very low, and even the non-matched slush fund would go a long way.

    1. >Totally on topic: esr, have you considered using the money to rent a cloud service to do this processing?

      Yes. I’ve answered this question twice; look upthread.

  15. > Totally on topic: esr, have you considered using the money to rent a cloud service to do this processing? After all, once the dev work is done, these are one-offs. Then your expensive high-performance rig sits idle.

    Evidence suggests that once he finishes up this task, in a few months or so, he’ll tackle some other monumental job that nobody else wants to do. If this program were easier to parallelize, then splitting it over half a dozen cloud instances would be a good way to go simply to have a faster feedback loop during development. Alas, he’s already proved that to be pretty difficult.

    1. >Evidence suggests that once he finishes up this task, in a few months or so, he’ll tackle some other monumental job that nobody else wants to do.

      It’s happened often enough before…

    1. >For how long will donations be accepted?

      I didn’t have a plan. I thought I would be lucky if $500 came in; now the Beast is just about fully funded.

      I can always use donations. There’s test hardware for GPSD to buy (and damned expensive some of it is, too, once you get out of consumer-grade GPS mice). So I’m not going to turn off the blogbutton or anything, but once the Beast is built donations might go for a survey-grade GPS, or a pro-quality DGPS receiver, or a replacement for my dead second flatscreen, or (very occasionally) a nice dinner for my amazingly tolerant wife.

      1. At least $250 came in while I was at kung fu class. My bogglement increases by the hour

        Yeah, I think we’ll be adding that $22 aftermarket super-quiet case fan. And reporting on the Beast’s noise output – I’ve got a dB meter around here somewhere.

  16. > If this program were easier to parallelize, then splitting it over half a dozen cloud instances would be a good way to go simply to have a faster feedback loop during development. Alas, he’s already proved that to be pretty difficult.

    Ignore parallelization. For $0.56/hr, esr could rent a *single* EC2 instance that has more memory and CPU cores than the highest-end Great Beast proposal. The current fund would be enough for thousands of hours of this and probably still have enough left over for a decent (but not uber) new desktop.

  17. Cloud computing has a couple of faults, notably the inability to build and run your own kernels and profile them and your applications. You can’t be sure if it’s your workload that is causing any given problems you have. It is much better to do the science on bare hardware.

  18. Dave, EC2 supports custom kernels, though they obviously need to be xen-enabled. I have no idea why that would matter here, though – this isn’t some kind of realtime profiling project where such things would affect the “science” of the project.

    Jsn, while you can cheaply get a lot of RAM on EC2, and a lot of cores, and Amazon will say that the cores are running on particular hardware, the reality is that the single-thread performance of those cores is FAR lower than on real hardware. I probably would still go with the cloud for this project: it might take longer, but it would get the job done, and for $50 rather than $2000. I will say that if doing a conversion as fast as possible is the overriding concern, to the point where paying a 10000% cost premium is worth it for maybe a 100% performance gain, then buying real hardware makes sense. However, I wouldn’t be buying multi-chip, multi-core Xeon systems here. Either the workload is overwhelmingly single-threaded or it isn’t. If it parallelizes well, then a 32-core EC2 system will do the trick for tenths of pennies on the dollar, and if it doesn’t parallelize well, then I have no idea why you’d want to buy all those cores and pay a premium on the hardware that hosts them.

    RAM is a different story. If you need a lot of it then you might need an expensive motherboard, but RAM is something you can easily get a lot of on EC2 as well. If it were my money I’d at least benchmark EC2 to see if it met my needs. You can get a spot instance of their biggest systems for silly-cheap prices like 25 cents/hr. Go ahead and spend a buck or two and see if it does the job before building a custom rig. :)

  19. > Yes. I’ve answered this question twice; look upthread.

    Found it, thanks. Shame on me for not searching.

  20. “At least $250 came in while I was at kung fu class. My bogglement increases by the hour”

    Think how much you could pull in if you go ‘pro’….teeny-tiny plaques on each key of the keyboard memorializing your donors, $100…for the wealthy, a sign renaming gpsd to ‘The Dr. and Mrs. Finkelstein Memorial Software Library’….

    ….Hey, the colleges all do it….

  21. Let’s see. $100 from Wendell Wilson consulting, $50 from Jim Hurlburt, $100 from Harry Evans, $75 from Dan Oglesby. That brings the fund to $2515. This is maybe more money than the Great Beast build can use. I’m feeling a bit dizzy, and very grateful.

    I get the impression from some of the donation messages that some people are donating to support my service work in general, now that there is some social momentum behind “give ESR money to do a cool thing”.

    If that’s true, the most useful thing such people could do would be to set up a small weekly subvention on Gratipay – button off to the right of the page. Even a steady $5 a week from enough patrons would add up to enough for my projects to have an actual budget. And guarantee that the Great Beast could be properly maintained after it’s built.

  22. My gratitude to Harold Tolley for contributing $50. Brings the Help Stamp Out CVS In Your Lifetime fund to $2565.

  23. My intention by offering a couple of bitcoins was to remove cost as an objection to upgrading out of the consumer market and into ECC support. The flood of donations has rendered that moot, so I’d like to suggest that you get more memory than you expect to need: 64 instead of 32.

    For memory, I suggest you use 16 GB DIMMs. You can start with 2 or 4 filled today, and retain the option of filling the rest later.
    CT16G4RFD4213
    CT2K16G4RFD4213
    CT4K16G4RFD4213

    Amusingly enough, on newegg right now, single 16 GB sticks are cheaper than pairs or sets of 4, not counting shipping. With shipping included, the pairs are cheaper (by a couple of dollars). 2x2x16 GB = 64GB can be had for $870, or $230 over what you have listed for 32 GB.

    Also, the decision between Supermicro and ASRock was initially settled by the sound card. In light of the new budget, a PCIe sound card is somewhere between 1% and 3% of total. Meanwhile, the Supermicro board is a server board and has server features, such as ILOM built in. Would remote access (power control, KVM, etc) be useful to you?

    (FYI, ILOM is a security nightmare, so you’d have to let yourself in through your firewall or router to use those things, but that isn’t a high bar these days.)

    The M.2 drive is another factor. The Supermicro board doesn’t have that slot. On the other hand, I’m not sure how realistic it is to expect to saturate a SATA3 port for long stretches, and if you can’t do that, a faster lane isn’t going to help much.

    If you, or someone you trust, runs bitcoin, give me an address and I’ll send them over. Or, I can try to get newegg to ship a couple of parts directly to you (or the build site). Or, email me to work something else out.

    1. >The flood of donations has rendered that moot

      Yes. Yes, it has.

      >I’d like to suggest that you get more memory than you expect to need: 64 instead of 32.

      My thoughts were tending in that direction, indeed.

      >Would remote access (power control, KVM, etc) be useful to you?

      Not very.

      >If you, or someone you trust, runs bitcoin, give me an address and I’ll send them over. Or, I can try to get newegg to ship a couple of parts directly to you (or the build site). Or, email me to work something else out.

      Thanks, there’s enough money in the kitty that I don’t feel any need to create even minor logistical complications for John Bell, who’s doing the build. Your willingness to support the effort with a lot of dollars-equivalent is noted and appreciated.

  24. More RAM is always better. (Plus or minus memory bandwidth considerations, but really now…is that that significant?)

  25. Forget the PCIe sound card. Decent USB sticks can be had for less than $10, and one of the ones I have lying around has a combo 3.5mm analog/mini-SPDIF output. Audio’s just so cheap these days. When I Get Around To It, I even have a design-stage idea to spec out an embedded PoE-powered RTP-streaming audio player with a complete BoM around $25 in quantity.

  26. For such a noble purpose as this, if you were to prominently mention what CPUs you’re using, you could probably get Intel (or AMD) to donate some of their top-of-the-line beasts (note: I’m an Intel employee, but I’m not speaking on behalf of my employer, this is mere speculation).

    With sponsored CPUs you could spend even more money on RAM. :)

  27. @John D. Bell:”In the summer of 1985, while a consultant at AT&T Information Systems in NJ, I was allowed to borrow a Unix PC 7300 to take to my apartment. Its UUCP node name was “pbeast”.”

    I still own an AT&T 3B1, big brother to the PC7300. It still boots, too.

    Ah, the days when ihnp4 was a UUCP backbone site.

    1. >ESR, I’m confused.

      I don’t see why. The release has shipped, now we can migrate. Should happen sometime in the next two weeks.

  28. jsn> Totally on topic: esr, have you considered using the money to rent a cloud service to do this processing?

    ESR> Yes. I’ve answered this question twice; look upthread.

    So you have, Eric. But as an armchair psychoanalyst of hackers from the outside, I suspect you answered the question on the wrong level. A more appropriate approach to answering it might be to observe that shiny new hardware is a fetish of the hacker tribe, and that communal drooling over such hardware is one of its sacred rituals. Jsn and his precursors might just as well have asked a community of recreational hunters why they don’t just buy a steak at WalMart.

    1. >shiny new hardware is a fetish of the hacker tribe,

      Since you brought it up…

      I think this is less universally true than even hackers themselves assume. I mean, John Bell laughed at my weak-sauce Core 2 Duo, but neither he nor anybody else thought the tendency to hold on to hardware and squeeze the last smidgen of use out of it was funny or remarkable.

      I think there are opposing drives at war within the hacker soul: “chase the shiny” vs. “relentlessly minimax your resource use”. Either can lead to overinvestment – I had to train myself out of being a hardware packrat and actually kick old hardware out of the house. I don’t think I’m alone in this – you saw the nods of agreement when I muttered about being a cheap bastard.

  29. PS: This is not to say that the answer you gave on the engineering level was wrong.

  30. >I don’t see why. The release has shipped, now we can migrate. Should happen sometime in the next two weeks.

    (Facepalm)

    Of course. I wrongly assumed that, by “after”, you meant “immediately after”. Just another case of sloppy reading/thinking on my part.

  31. On the subject of “overfunding” and needing new monitor(s): MonoPrice is selling 2560×1440 panels for about $400 these days.

    More Pixels Is Better Pixels…

    1. >On the subject of “overfunding” and needing new monitor(s): MonoPrice is selling 2560×1440 panels for about $400 these days.

      There are times when I am too freaking virtuous for my own damn good. I saw this just after deciding that the right use for the extra $600 or so would be to double the amount of memory in the Xeon build to 64GB. Fund now basically at parity with the build cost.

      UPDATE: No, the adjusted build cost is $2736 against $2590 in the fund, so it’s $146 short. This is not me complaining.

  32. > I think there are opposing drives at war within the hacker soul: “chase the shiny” vs. “relentlessly minimax your resource use”

    While I’m not a “cheap bastard”, I am resource constrained, so nearly all of my PCs were bought from my clients as they retired old PCs. (Though some clients are better at keeping up with newer PCs than others. For example, my PC at my current client is a Core 2 Duo (2 GHz), while my personal PC is a Core i5 quad (3 GHz) with hyperthreading that I bought a year ago from the client I had at the time.)

  33. I can’t remember the last time I spent more than $300 on a PC update, and I max out the CPUs on them all the time. You can get a lot of system for not much money if you are careful (and that $300 does not count the fact that I recycle parts heavily).

    But, if people want to spend $2k on a rapidly-depreciating asset more power to them. :) I’d rather spend 1/4th the budget 4x as often and let Moore’s Law put me ahead of the game 3/4ths of the time.

  34. >I think there are opposing drives at war within the hacker soul: “chase the shiny” vs. “relentlessly minimax your resource use”.

    Indeed. I owe much of my early technical skill development to having to find ways to make do with less than ideal hardware. Now I can afford better, but there’s still a sense of pride in making do. (There’s probably a variant of this line of thinking that compels hackers to build clusters of Raspberry Pis.)

    So every ~4 years, I concede that I need a new main box, and then some bit flips in my head and I splurge like mad. I don’t know if it’s this way for everyone, but I mark time by machines. I can’t tell you what I was doing at age 24, but I remember exactly what projects I was working on when I built that 700 MHz AMD Athlon, or the 200 MHz Pentium Pro before it….

    1. >So every ~4 years, I concede that I need a new main box

      That’s interesting. I think I upgrade on about the same cycle. (This last one ran a bit long.)

  35. >But, if people want to spend $2k on a rapidly-depreciating asset more power to them. :) I’d rather spend 1/4th the budget 4x as often and let Moore’s Law put me ahead of the game 3/4ths of the time.

    Moore’s law is quickly coming to a close, and it takes a good deal longer now for hardware to become obsolete in the home desktop/server space. My home Linux box is happily coasting along on 6 year old hardware, for instance.

    Eric, though, is running hardware from before Moore’s Law flattened out quite so much, and an upgrade makes sense for him.

    1. >Eric, though, is running hardware from before Moore’s Law flattened out quite so much, and an upgrade makes sense for him.

      Wellll…it’s not so much that Moore’s law is coming to an end, it’s just that processor speeds are hitting a range where (as Jakub Narebski pointed out a couple posts ago) quantum effects are proving a non-negligible barrier to upping their speed-power product. We’ll still collect Moore’s Law benefits on (for example) SSDs.

      But the processor-speed stallout does imply that planning for longer system lifetimes than were normal during the fast ramp-up makes sense. The Great Beast might well be a reasonable rig for six or seven years out if I design it with components that won’t crap out prematurely; this is one reason designing for reliability is so high up on my list of priorities.

  36. >>Moore’s law is quickly coming to a close, and it takes a good deal longer now for hardware to become obsolete in the home desktop/server space.

    Not so sure about Moore’s law but we do seem to be approaching a saturation point on desktop computers similar to what happened to hand held calculators twenty years or so ago.
    For a very long time (1970 to somewhere around 1990), calculator power doubled and cost halved two or three times a year. At about the time that the hp 48 came out, calculators reached a point where more capabilities had decreasing value. I bought the hp 48G somewhere around ’95 and a 48GX several years later because I was taking a class in linear algebra and there was one thing that the G couldn’t do.

    I gave the G to my daughter for a time, I now have both of them, use them daily, and will feel sad when they die some decade in the future. At this point I may die first and it won’t be a problem.

    Beyond a certain point, except for gamers, big-time number crunchers, and gnarly CVS repositories, more isn’t necessarily worse, but it isn’t much better either.

    I will confess to being tempted to replace my 2 GHz, 4 GB RAM box with one like hedgemadge built, though — sometimes shiny does tempt.
    Jim

  37. I don’t think the attitudes towards machine performance are strictly a hacker thing. Some people really enjoy owning shiny, new, high-performance cars and trade the old one in every couple of years. Others (like me) just want something comfortable to take them from point A to point B when needed. We just buy one new, or nearly new, and drive them until they are junk – then repeat the cycle. The machine is just an appliance, a means to an end. We use the money we save for other purposes.

    I’m typing this on a Compaq small footprint desktop bought years ago. Works OK, but I’m seriously thinking of taking DMcCunney’s advice and picking up one of those Dells at the local Micro Center. It ought to be good for the next five years; it will certainly be more than adequate for what I’ll do with it.

  38. 3840×2160 monitors have come down in price recently. I just bought this one last week when it was on sale for $500:

    http://www.amazon.com/gp/aw/d/B00LJVMOEY/ref=redir_mdp_mobile?pc_redir=1413935420&ref_=pe_385040_121528360_TE_dp_1

    I just hooked it up a few minutes ago. I’ll post a review after I’ve been using it for a few days.

    The drawback to all 4K monitors in this price range is the 30Hz refresh rate, but it’s not as big a deal as it used to be with CRTs. The only thing I’ve noticed so far is that cursor movement looks slightly strange; I think I’ll get used to it. YouTube videos look fine. I haven’t tried twitch gaming yet but that probably won’t be so great.

  39. me> shiny new hardware is a fetish of the hacker tribe

    esr> I think this is less universally true than even hackers themselves assume.

    That’s an interesting insight, thanks. (And thanks for confirming it,
    RonW and Casey Barker.)

    In retrospect, I should have known better. I used to read Alan Cox’s blog religiously while he was maintaining the 2.2 branch of the Linux kernel. (They weren’t even widely called “blogs” then.) Judging by some pictures he posted, his office was packed with computers, all at least five years behind the curve, “because that makes it easier for the hardware to fail so I can test if failures trigger the right kernel errors.” (This quote is from memory.) I wrote off Alan’s stance as an anomaly at the time, which I probably shouldn’t have. But then again, when has an armchair psychologist ever let evidence interfere with a pet theory?

    1. >I wrote off Alan’s stance as an anomaly at the time

      I confirm that it is not. Any hacker would make a probably correct guess about the function of that wall of hardware on first seeing it. A crucial clue would be that (doubtless) it looks like it’s in use rather than merely gathering dust; the deduction to “probably a test lab” would follow instantly.

  40. “I had to train myself out of being a hardware packrat and actually kick old hardware out of the house”

    /me seems to remember kicking esr into kicking the old hardware out of the house. And it was more like a long slow lob (weeks to get past the front door entirely)….

    I guess I have a very different take on hardware than some – every time I find a way to save a few seconds of time on something every day, I’m willing to spend the time upgrading or changing something (sometimes with total disregard to the actual amount of time to be saved by me, personally. I’ll never get back the 4 years I’ve spent on fixing bufferbloat).

    So I’ll spend money on the best (and quietest) hardware, the best keyboard, the fastest network, etc, to get those extra seconds back in my life, when I have none to spare.

    I remember a time when I didn’t feel so rushed: when a kernel compile took 3 days, I spent a lot more time playing piano and in the hot tub.

    Now, it was incidentally my hope that the funding level for this got eric into the realm of 2-4 socket, 10-16 cores each hardware with 45MB of cache and 256GB of RAM… because I’m pretty sure that he’ll find some way to use up all those cycles intelligently, on some workload, one day.

    The larger argument (to make with managers) is that if buying item X makes a programmer 1% more productive, any capex spent for that pays for itself in weeks.

    Making eric more productive has been a goal of mine since first encountering his old 20 inch CRT.

    All that said, *my* desktop is a tiny little nuc, and I have 8 monster machines in the cloud for everything else – (and I’ve longed to hack kernels on them) – and dozens of teeny little machines floating about….

  41. > I think there are opposing drives at war within the hacker soul: “chase the shiny” vs. “relentlessly minimax your resource use”. Either can lead to overinvestment – I had to train myself out of being a hardware packrat and actually kick old hardware out of the house. I don’t think I’m alone in this – you saw the nods of agreement when I muttered about being a cheap bastard.

    Eric, it sounds like this is an area of hacker culture you haven’t consciously explored before. Perhaps it would make sense to dig into hackerdom’s approach to their tools and toys for the sociological enlightenment you may gain. With you in the middle of an upgrade, and the topic being of interest to many around you, now might be a good time. There are a number of interesting questions to tackle; in particular, how hackers may differ in their approach to a smartphone, a laptop, and a desktop beast / server rack monster. Consider the age of the hacker, the socioeconomic rung, area of expertise/interest, etc. What characteristics cross those categories, and which don’t seem to? When a craftsman chooses his tools, what does the resulting toolchest say about him? And what does the generosity of fellow hackers say about the hacker culture?

    (I also wonder if you approached some hardware vendors if their marketing departments might not be willing to sponsor your time on researching how hackers think about their machines and how to appeal to them with their products. I wouldn’t know how to begin to go about that, however.)

    1. >There are a number of interesting questions to tackle; in particular, how hackers may differ in their approach to a smartphone, a laptop, and a desktop beast / server rack monster

      The pattern I’d expect is rather rational minimaxing with respect to opportunity cost – that is, most chase-the-shiny with respect to smartphones, least with respect to server-rack monsters.

      >When a craftsman chooses his tools, what does the resulting toolchest say about him?

      A hacker’s toolchest will tell you things about him. What kinds of problems he likes is obvious, but subtler things too. Like, is he adventurous or risk-averse?

      >And what does the generosity of fellow hackers say about the hacker culture?

      I think you can easily figure that one out yourself.

  42. @LS: “I’m typing this on a Compaq small footprint desktop bought years ago. Works OK, but I’m seriously thinking of taking DMcCunney’s advice and picking up one of those Dells at the local Micro Center. It ought to be good for the next five years; it will certainly be more than adequate for what I’ll do with it.”

    “more than adequate for what I’ll do with it.” was the key.

    I needed to upgrade, as my main system was becoming unusable, and I was limping along on a netbook. My SO, who keeps an eagle eye on the budget, said “You need a new machine. You have $500.” By happy coincidence, Micro Center had sent me a deals email listing a refurb Dell Optiplex 755 Small Form Factor box with 2.4 ghz quad-core Xeon CPU, 4GB RAM, and 250GB SATA drive running Win7 Pro for $250. That looked like a reasonable place to start. Four more GB RAM, a 240 GB SSD, and a 1GB AMD/ATI PCI-e video card got me what I have now, for a total cost $50 over budget, and I’m still smiling over the results.

    There were a few speed bumps resulting from the “Small Form Factor” design, like how to use both the SSD *and* the SATA HD (give up the internal DVD and use its SATA connector on the mobo for one of them), and getting the machine to *boot* from the SSD, because the BIOS assumed a single SATA drive and provided no option to boot from a second (the answer to that was a freeware utility that modified the Win7 boot menu.) But they were minor issues all told, and the price was right.

    My needs were modest. I’m *not* a heavy gamer who gets a video card with faster processor and more RAM than the machine it’s installed in. To the extent I play games, it’s things like Nethack. I don’t keep enormous amounts of data on the machine, so I don’t need TBs of disk. Nor do I do the sort of heavy development where I spend too much time twiddling my thumbs waiting for a build from a source tree to complete. The system I have is low end as current boxes go, but entirely adequate for my normal usage.

    I wanted a system that booted fast, and loaded and ran programs fast. It spends most of its time in Firefox, so a fast enough CPU and enough RAM to run FF with multiple tabs open was a requirement. I also wanted a machine I could set to multi-boot, and run Win 7 and Ubuntu.

    I got what I wanted, and it does what I need it to do.

    But then, I’ve historically lagged behind the curve on hardware, and part of my fun has always been tweaking to get that last extra bit of performance out of the hardware I have *without* throwing money at it.

    It wouldn’t be adequate for the sort of thing Eric wants to do, but I wouldn’t try to do that on it in the first place. I don’t ask a desktop to do a server’s job, and I think Eric’s effort to build a machine suited for the task is the way to go.
    ______
    Dennis

  43. I fall into the category Eric described just above…with one exception: when I was flush with cash, I spent over $5K on a top-of-the-line dual-quad Mac Pro. At the time, i was still doing heavy Hercules development, and Hercules will run your system flat out for as many cores as you can throw at it.

    But when I needed a faster machine to run Second Life on, I fit a Linux box into $500, and when that $5K Mac Pro got obsoleted by OS X 10.8, I spent $1100 on a used two-generation-old dual-quad Mac Pro with a bent-up case and another $120 on upgrading the CPUs.

    Hackers will spend money on shiny on the desktop, too, if there’s a justification for it – and it needs to be a real one, not just “well, I think this will run faster”.

    1. >In case you’re looking to order some or all of the parts from Newegg

      Alas, John Bell is not ready to start the build yet. He’s on the road and semi-out-of-contact and will be for several days yet. Let’s hope it’s still running when we’re ready.

  44. > code that does intelligent translation to gitspace

    I think we agree here; I’d envisaged doing an initial “dumb” translation to gitspace (forming a tree that was the root of the SVN repo FS for each commit), and then doing a smart transform on that repository that formed the real commit graph by attaching commit objects to the subtrees afterwards.

    Git being good at detecting similarity, it should be possible to follow the actual history of the trees better than SVN and its users do (I’ve seen so many histories cruelly broken by automated IDE features when people delete whole trees and replace them with their super-special patched ones to commit – Git just takes this in its stride and even fixes this problem when converting SVN repositories with git-svn).

    Of course… my dayjob didn’t permit me the time… and now that I’m leaving it, I don’t have the motivation of a load of old repositories. I’m glad that you have both time and motivation … I’ll definitely be checking out reposurgeon if I ever run into a large cache of SVN repositories again.
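
    The “threading” step could be done with plain git plumbing, roughly as below; the SHAs are placeholders, and this ignores the author/date metadata that would really come from the SVN revision properties:

        # wrap the tree corresponding to one SVN revision in a commit and
        # chain it onto the previously synthesized commit
        new=$(git commit-tree <subtree-sha> -p <prev-commit-sha> -m "svn r1234")
        git update-ref refs/heads/trunk "$new"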

    > git-svn is pretty broken

    From my POV, the worst thing about it is that it touts interoperability with SVN but at the cost of having a broken Git workflow. Pushing commits back to SVN rewrites the branch you just pushed (because it writes metadata to the Git commit comments) which pretty much means you all keep your own clone and interoperate only via SVN.

    In contrast bzr-svn does it much better; it writes its metadata to the SVN repo using commit properties, so other users can come along and pull branches with Bazaar and interoperate seamlessly either with Bazaar or SVN; they can do things like mail each other revision graph patches, push and merge each other’s branches, and generally not worry too much about things. They still have to avoid a few behaviours, but there isn’t that obligate rewriting of revisions when you transfer them back to SVN.

    For a long while I preferred working with SVN repos via Bazaar for that reason (and because git-svn runs like a dead donkey nailed to the floor with a railway spike on Windows)… until I ran into one of those epic history fractures which Git fixes so nicely. And then I got addicted to the speed and power of Git and that was that.

  45. >In contrast bzr-svn does it much better ; it writes it’s metadata to the SVN repo using commit properties

    bzr has its own commit properties, while, AFAIR, Git does not. So commit properties in SVN, bzr and any other VCS get put in tags.

    BTW, I am considering delving into Python to see if I can add support to reposurgeon for putting non-commit properties into tags.

    1. >BTW, I am considering delving into Python to see if I can add support to reposurgeon for putting non-commit properties into tags.

      I’m not quite sure what you mean by that, but reposurgeon’s support for the bzr commit-property extension might be relevant.

  46. @Jay Maynard: “Hackers will spend money on shiny on the desktop, too, if there’s a justification for it – and it needs to be a real one, not just “well, I think this will run faster”.”

    And the question is run *what* faster.

    The Micro Center near me has a section devoted to DIY, with a selection of cases, mobos, CPUs, video cards, and other components for the folks looking to build from scratch. Most of those seem to be heavy duty gamers, concerned with the best performance in graphically intense games, and getting that extra few FPS to aid play. There are impassioned debates in various forums over the virtues of the hardware used to do it.

    That’s not the shiny I buy, but I understand the motive, and those folks are heavy into benchmarking to *prove* a particular solution runs faster. What they do isn’t what Eric is doing, but they’d totally get the concept of custom building a system, and spending time on the design and specification stage to make sure the goals are clearly defined and the result will achieve them.
    ______
    Dennis

    1. >The Micro Center near me has a section devoted to DIY

      Those gamer components have uses in workhorse systems. Ignore the bling (cases with go-faster stripes, superfluous LEDs, etc.); the value is in the aftermarket power-management, noise reduction, and cooler parts. Because they’re built to handle overclocking, you can put them in a design meant to run at rated speeds and collect the margin in higher reliability and a longer service life.

      This is why the Great Beast will include a gamer PSU and cooling fans.

  47. >I’m not quite sure what you mean by that, but reposurgeon’s support for the bzr commit-property extension might be relevant.

    I saw that mentioned in the reposurgeon docs.

    The docs also say that SVN commit properties (at least for directory copies that create branches and property-change-only commits) get put in tags. Later they say that other SVN properties are ignored, with a comment that this could be a problem for the svn:eol-style property.

    So, I am thinking about trying to add saving of other SVN properties into tags as well.

  48. @esr: “This is why the Great Beast will include a gamer PSU and cooling fans.”

    Yep. Historically over the decades, the failure point in my systems has been the PSU. PCs are commodities, bought on price, and PSUs are a place where manufacturers save pennies, and it shows.

    My current system has the stock PSU, fans, et al, because nothing I do will stress it and I didn’t need to go third-party on the components. If I were building a new box from scratch, I would, and I’d benefit from the gamer driven hardware.

  49. It isn’t just the PSU, though that is a big thing that name-brands cheap out on. It really is anything that doesn’t translate into a number that goes on the display sign next to the price. People know more MHz costs more $$$, and more GB costs more $$$, and more TB costs more $$$. They might even realize the difference between a hard drive and an SSD, but they probably don’t think to check the RPMs for a spinning disk, let alone its cache and so on.

    PSU is just a big example of a part that you can cheap out on without anybody noticing. Big brands were notorious for finding the one 150W $10 PSU that actually worked with the specific $30 motherboard they were using and ordering a million of each, and then if you tried to replace either you’d be hosed if you just paid attention to the rating on the sticker. I think at one point Dell even put out a model that used an ATX connector that didn’t even use ATX wiring.

    There is a big difference between the typical $80 500W PSU and the typical $20 500W PSU.

    The last time I checked, anandtech wasn’t doing motherboard benchmarks, but the last time I saw one you could get a 20% difference in performance across various models with all other components being the same (I’m not talking Intel vs AMD here – I’m talking sometimes different boards with the same chipset). I imagine that if you actually ran them all to failure you’d see an even bigger difference, though that is much harder to test in a manner that is relevant for purchasing decisions.

    I tend to build half-decent systems at low-cost (usually means I end up going with AMD since they tend to cover that pricing space better than Intel). I build them so that I have control over the components, and for the most part a decent case, PSU, drives, etc are much longer-term investments than a CPU+MB+RAM.

  50. @Rich Freeman: “I tend to build half-decent systems at low-cost (usually means I end up going with AMD since they tend to cover that pricing space better than Intel). I build them so that I have control over the components, and for the most part a decent case, PSU, drives, etc are much longer-term investments than a CPU+MB+RAM.”

    Yep again. I’ve been following the consumer electronics industry for years, and watching the fun as everyone tried to cut costs to be able to offer cheaper prices. For instance, I’ve lost count of the number of drive manufacturers that bit the dust as consolidation occurred to get economies of scale. (And I recall when Seagate bought CDC’s OEM drive operations, and people kept charts of which Seagate models were actually CDC made because they were more reliable.)

    On the system my current box replaced, the mid-tower case is the only remaining original part.

    I’ve been perfectly happy to buy AMD instead of Intel, because I don’t do the sort of things where Intel is a clear winner. I’d have been happier if there were more consistency in CPU form factors and mounting, so you didn’t have to replace mobo *and* CPU because a new one of either wouldn’t work with the existing part, but it generally wasn’t a choice. An upgrade required both. Deal with it.
    ______
    Dennis

  51. @Adrian:
    > From my POV, the worst thing about it is that it touts interoperability with SVN but at the cost of having a broken Git workflow. Pushing commits back to SVN rewrites the branch you just pushed (because it writes metadata to the Git commit comments) which pretty much means you all keep your own clone and interoperate only via SVN.

    @RonW:
    > bzr has its own commit properties, while, AFAIR, Git does not. So commit properties in SVN. bzr and any other VCS get put in tags.
    >
    > BTW, I am considering delving into Python to see if I can add support to reposurgeon for putting non-commit properties into tags.

    Modern Git has *notes* which you can add (or rather attach) post-creation to commits. They would be better than tags to put properties into (can be attached to other types of objects IIRC).

    Alas, the original author of git-svn stopped working on it because he doesn’t use Subversion anymore, and there is no dedicated developer. Also, from what I have heard Subversion bindings API are… well, hard to work with.
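
    A quick illustration of the notes idea; the ref name and property are just examples, and <commit-sha> is a placeholder:

        # attach an SVN property to an existing commit in its own notes namespace
        git notes --ref=svn-props add -m 'svn:eol-style=native' <commit-sha>
        # display it alongside the commit
        git log --notes=svn-props -1 <commit-sha>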

  52. > Later they say that other SVN properties are ignored with a comment that this could be problem for the svn:eol-style property.

    Hmmm… I wonder why reposurgeon doesn’t convert this one into an ‘eol’ .gitattribute.
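
    If it did, one plausible mapping would be svn:eol-style=native to git’s “text” attribute and svn:eol-style=LF to “text eol=lf”; a hand-rolled version, with the file patterns being examples only:

        echo '*.c   text'        >> .gitattributes
        echo '*.sh  text eol=lf' >> .gitattributes
        git check-attr text eol -- src/main.c   # confirm what now applies to a file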

  53. Jakub, Eric, what do you guys make of Mercurial’s branch history tracking? My understanding is that Mercurial is better about tracking the history of an individual change across branches, while Git tends to treat all commit nodes as an undifferentiated pool, with all relationship tracked with tags and parent links in other commit nodes.

    1. >Jakub, Eric, what do you guys make of Mercurial’s branch history tracking?

      Insufficient practical experience of it to have an opinion.

  54. > Jakub, Eric, what do you guys make of Mercurial’s branch history tracking? My understanding is that Mercurial is better about tracking the history of an individual change across branches, while Git tends to treat all commit nodes as an undifferentiated pool, with all relationship tracked with tags and parent links in other commit nodes.

    What do you mean about “branch history tracking”? I know the Git side, but Mercurial side only from research for http://stackoverflow.com/a/1599930/46058 answer.

    Both track merges i.e. store both parents of a merge commit and follow revision chain to find common ancestor to know which commits were merged, as far as I know. This is opposed to what Subversion does: tracking merged-in information, which revisions were merged in rather than the shape of the DAG of revisions.

    If you meant “rename tracking”, then yes, Mercurial can and does track renames (storing this information at commit time), while Git uses heuristic similarity-based rename *detection* (doing this at analysis time). The Git way allows for something like inter-file and intra-file rename and copy detection in “git blame -C -C”; on the other hand it requires turning it on for diffs and log, and “git log --follow file” is a bit of an underperforming hack.
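
    For reference, the relevant switches on the Git side (paths are examples):

        git log --follow -- src/foo.c    # follow a single file across renames
        git log -M -C --stat             # turn on rename/copy detection for log diffs
        git blame -C -C -- src/foo.c     # also chase lines copied in from other files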

  55. >while Git tends to treat all commit nodes as an undifferentiated pool, with all relationship tracked with tags and parent links in other commit nodes

    That is not necessarily a bad thing.

    Many years ago, a company actually hired me and an Oracle DBA to design and implement a “Code Management System”. After working out what metadata needed to be tracked, the DBA designed the DB schema. She specified 7 tables: Files, Attributes, Change_Sets, Descriptions, Labels, Branches and Members. The Change_Sets table had the fields Number, Parent and Merge_Parent. The Descriptions table had the fields Number, Change_Set and Content. The Labels table had the fields Number, Change_Set, Name and Value. The Branches table had the fields Number, Name and Change_Set. The Members table had Number, Name, Version and Change_Set. For each commit, there was a Change_Set record, a Description record, at least one Label record and a Member record for each file changed in the commit. For branch commits, there was also a Branch record.

    Not how I would have designed it by myself, but not being a DB expert, I deferred that to her. (Also, not a project I would have recommended, but key people in the company wanted this and were offering me money to do it.)

  56. @Jonathan Abbey:

    Nowadays, from what I know, Mercurial users use “bookmark” branches, now with namespaces… which have similar design to Git branches (and remote-tracking branches).

    “Named branches” (branch labels, family) turned out to be not so good, their disadvantages (e.g. cannot rename a branch, name collisions) outweighing their advantages (metahistory stored). It’s not the only concept that didn’t survive the test of time; for example, Codeville’s advanced merging algorithm turned out to be not much better than 3-way merge, but much more complicated (esp. in the case of conflicts).
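
    For anyone who hasn’t used them, a bookmark really is just a movable pointer, much like a git branch head (the branch/bookmark names below are examples):

        hg bookmark feature-x      # create a bookmark at the working revision
        hg push -B feature-x       # publish the bookmark along with its changesets
        git branch feature-x       # the closest git equivalent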

  57. “On the system my current box replaced, the mid-tower case is the only remaining original part….I’d have been happier if there were more consistency in CPU form factors and mounting, so you didn’t have to replace mobo *and* CPU because a new one of either wouldn’t work with the existing part, but it generally wasn’t a choice. An upgrade required both. Deal with it.”

    Meh….I used to upgrade piecemeal like that back in the days of 386s and 486s but no more. Nowadays it’s too much like when you arrive at your date’s place and she’s all ready to go, BUT – she can’t find the other earring! She pulls out another pair, but now the blouse is wrong…..she changes that but now the shoes are wrong….now the skirt…etc., etc., etc….

    If you want to upgrade, buy the whole thing! The designers at Dell, HP, and Lenovo choose components that are roughly at the same performance level and work well together. Replacing one component out of the group is not going to give you a big boost; you need to go up to the next model if you want that. You can save yourself a lot of time and trouble if you don’t waste your efforts on foolishly optimized custom designs.

    1. >You can save yourself a lot of time and trouble if you don’t waste your efforts on foolishly optimized custom designs.

      I want to note that for normal PCs I think this is true. It’s been nearly a decade since my last custom build. This is an unusual situation; I’m optimizing for a figure of merit that the white-box vendors don’t care about.

      There is another valid reason for custom builds: you have to go at least semi-custom to get a PC that’s really quiet.

  58. @LS: “If you want to upgrade, buy the whole thing! The designers at Dell, HP, and Lenovo choose components that are roughly at the same performance level and work well together. Replacing one component out of the group is not going to give you a big boost; you need to go up to the next model if you want that. You can save yourself a lot of time and trouble if you don’t waste your efforts on foolishly optimized custom designs.”

    Which is what I did on the current system. But the one it replaced was built from components in the first place, and some upgrades – bigger disks, more RAM, better video – simply didn’t justify getting a whole new machine. Even replacing mobo and CPU generally didn’t, as existing drives and the like were likely to be migrated.

    And the current system is one I’m not likely to replace for quite some time. About the only thing it might get is more disk storage, and that will be an external drive enclosure connecting via USB.

    The Great Beast Eric is building isn’t “foolishly optimized”. He’s building a server class machine to perform the sort of job that needs to be done on a server to be done in a reasonable time frame, and the discussion here has been about what kind of machine the task requires.

    But since CVS conversions to something better are one time events, I’m curious about what he’ll find to occupy the machine’s capacity after he’s terminated the last CVS repo with extreme prejudice.
    ______
    Dennis

    1. >But since CVS conversions to something better are one time events, I’m curious about what he’ll find to occupy the machine’s capacity after he’s terminated the last CVS repo with extreme prejudice.

      1. I don’t know.

      2. I’m pretty sure something will come up.

  59. My comment about ‘foolish optimization’ was directed at simple, general-purpose computer upgrades. I fully support The Perfect Beast Project for the reasons that esr has stated, and have contributed the grand sum of 20 bucks towards its success.

    I still say, though, that if your present computer is showing its age, stop looking at replacement motherboards. Look at replacement computers. You won’t save much, if any, money, and you may overload your power supply. (This can happen with upgraded video cards, too.) Power supply ratings tend to be rather fanciful, and higher-rated supplies for many machines simply aren’t available. Buy the whole thing. If it fails, you have a warranty.

  60. “I’m curious about what he’ll find to occupy the machine’s capacity…”

    Hey, if the machine allows esr to wipe CVS off the face of the Earth, it will have achieved its purpose and will be well worth its cost. Anything else it does will be pure gravy. He can always set it to run the SETI software or similar….

    It’s kind of interesting to see, in DMcCunney’s tongue-in-cheek question, the survival of the ancient mainframe-computer attitude, where the hardware was so expensive that any idle computer cycles were to be avoided, whatever the cost in human effort. We’re way beyond that now. The world will not end if The Perfect Beast spends its time playing games, or even gets turned off.

    @esr: The Beast’s next project might be a good blog post. Your readers might think up a few suggestions, though you’ll have to wade through the wise-guys who’ll want you to mine bitcoins or plan routes for traveling salesmen.

    1. >The Beast’s next project might be a good blog post.

      Well, the obvious next target would be Subversion. Since I happen to have also written the best Subversion-to-git translator you can find anywhere, this wouldn’t be a very hard choice.

      Which, I hasten to add, doesn’t suck nearly as badly; in fact, given what its design goals were and the state of the art at the time, the Subversion devs have little to be ashamed of (well, except maybe that it’s so slow). Subversion is obsolete, but it’s not dangerously brain-dead the way CVS is.

      So there might be better uses for the Beast.

  61. > [Subversion], I hasten to add, doesn’t suck nearly as badly [as CVS]; in fact, given what its design goals were and the state of the art at the time the Subversion devs have little to be ashamed of (well, except maybe that it’s so slow).

    Especially after svn:mergeinfo got added (in Subversion 1.5). Before that… well, branching was easy but merging was not…
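
    For the record, this is roughly what it looks like on the Subversion side (URL and paths are hypothetical):

        # merge a feature branch into a working copy of trunk;
        # since 1.5, Subversion records which revisions were merged
        svn merge https://example.org/repo/branches/feature-x

        # the record is kept as a versioned property on the merge target,
        # e.g. "/branches/feature-x:1234-1300"
        svn propget svn:mergeinfo .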

  62. @esr: “>But since CVS conversions to something better are one time events, I’m curious about what he’ll find to occupy the machine’s capacity after he’s terminated the last CVS repo with extreme prejudice.

    > 1. I don’t know.

    > 2. I’m pretty sure something will come up.”

    I’m pretty sure something will come up, too. I’m just curious about what it might be.

    It will be well worth it if all it does is kill CVS, but I don’t see it sitting around idle after it has.

    (If nothing else, it will apparently replace your current desktop, so it won’t simply be turned off after it has accomplished the mission for which it was built.)
    ______
    Dennis

  63. @LS: “I still say, though, that if your present computer is showing its age, stop looking at replacement motherboards. Look at replacement computers.”

    I mostly agree, though the issue in such cases is migrating data. (It’s a reason I’m storing more stuff in the cloud these days, and external drives connecting via USB are Wonderful Things.)

    On that line, I’ve had go-arounds elsewhere pointing out that most folks didn’t upgrade Windows versions on existing machines. They got a new version of Windows when they got a whole new machine with the newer version pre-installed. Linux users generally do upgrade in place, but Linux is structured differently and is easier to upgrade that way.

    MS’s recent reorganization of Windows development has, as a side effect, made Windows easier to upgrade in place. Current consumer systems are big and fast enough that the need to buy a whole new machine to get better performance has dropped, and MS is likely looking at getting more revenue from in-place upgrades. Their challenge is providing features in newer versions that users will find worth paying for.
    ______
    Dennis

  64. >“Named branches” (branch labels, family) turned out to be not so good, their disadvantages (e.g. you cannot rename a branch, name collisions) outweighing their advantages (storing metahistory)

    I read the post Jonathan linked to.

    Seems to me that a “best of both” could be accomplished by using a UUID to identify branch membership. Then any name could be changed or removed as desired.

  65. @RonW: there is an additional problem: *local* branch names are local, and do not always correspond to branch names in the public repository; e.g. my ‘master’ or ‘memory’ branch might be ‘jn/memory-compaction’ in the public repository. Git maintains (to some extent) a mapping between local branch names, remote branch names and local remote-tracking branch names.
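
    For example, the mapping can be spelled out explicitly with a refspec, using the names from above:

        # push the local 'memory' branch to 'jn/memory-compaction' on the server
        git push origin memory:jn/memory-compaction

        # have the local branch track that remote branch from now on
        git branch --set-upstream-to=origin/jn/memory-compaction memory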

    Also, UUIDs can be hard to do correctly in a distributed environment, and with repeated branch names (in the same repository).

  66. >Also, UUIDs can be hard to do correctly in a distributed environment, and with repeated branch names (in the same repository).

    UUIDs appear to be used in creating commit IDs. If that’s so, then the same mechanism should work for branch IDs.

    As for branch names being local, that’s fine. A UUID-based branch ID would allow arbitrary renaming while still preserving the historical metadata Jonathan (and others) value.

  67. > UUIDs appear to be used in creating commit IDs.

    In Git? Git’s commit IDs are SHA-1 digests of the commit, which are in turn based on the commit’s parents and its root tree (which is in turn based on the SHA-1 digests of its constituents). This is one of the key design decisions that gives Git many of its properties.
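
    You can check this in any repository: re-hashing a commit’s raw content reproduces its ID.

        # print the raw commit object, then hash it the way Git does
        git cat-file commit HEAD | git hash-object -t commit --stdin

        # ...which prints the same SHA-1 as
        git rev-parse HEAD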

    Branches in Git are nothing but a floating pointer to a commit. If you have that branch checked out when you commit, the pointer floats up to the new commit. They are not the same construct as in other VCS systems, where revisions belong to a particular branch. In Git there is only the revision graph, plus branch pointers. What route through the graph a given branch pointer took is not recorded, since it’s entirely academic: the result is the same at the end. And branch pointers are not copied between repositories in a way that can collide (in Git, branch pointers from remotes arrive in their own namespace, and branch pointers are only pushed to remotes explicitly).
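
    A quick way to see that a branch really is just a pointer (assuming the ref hasn’t been packed into .git/packed-refs):

        # a branch is a tiny file containing nothing but a commit SHA-1
        cat .git/refs/heads/master

        # the same thing through the porcelain
        git rev-parse master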

    This is an entirely sensible decision in a DVCS, as pointed out, because people may choose to use the same branch names on different clones. If you’re doing a conversion from a CVCS, it doesn’t present a problem, since you can usually construct a coherent set of branch names that don’t collide (even if the history includes multiple branches with the same name, you can do things like append the revision number they started at to their names).

  68. >Branches in Git are nothing but a floating pointer to a commit.

    I know that.

    I also read the post linked by Jonathan (http://jhw.dreamwidth.org/1868.html)

    My point was/is that by using something like a commit ID as a branch ID, people like Jonathan could have the benefits (metadata preservation) without the problems (name collisions, no renaming).

    Any name associated with a branch could be local only and arbitrarily changed/removed.

  69. This gets further and further off-topic…

    @RonW: commit identifiers are the SHA-1 of the contents (plus type), which include the commit message, the tree object (the contents of all files) and the author/committer info, including dates. A branch lacks all this; it is only a name.

    The problem with UUIDs as commit labels (branch in the sense of “family”, or “named branches” in Mercurial slang) is similar to the problem with rename tracking, as opposed to rename detection. Branch names are (or can be) ephemeral: besides wholesale renames, branches can be split or joined.

    For example, without --no-ff a branch can vanish if the branch you are merging into didn’t advance at all. Or perhaps you started a bugfix, but it turned out that the bug is present only when using a new feature, so you rebase the bugfix on top of the appropriate feature branch. Or perhaps something that was two features works better as one (preparation and execution).
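
    Concretely (the branch name is made up):

        git checkout master
        git merge bugfix          # fast-forwards: no merge commit, the branch leaves no trace in the graph
        git merge --no-ff bugfix  # alternative to the line above: always create a merge commit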

    You might want to split a branch, for example if preparatory work has worth on its own. Or it turns out that it is two features, not one.
