Reposurgeon defeats all monsters!

On January 12th 2020, reposurgeon performed a successful conversion of its biggest repository ever – the entire history of the GNU Compiler Collection, 280K commits with a history stretching back to 1987. Not only were some parts of that history in CVS, the earliest portions predated CVS and had been stored in RCS.

I waited this long to talk about it to give the dust time to settle on the conversion. But it’s been 5 weeks now and I’ve heard nary a peep from the GCC developers about any problems, so I think we can score this as reposurgeon’s biggest victory yet.

The Go port really proved itself. Those 280K commits can be handled on the 128GB Great Beast with a load time of about two hours. I have to tell the Go garbage collector to be really aggressive – set GOGC=30 – but that’s exactly what GOGC is for.
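
For anyone who hasn’t tuned a Go program this way: GOGC is the percentage of new allocation, relative to the live heap left by the previous collection, that triggers the next collection. The default is 100; lower values collect more often, trading CPU for a smaller peak heap. You can set it in the environment or, equivalently, from inside the program. A minimal sketch, not reposurgeon code:

package main

import "runtime/debug"

func main() {
	// Equivalent to launching with GOGC=30 in the environment: collect when
	// the heap has grown 30% beyond the live data left by the previous collection.
	debug.SetGCPercent(30)

	// ... allocation-heavy work, such as loading a huge dump, goes here ...
}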

The Go language really proved itself too. The bet I made on it a year ago paid off handsomely – the increase in throughput from Python is pretty breathtaking, at least an order of magnitude and would have been far more if it weren’t constrained by the slowness of svnadmin dump. Some of that was improved optimization of the algorithms – we knocked out one O(n**2) after translation. More of it, I think, was the combined effect of machine-code speed and much smaller object sizes – that reduced the working set a great deal, meaning cache misses became less frequent.

Also we got a lot of speedup out of various parallelization tricks. This deserves mention because Go made it so easy. I wrote – and Julien Rivaud later improved – a function that would run a specified functional hook on the entire set of repository events, multithreading them from a worker pool optimally sized to your machine’s number of processors, or (with the “serialize” debug switch on) running them serially.

That is 35 lines of easily readable code in Go, and we got no fewer than 9 uses out of it in various parts of the code! I have never before used a language in which parallelism is so easy to manage – Go’s implementation of Communicating Sequential Processes is nothing short of genius and should be a model for how concurrency primitives are done in future languages.

Thanks where thanks are due: when word about the GCC translation deadline got out, some of my past reposurgeon contributors – notably Edward Cree, Daniel Brooks, and Julien Rivaud – showed up to help. These guys understood the stakes and put in months of steady, hard work along with me to make the Go port both correct and fast enough to be a practical tool for a 280K-commit translation. Particular thanks to Julien, without whose brilliance and painstaking attention to detail I might never have gotten the Subversion dump reader quite correct.

While I’m giving out applause I cannot omit my apprentice Ian Bruene, whose unobtrusively excellent work on Kommandant provided a replacement for the Cmd class I used in the original Python. The reposurgeon CLI wouldn’t work without it. I recommend it to anyone else who needs to build a CLI in Go.

These guys exemplify the best in what open-source collegiality can be. The success of the GCC lift is almost as much their victory as it is mine.

81 comments

  1. George raises an interesting question. What’s next for reposurgeon?

    Is Gnu CC the biggest public commit graph out there? The most convoluted? Are any others simply waiting for their two-hour appointment in Malvern?

    Are there commit graphs of similar nastiness in the private sector? Could Eric sell his services? $400 for a two-hour convert, say?

    Does optimized reposurgeon still require the 128GB Beast, or can it run on something closer to modern desktop or corporate iron? Or on the cloud?

    Are there more features in the future? What else might we want to do to a repo?

    1. >Is Gnu CC the biggest public commit graph out there? The most convoluted? Are any others simply waiting for their two-hour appointment in Malvern?

      A&D semi-regular Zygo told us back in 2013 of this: “I brought 240GB of RAM, 1TB storage, six weeks of time, and reposurgeon to a fight with a 90GB, 700 kilocommit, 95 kilobranch, 144 megafile SVN repo. The SVN repo won.”

      I think Version 4.4 might be able to handle that action now. That’s only 2.5x the size of GCC; a 256GB machine ought to take it in stride, or maybe even the 128GB Beast with GOGC=10 or so.

      >Are there commit graphs of similar nastiness in the private sector? Could Eric sell his services? $400 for a two-hour convert, say?

      You don’t get a quality conversion in two hours. On repos of even non-monstrous size it’s at least a day of work to get all the corners tucked in properly.

      I actually have a paying gig promised when a particular product manager gets back from vacation. Not going to talk about it until the deed is done, but the content being rescued will be very interesting to this crew, I can guarantee that.

      I’d love to have more of this kind of work. People who understand that they need it will pay through the nose for it. Unfortunately a lot of managers don’t understand when they need it – they’re not sensitive to the downstream costs of a bad conversion and are thus all too willing to have some junior programmer in-house botch the job so they “save money”.

      >Does optimized reposurgeon still require the 128GB Beast, or can it run on something closer to modern desktop or corporate iron? Or on the cloud?

      I think you could now do run-of-the-mill conversions (up to, say, 10 years of history with a few dozen devs) on normal desktop iron. The working-set requirement has been drastically shrunk.

      >Are there more features in the future?

      The next thing I hope to add is the ability to excavate ClearCase repositories.

      1. > Unfortunately a lot of managers don’t understand when they need it – they’re not sensitive to the downstream costs of a bad conversion and are thus all too willing to have some junior programmer in-house botch the job so they “save money”.

        I am not even sure why conversion of history is necessary instead of just taking the recent version and hand-migrating some bits when some of the recent changes are buggy. But that’s OK, my last 18 years were spent in a very different, technically MUCH easier kind of programming. One could as well call it scripting.

         So let’s play a game. Some clueless CEO mistakenly makes me a manager overseeing this technically very hard kind of programming (aka: REAL programming) because the CEO thinks “code is code”. So I am gonna LARP Mr. Clueless Suit now. Can someone give me a sales pitch how doing this saves thousands of $$$ developer $$$ hours instead of just taking the recent version and hand-migrating whatever is buggy in the recent version?

         I am offering this game because 1) it could be fun, 2) someone might need to make this pitch, so practicing before the real one could come in useful. Warning: I might be playing it purposefully stupid :)

          1. >Can someone give me a sales pitch how doing this saves thousands of $$$ developer $$$ hours instead of just taking the recent version and hand-migrating whatever is buggy in the recent version?

          That’s easy. You do a shallow conversion, head revision only. Then you get a regression report with a way to reproduce the problem. What you want to do is bisect in the new history to identify the revision where the bug was introduced. Bzzzt! You can’t. That history is missing in the new system.

          Yes, in theory you could run a manual bisection using bracketing builds in new and old repositories. Until you have tried this, you will have no comprehension of how easy it is to get that process slightly but fatally wrong, and (actually more importantly) how difficult it is to be sure you haven’t gotten it wrong. This is the kind of friction cost that sounds minor until it eats man-weeks of NRE.

          So congratulations, tracing the bug just got an order of magnitude more expensive in engineer time, and your expected time to fix changed proportionally. It typically only takes one of these incidents to justify the up-front cost of having had the conversion done right.

          Comes down to risk management really. If you go the shallow-conversion route, maybe you’ll get lucky and never need visibility further back. Or maybe you’ll have a disaster because you increased the friction costs of debugging just enough that you, say, miss a critical ship date. The more experienced with in-the-trenches software development you are, the more plausible that second scenario will sound.

          A subtler issue is that by losing the old change comments you have thrown away a great deal of hard-won knowledge about why your code is written the way it is. Again, this may never matter – but if it does, it’s going to bite you on the ass, hard, probably when you least expect it.

          And if you say “No problem, the old repository will still be around”…heh. Repositories that have become seldom-accessed are like other kinds of dead storage in that they have a way of quietly disappearing because after a few job turnovers the knowledge of why they’re important is lost. Typically you don’t find out this has happened until you have an unanticipated urgent need, at which point whatever shit you are in gets deeper.

          Nobody in my corner of software engineering considers shallow conversions an acceptable risk. Nobody. We’ve learned that the hard way.

            1. I’m gonna be fried for this I know – but devil’s advocate. Find a bug? What’s wrong with just fixing it as if it had never existed before? Rather than channelling a perfect 5-year-old commit history to ‘identify’ where it was introduced. I’m guessing that even if you managed to do this, there’s a big gap between ‘identifying’ and ‘fixing’. And there’s 5 years’ worth of commits to fold in. I’d rather pretend that it was introduced yesterday.

            1. Because in any non-trivial (and even in many trivial ones!) program, bugs can be maddeningly difficult to solve. “With sufficient eyeballs all bugs become shallow” derives its power from that problem.

              Bisection gives you a homing device to discover exactly where the bug is happening. Not perfect of course: you might be dealing with a layered bug. But anything helps here.

              1. Interestingly, last night’s episode of The Simpsons (now in its fifth decade) featured Eric’s idea that many eyeballs make all bugs shallow.

                Professor Frink became a billionaire by developing a new cryptocurrency. Mr. Burns, upset that he was no longer the wealthiest person in Springfield, hired hackers to develop an even better cryptocurrency. They were unable to do so, but they wrote an equation whose solution would make all cryptocurrency worthless. Burns said that was fine, since he didn’t want more wealth so much as he wanted to be the richest person. The hackers said it would take them 90,000 years to solve. So someone came up with the idea of putting the whiteboard with the unsolved equation in the town square so everyone can try solving it. Sure enough, the next morning someone had written a solution, and the first rule of sitcoms (by the end of the episode, everything has to go back to the way it was) was satisfied.

            2. >I’m gonna be fried for this I know

              Nah. Not by me, anyway. New variant of Hanlon’s razor: “Never mistake for outright stupidity that which can be explained by inexperience.” You sound inexperienced to me, not stupid.

              >I’m guessing that even if you managed to do this, there’s a big gap between ‘identifying’ and ‘fixing’

              So, that’s wrong. For most bugs, characterizing them – grasping what went wrong – is most of the work. You can find exceptions, but this is the rule.

              1. Yes. In my last job, the developers told me how much they appreciated my bug reports. I could define the exact behavior required to trigger the bug such that they could go to the piece of code at fault, slap their foreheads and say “well, of course that’s wrong!” and have a patch in no time.

                1. Lemme make a guess. You are a developer, and the software in question is a development tool. Well, when cooks cook for other cooks, they have the disadvantage that the standards of their customers gonna be very high, but they have the advantage that they are usually able to tell pretty exactly what is wrong with it. This is, at least, what I would expect.

                  I am blessed with supporting users who see an error message like “tax code field of the customer should not be empty” and report it as “it does not work” without even a screenshot. But OTOH the technical part of my job is really more scripting than programming.

                  The job market seems to implement a certain kind of… justice. That is, if you are paid decently and some aspect of your job is easier then it is guaranteed that some other aspect of it is gonna be harder. And the other way around, too, like if the developers in the business of building development tools have one thing easy like good bug reports, something else – like expectations – gonna be hard.

            3. Fixing the bug is only half the problem (and the easier half at that). The harder half is *not introducing new bugs*. Whatever change introduced the bug was done for a reason, if you just take another stab at “getting it right”, without reading the commit messages and understanding why it was done that way in the first place, you’re far more likely to introduce some other bug, which will only get detected later.

            4. Consider the following parable:

              Other People’s Code

              Recently I was asked why programmers hate working with other people’s code. I had to ponder for a while how to convey to the uninitiated the full scope of the Charlie Foxtrot, and then decided to try a little analogy.

              Imagine if you will, you’re tasked with wrapping up a research lab that was started by a different foreman. So you come to the site, and besides the unfinished building you see: a giant fan as big as the building itself, a huge hot air balloon, and one of the rooms is filled with floor mops to the ceiling. You scratch your head, spend a week getting rid of all this junk, and finish up the lab. An hour after you go live the scientists run out yelling, “POISON GAS LEAK!!!”

              – “How the hell? It should all work!” you scream as you dial the previous foreman in desperation.
              – “Johnny, we’re having a poison gas leak! How’s that possible?”
              – “Don’t know, should all be working. Did you change anything in the project?”
              – “Not really, got rid of the floor mops…”
              – “The mops were holding up the ceiling!”
              – “What??? What the actual fuck???”
              – “I’m telling you, the floor mops were holding up the ceiling. The gas holding tanks were too heavy, we had to stuff the room underneath with mops.”
              – “Couldn’t you at least leave a note on the door?! Poison gas is leaking all over the place! What do we do?”
              – “Turn on the fan. It’ll blow the gas off the island.”
              – “That’s the first fucking thing I took down!”
              – “Why?”
              – “Why did you build a 120 ton fan? Why couldn’t you install a box of GAS MASKS for fuck’s sake?”
              – “Because a box of gas masks you have to look for in an emergency. Besides, I had the fan left over from a prior project.”
              – “Johnny, I threw out your damn fan! We’re suffocating over here!”
              – “Then why the hell are you still talking to me? Get in the damn balloon and get the fuck out of there!”

              And this, boys and girls, is why you need revision history. Before you toss out the brooms and mops, find out why they were put in there in the first place. They might be holding up the ceiling.

              1. I generally tell this story with the following addendum:

                The saddest thing, though, is when they investigate the aftermath of the debacle, they discover the following shitty truth.

                Midway through construction Johnny realized the weight of the gas tanks was incorrectly specified. The tank vendor told the PM the weight of the empty tanks, and that’s the weight that went into the specs. Then the PM quit and went to work for a competitor, and the new PM didn’t double-check the numbers. He was a music theory major, you see, and not very technical.

                At this point, it would be completely cost prohibitive to redesign the lab from scratch, and upper management refused to lower the amount of gas worked on since they already made a press release about “highest volume lab in the world”. Using hydraulic jacks was a non-starter, because the regulator would never sign off on it. But a room filled with mops is technically just storage…

                And of course none of this could be put in writing…

            5. The bug was introduced in the course of fixing some problem, or adding some feature.

              If you don’t know why the code was written the way it was written, you will probably break some ancient functionality that no one remembers, but everyone uses without being aware they are using it.

              A source code repository is a vast pile of hard won knowledge. When you are fixing ancient bugs, you will need that ancient knowledge.

          2. On the other hand “shallow conversion” is how Linux kernel conversion from BitKeeper to Git went, if I remember it correctly.

            Though there is a “historical repository” with the BitKeeper-kept history (and maybe even one recovered from tarballs and patches) that can be joined to the current one in Git (originally using the “grafts” mechanism, nowadays the much safer and more universal “git replace” mechanism).
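
            For the curious, a sketch of what that joining looks like with the modern mechanism; the remote path and the two commit names below are placeholders, not the real Linux object IDs:

            git remote add history /path/to/historical-linux.git
            git fetch history
            # Locally rewrite the parentage of the first commit in the modern repo
            # so it appears to continue from the tip of the historical one. The
            # replacement ref stays local unless you publish refs/replace/*.
            git replace --graft <first-modern-commit> <tip-of-historical-history>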

            1. >On the other hand “shallow conversion” is how Linux kernel conversion from BitKeeper to Git went, if I remember it correctly.

              Betcha very few people understood bisection then. That ignorance would make it more difficult to grasp what you’re throwing away.

              1. > Betcha very few people understood bisection then. That ignorance would make it more difficult to grasp what you’re throwing away.

                Well, `git bisect` as a command didn’t even exist then, i.e. at v0.99 (though it was added not much later, in v0.99.4, according to Git Chronicle talk slides from 2008; `git bisect run` for automated bisection and `git bisect skip` for those untestable commits are even later inventions).
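
                For anyone who hasn’t used the automated form, it looks roughly like this; the revisions and the test script name are illustrative, not a recipe for any particular bug:

                git bisect start
                git bisect bad HEAD               # a revision known to show the bug
                git bisect good v2.6.12           # a revision known to be clean
                git bisect run ./test-for-bug.sh  # exit 0 = good, 125 = skip, other = bad
                git bisect reset                  # return to where you started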

          3. The “solution” (read, lazy and problematic way) to ensure the old repo sticks around is to check it in (the whole bloody thing) to the new repo. There will still be lots of pain later when the team has turned over several times and the codebase looks totally alien, but it does keep the old repo from getting shredded when the original drives go off to the secure-erasure department.

            1. >The “solution” (read, lazy and problematic way) to ensure the old repo sticks around is to check it in (the whole bloody thing) to the new repo.

              Even if this were not an intrinsically ugly and horrifying idea, I wouldn’t do it. On long enough timescales repository formats are brittle – you could end up with a massive binary blob you can no longer use.

              1. Aye, and given the existence of something like reposurgeon, it’s best to do it mostly right. Even if you only ever do the automatic, non-tailored steps, that’ll probably cover you in at least as many cases as statically including the old repo, without requiring chasing down the old tools 5-10 years from now.

          4. So how about we just make a little script that checks out one build per month from CVS, and then checks that in into Git? Or perhaps per day for the last year, per week for the last 5 years, per month for 10 years and then per year?

            1. Eric has previously blogged over at the NTPsec site about how he programs in a “small steps, provably correct” style, and previously here about how such a methodology with very frequent check-ins of changes has Saved His Butt.

              Sampling the history of an old repo, instead of converting the whole thing, would totally defeat this – with the subsequent failure mode described in the comment at the head of this thread.

              1. “Release early, release often.” Yes, this is something the suit I am LARPing might have reasonably heard about. “This is something like unit testing, Mr. Suit; if this version does not work and the previous one does, the difference between them is small, something close to a unit…”

          5. ESR> A subtler issue is that by losing the old change comments you have thrown away a great deal of hard-won knowledge about why your code is written the way it is.

            And if the programmers haven’t put comments in the code explaining why the “obvious way” to code this bit doesn’t work, due to some very non-obvious considerations, but instead put it into the change comments, someone ignorant of that history is doomed to repeat it.

            Chesterton’s Fence strikes again.

            And I have one more reason to argue in favor of those comments in the code.

    2. > Is Gnu CC the biggest public commit graph out there? The most convoluted?

      (Net|Open)BSD maybe? I don’t care to re-learn CVS again to satisfy my curiosity about how immense a task that might be, but I would expect that to be an immensely gnarly one.

      Did you ever approach the BSDs ESR, or was it an even bigger “nope” than GCC and Emacs should have been to any normal person? :)

      1. >Did you ever approach the BSDs ESR, or was it an even bigger “nope” than GCC and Emacs should have been to any normal person? :)

        I was in contact with the repo maintainers of one BSD for a while. It’s actually smaller than Emacs and I did one trial conversion without much difficulty, but the effort just sort of petered out – the people on the BSD end apparently lost interest.

  2. Eric’s final contribution to the sociology and history of hackerdom:

    A complete and forward-portable-to-whatever-replaces-git conversion of the change-history of the Linux kernel going all the way back to Linus’ first Usenet post.

    –Shannon

    1. >Eric’s final contribution to the sociology and history of hackerdom:

      It could happen. Dunno why you’d expect it to be my last one, though.

      1. Not your last, but “final” in the “engraved on your tombstone” sense.

        Also, I posted this as one of those “this would be awesome if somebody else did it” ideas… like most of my ideas. That you have already thought about it simultaneously makes me happy and surprises me not a bit.

        –Shannon

    2. Funny enough, the Linux repo has a tag that’s special only in that it might not cleanly translate to other VCSes: v2.6.11 points to a tree rather than a commit.

      Git tags can point to any kind of object (blob, trees, commits, and even tags). Others may not be so flexible :)
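
      You can see that for yourself in any repository; the tag name below is made up:

      tree=$(git rev-parse HEAD^{tree})          # peel HEAD down to its tree object
      git tag -a tree-tag -m "points at a tree" $tree
      git cat-file -t tree-tag^{}                # prints "tree", not "commit"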

      1. What reposurgeon might do in this situation is fix this problem, make v2.6.11 point to a commit, like it should (and not simply drop it).

  3. Simply print out and scan each revision of the source code.
    OCR software takes care of the rest.
    Easy!

      1. That was specifically designed to overcome the pitfalls of OCR.

        …back in the 60s. And it wasn’t OCR then, it was MICR (Magnetic Ink Character Recognition).

        Modern OCR software can read even grotty prints with reasonably high accuracy.

          >…back in the 60s. And it wasn’t OCR then, it was MICR (Magnetic Ink Character Recognition).

          Looking at my own checks, they indeed use MICR (the shapes are similar, just bolder). I’m pretty sure I’ve seen OCR-A and B around though.

          >Modern OCR software can read even grotty prints with reasonably high accuracy.

          I’ve never seen one that even approaches usability. It probably is honestly better than it used to be, but they always seem to stumble even on the cleanest of prints and generally unambiguous fonts like Roman (letters like I and l are easily confused; you probably can’t even tell which is which on this blog).

          1. And that’s why I demand that my technical fonts are absolutely unambiguous. “I”, “l” and “1” are obviously distinct, ditto “g” and “q”, “O” and “0”, etc.

  4. > at least an order of magnitude and would have been far more

    Heh. “At least an order of magnitude” … I’d be *very* happy to take an order of magnitude improvement in any codebase for which I’m responsible.

  5. V. interesting. Much more interesting than your posts on politics and conspiracy theories. (Though “lizard people are psychopaths” was OK.)
    You obviously have great talent — far beyond mine — when it comes to computing and related topics.
    But your posts that discuss political theory and similar display a deplorable ignorance in that area.
    (But at least you are not an anti-vaxxer or a similar variety of crazy.)

      1. >Go tip your fedora, faggot.

        This is a ban warning. While “Kansas City” does not seem like the sharpest knife in the drawer, mindless poo-flinging is not tolerated on this blog.

        If you must insult another commenter, do it with argument and substance. In this case, a demonstration that I am far from “deplorably ignorant” would have been quite easy and provided substance.

        1. Warning heeded. I thought you might have a chuckle at it, given the context of this post and the one on the previous thread. No worries, won’t happen again.

    1. I’m going to say that you may be a monster, but you’re probably not an awl-monster. The title clearly indicates that Reposurgeon de-feets awl-monsters.

  6. > That is 35 lines of easily readable code in Go, and we got no fewer than 9 uses out of it in various parts of the code!

    Can I find this somewhere in the source code of reposurgeon? I would be interested in your implementation. I also wrote something like a worker pool in Go, but mine wasn’t very reusable and it was certainly more than 35 lines of code. So I would be quite interested in how you solved that problem.

    Appreciate it!

    1. >Can I find this somewhere in the source code of reposurgeon?

      I can do better than just pointing at it…

      // walkEvents walks an event list applying a hook function.
      // This is intended to be parallelized.  Apply only when the
      // computation has no dependency on the order in which commits
      // are processed.
      func walkEvents(events []Event, hook func(int, Event)) {
      	if control.flagOptions["serial"] {
      		for i, e := range events {
      			hook(i, e)
      		}
      		return
      	}
      
      	var (
      		maxWorkers = runtime.GOMAXPROCS(0)
      		channel    = make(chan int, maxWorkers)
      		done       = make(chan bool, maxWorkers)
      	)
      
      	// Create the workers that will loop through events
      	for n := 0; n < maxWorkers; n++ {
      		go func() {
      			// The for loop will stop when channel is closed
      			for i := range channel {
      				hook(i, events[i])
      			}
      			done <- true
      		}()
      	}
      
      	// Populate the channel with the events
      	for i := range events {
      		channel <- i
      	}
      	close(channel)
      
      	// Wait for all workers to finish
      	for n := 0; n < maxWorkers; n++ {
      		<-done
      	}
      }
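
      A typical call looks something like this; the field name and the hook body are illustrative rather than copied from a real call site:

      walkEvents(repo.events, func(idx int, event Event) {
      	// per-event work with no dependency on processing order goes here
      })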
      
        1. >Eric, I think you wanted to put a “code block” here, but all I see is a blank. :-/

          Fixed.

          That was weird. I made a markup error that screwed up the public view, all right, but I couldn’t tell because the code actually rendered in my administrator view.

      1. I’m not going to comment about the curly brace formatting….I’m not….really I’m not.

        EXCEPT TO SAY THAT I EXPECT TO SEE YOU IN HELL RECTALLY CHOKING ON EVERY SINGLE ONE OF THEM

        But that’s not a critique, just an ephemeral finesse.

        YOU ALL DESERVE CANCER OF THE POOP

        I didn’t say that, however……..

        1. It’s actually not his fault. It’s what go fmt enforces. There is One True Formatting Style in Go, and you will use it, dammit! :)

          1. Go’s implicit semicolon rules also mean it cares a lot more about whitespace than most other curly-brace languages out there.

            You can put curly braces wherever you want in C, C++, or Rust – for example, with the opening brace on the next line – and it’ll work fine, since whitespace doesn’t matter in those languages (the Rust community generally expects rustfmt’d code, where the opening brace is also on the same line as the statement, but it’s not language-enforced…). In Go, this will be a syntax error.
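
            A toy illustration of the difference; x and doSomething are just stand-ins:

            // Inside some function body, this compiles:
            if x > 0 {
            	doSomething()
            }

            // This does not: the lexer inserts a semicolon after the "0" at the end
            // of the line, so the compiler sees "if x > 0;" followed by a stray block.
            if x > 0
            {
            	doSomething()
            }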

            1. Can I just say how much I detest automatic semicolon insertion?

              Any time someone tells you Javascript or Go don’t require semicolons, they are lying. It requires them, it just makes its best guess as to where they should be. That’s not quite the same thing.

              I may be a bit biased since Allman brace style doesn’t work right because of it, and that’s how I prefer to write.

              1. Although I don’t prefer Allman, I do appreciate greater flexibility for preferences. Although one can opt out of using “go fmt”, there are still constraints like this that don’t allow you to go all the way.

                In the other example, while it is normally expected to use rustfmt, whitespace still (mostly) doesn’t matter in Rust. rustfmt prefers K&R style braces. Rust the language does not enforce that; you can use Allman or anything else you may wish, just don’t run rustfmt.

                1. >Although I don’t prefer Allman, I do appreciate greater flexibility for preferences. Although one can opt out of using “go fmt”, there are still constraints like this that don’t allow you to go all the way.

                  I am conflicted about this. On the one hand I too find it irritating that Go is quite so inflexible in that particular way. On the other hand…I see the Go designers’ point.

                2. It’s not just the style restrictions really. I mistrust any attempt to implement DWIM, and semicolon insertion is just a special case of that.

                  1. >It’s not just the style restrictions really. I mistrust any attempt to implement DWIM, and semicolon insertion is just a special case of that.

                    There are, for risk-assessment purposes, two different kinds of DWIM. One kind, “simple”, has DWIM rules that are easily modeled with an unaided Mark I brain. You can look at the pre-DWIM state and from that correctly and confidently deduce the post-DWIM state. This kind of DWIM is not terrible. You can trust it because you can anticipate it.

                    Complex DWIM has rules that won’t fit in your brain’s working set. Because it won’t, your reasoning about them will be very slow and will probably introduce defects. Complex DWIM is bad, and you are right to think that anyone who implements it has made a design error that will continually come back to bite you on the ass.

                    I think Go’s semicolon-insertion rules are simple DWIM. But I’m not arguing with your implied general claim, because I think most DWIM rules are overcomplex failures. Expecting that Go’s will not be an exception is reasonable.

                    1. Every heuristic has exceptions, including this one. :)

                      I think you’re right about it being relatively simple. I have to know that the DWIM even exists to model its effects, and I could easily see myself missing that detail up until the moment it bit me. OTOH, I don’t see it taking long to figure out given the internet as a resource.

            1. Set your editor to run ‘gofmt’ on every save. Then you can write your code how you want, and gofmt will fix it into the official Go style for you. The adoption of a single house style for all code, and an official tool to reformat code in that style, is a huge win for Go in terms of code clarity, consistency, and heading off irrelevant and time-wasting piss fights among programmers.

  7. If the Lord existed, you’d be doing his work. In open source today, there’s a bit of a surplus of dilettantes jockeying for power and a dearth of real hackers doing the due diligence to ensure that our infrastructure, and our history, don’t collapse or dissolve. This is why I put $30/month into Loadsharers and you have my gratitude for all your hard work.

  8. > That is 35 lines of easily readable code in Go, and we got no fewer than 9 uses out of it in various parts of the code! I have never before used a language in which parallelism is so easy to manage – Go’s implementation of Communicating Sequential Processes is nothing short of genius and should be a model for how concurrency primitives are done in future languages.

    Concurrent C++ requires damn good people; it’s hard, requires a lot of thought and a lot of programming effort, and leads to mysterious bugs that never show up in your test suite but blow up in the demo.

    Rust is struggling to figure out how to model and represent concurrency to the programmer. Their plan keeps changing.

    Node.js has an amazingly good solution to concurrency. It is event oriented, so though your code executes serially, the disk access and network IO happen in parallel, making it effectively concurrent. If you have event orientation, your need for concurrency is considerably reduced.

    Go contains a representation of the formalism of Communicating sequential processes, as Fortran contains a representation of algebraic formulae.

    1. In C++ you could write,

      #include <algorithm>   // std::for_each
      #include <execution>   // std::execution::par (requires C++17)

      std::for_each(std::execution::par, std::begin(events), std::end(events), [](event& e)
      {
      // your hook code goes here
      });

    2. Rust provides strong concurrency guarantees using the borrow checker and type system. Data races are compile-time errors in Rust.

      Does Go?

      (Spoiler warning: no.)

    3. > If you have event orientation, your need for concurrency is considerably reduced.

      No, it’s not reduced, only constrained and half-hidden.

      It’s easy to do event-oriented programming in Go. An event is just a struct in a channel. Channel reads (and writes, I think – not certain about that) invoke the scheduler, so you get fair scheduling and asynchrony across channels. Events in flight are just the contents of all active channels. There is a pleasing explicitness about this – no state is hidden, it is easy to reason about.
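
      A toy sketch of that style, nothing reposurgeon-specific: the event is a plain struct, and the channel carrying it is the event queue.

      package main

      import "fmt"

      // An "event" is just a value; the channel is the queue.
      type event struct {
      	kind string
      	data string
      }

      func main() {
      	queue := make(chan event, 16)
      	go func() {
      		queue <- event{"open", "repo.svn"}
      		queue <- event{"close", "repo.svn"}
      		close(queue)
      	}()
      	// Reads block and yield to the scheduler, so producer and consumer
      	// interleave without explicit locking or shared state.
      	for ev := range queue {
      		fmt.Println(ev.kind, ev.data)
      	}
      }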

      On the other hand, the model you ascribe to Node.js is weaker – in particular, it doesn’t sound like you can write a worker pool for in-memory-only operations on a multiprocessor, like the Go code I posted upthread.

      That’s a symptom. Looking a level deeper, what appears from your description is that the design of Node.js has an orthogonality failure. Concurrency is entangled with I/O, rather than being a freestanding primitive of an exposed threads implementation that I/O uses.

      I’m not holding out Go as a perfect language, but unless I have gravely misunderstood your description of Node.js Go wins this one hands down – it’s both theoretically cleaner and more powerful in practice.

      1. You pretty much got it in one. Node was designed for server applications with lots of concurrent connections; its concurrency model is basically one giant select(2) loop. Concurrent asynchronous tasks yield when their I/O blocks. It has been shown that this event-based approach is faster than threading (on a single CPU core) if your goal is to service many network connections at one time.

        What you speak to is that Go is designed for more general purpose concurrency. Your successful use of, and writings about Go have (together perhaps with things like the Biscuit project, a viable POSIX kernel in Go) opened my eyes to Go’s generality and flexibility, of which I had considerable initial doubt.

        That said, Rust is still the future of computing. Borrow checking is such a profoundly transformative paradigm for ensuring memory safety that it will be used everywhere, even in languages not named Rust.

        1. You have no idea what “considerable initial doubt” is until your cat licks your corned beef sandwich.

  9. So, obviously the Go port worked, and worked in an outstanding fashion. ESR, do you think the problem set grew beyond Python’s ability to scale, or is there something in the nature of the language that makes this kind of problem intractable in human timescales?

    1. >So, obviously the Go port worked, and worked in an outstanding fashion. ESR, do you think the problem set grew beyond Python’s ability to scale, or is there something in the nature of the language that makes this kind of problem intractable in human timescales?

      Both. Or, rather, the first explains the second.

      Python objects are very heavy – a simple string is a minimum of 40 bytes. The interpreter is slow, pretty much an inevitable consequence of the extreme late-binding semantics. As the size of your data set scales up that means among other things that you’re going to start seeing OOMs relatively fast. Your working set will be ponderous and incur frequent cache misses.

  10. I honestly can’t find anything that’s so “wrong” with the email in question; the only thing I found was that a Christian might’ve clutched their pearls.

    Anyhow, there’s two aspects to consider:

    1. In general, not any specific case, it’s good practice to apply the Golden Rule + only be as measured rude as essential for the desired impact + have good self-awareness to at least treat people with common decency to get along.

    2. Crybullies, thin-skinned cry-babies, overreactions, histrionics, PCness, and cancel culture can all jump in the proverbial dumpster-fire where they belong. It’s not liberalism or conservatism; it’s the product of sheltered, immature, weak kidults who believe social fascism and no due-process like totalitarian regimes is okay. This type of people need to sell newspapers door-to-door and at least try to get out of their microscopic comfort zones for at least a minute per day.

  11. >Go’s implementation of Communicating Sequential Processes is nothing short of genius and should be a model for how concurrency primitives are done in future languages.

    No. No, no, no, no, no. Go’s implementation of CSP is, like everything else about Go, an okay implementation intended for junior-level Blub programmers, but with enough flaws and warts to make seasoned developers go “What were they thinking?!”

    Specifically, goroutines still allow for shared mutable state and are still susceptible to data races. If you make a rule to only use channels to communicate between goroutines, then you should be all right — except channels are slow enough that for any performance-sensitive application you will just want to use mutexes anyway, negating the benefits of using channels. By contrast, Erlang processes are truly shared-nothing; the only way to pass information from one process to another is via Erlang’s messaging system. And Rust’s borrowing rules ensure the lack of data races by making them something checked for at compile time.

    Compared to Erlang and Rust, Go’s concurrency story is… pretty sad, and certainly not a model that should be adopted by other programming languages. As with everything else about Go, we can do better than what Go’s concurrency model provides.
