Flight of the reposturgeon!

I haven’t posted a reposurgeon release announcement in some time because there hasn’t been much that is very dramatic to report. But with 3.44 in the can and shipped, I do have an audacious goal for the next release, which may well be 4.0.

We (I and a couple of my closest collaborators) are going to try to move the reposurgeon code to Go.

I’ve been muttering about trying this for a while, because while Python is an excellent tool for a lot of things, speedy it is not. This is a particular pain in the ass on really large repository moves; I’m still trying to get GCC over to git and am severely hampered by test cycles that take a minimum of nine hours or so.

Yes, I said nine hours, and that’s on the Great Beast’s semi-custom hardware specifically designed for the workload. I think there’s a realistic prospect that a Go implementation could cut an order of magnitude off that.

Still, until a couple of days ago my speculations were more wishes than plans. We’re talking 14KLOC of algorithmically dense Python, some of which came in from cometary contributors and which I don’t necessarily understand very well myself. The labor load of hand translation – and worse, the expected defect rate – made the project impractical.

What changed everything is that Google has the same problem I do on a much larger scale – that is, piles and piles of underperforming Python needing to be moved to a language with compiled speed. Thus, the Go language (of which I am already a fan) and now…grumpy.

Yes, it’s a Python to Go source-code translator. It’s hedged around with warnings about a few unsupported features and many missing pieces of the Python standard libraries. Still…I think it’s a realistic path forward. If I have to write some of the missing library support myself, that’s not a big deal compared to moving a much larger block of deeply intertwingled code by hand.

And if I end up improving the translator, where’s the downside?

39 comments

    1. >Did your OS upgrade get you to having Golang v1.9 as a standard package?

      No – only 1.8 IIRC. But 1.10 is available as a PPA. I’m using it successfully now.

  1. If you haven’t already, you should look into Cython. It’s not a magic bullet: without using its syntax extensions you’re unlikely to get a dramatic speedup. In order to use it effectively, you need to know what parts of your code are performance-critical (which I’m sure you already do), and to understand exactly what the syntax extensions do at the C level so you can judge when and where to use them. Then you’ll be able to achieve much more. (Example)

    It also has the major advantage of being highly backwards-compatible with Python, meaning the 95% of your code which is *not* performance-critical can simply be left untouched.

    1. >If you haven’t already, you should look into Cython.

      Tried it. Didn’t get a significant speedup. Dumped it for pypy, which did improve things.

      1. As I said, you won’t get a significant speedup if you just run it on an unmodified Python source base. It does take manual annotation of your code to get the real benefits, and you have to refrain from using some of the more dynamic features of Python in performance-critical areas. But I expect that would probably still be less work than moving to a whole different language.

        For example, say you’re dealing with large numbers of objects of the same class. Cython would let you pre-define the members of that class, and then accesses to those members (still using normal Python syntax) would be compiled into accesses to a fixed-size C struct, requiring no run-time name resolution at all.
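
        To make that concrete: pure Python can already pin down a class’s member set with __slots__, which eliminates the per-instance dict (though attribute access still goes through the interpreter); Cython’s cdef classes push the same declaration all the way down to C struct fields. A minimal pure-Python sketch, with a hypothetical Commit class standing in:

            class Commit:
                # Fixed member set: no per-instance __dict__, so less memory
                # and no dynamic attribute creation on these objects.
                __slots__ = ("mark", "parents", "comment")

                def __init__(self, mark, parents, comment):
                    self.mark = mark
                    self.parents = parents
                    self.comment = comment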

        Do you still believe that the bottleneck is in “the object allocator and GC”? If so, what reason do you have for thinking that Go would be any faster?

        1. >Do you still believe that the bottleneck is in “the object allocator and GC”

          I later became more doubtful about that. Obviously most of the object allocation takes place at repository load time. If object-allocator overhead were dominating, we would expect reposurgeon to zip through the surgery parts much faster, which doesn’t seem to be the case. GC overhead still possibly dominates.

          My real reason for believing Go speeds up code originating in Python is economic. If it didn’t, Google would probably not be investing in Go to the extent they clearly are – and they’d have made more effort to keep Guido and the other Python core devs in-house.

        2. In addition to Eric’s reply, Go still offers more options to deal with object allocation and GC than Python does. You can put things in structs, arrays, etc., and you’ve got a concept of “stack-local” things that don’t even count against your GC. You have machine ints and can do bitpacking at CPU speeds instead of Python interpreter speeds. You could also, if you were in a feisty mood, tune the knobs and turn off the GC at some points and/or run it manually, though that would be reserved pretty much for “Eric is converting a specific repo”; I wouldn’t ship it in general for anybody. (If you know enough to do that, you know enough to add it to the source yourself.)

          It’s still fundamentally a GC language and you can absolutely hit a wall where that’s a problem, but you’ve got a lot more runway than you do in Python before that’s your biggest problem and you’re out of easy things to try.

          1. Is GC really a problem for non-interactive programs? I thought the main problem with GC is that the latency spikes can cause a program to stop responding; you clearly don’t want that in a game or web browser. Other than that, I think that some garbage collectors have very high throughput, but I’m not sure if that’s true for Go.

  2. IIRC, ESR was already using pypy – after Julien Rivaud and he got narked about reposurgeon’s slowness on large-at-the-time repos (e.g. Blender), the pair of them dug through the code and cleaned out well-hidden O(N ** 2) traps (see http://esr.ibiblio.org/?p=4861 for the details).

    Before that case of ESR SMASH!, pypy didn’t make a noticeable speed difference, but after the pair of them were done, it did. Our host did mention a possible interest in figuring out what enabled that speedup by bisecting their changes, but I’m not sure if that ever happened.

    Or is reposurgeon’s Jupiter cliff turning out more intractable than you had feared, ESR?

      1. Possibly bad choice of wordage on my part.

        I was referring to something that, until gcc, had been lurking in the profiling noise and hadn’t been enough of a problem to be noticed. Then on gcc, it blew up from nothing into something the rough size of Jupiter – and from what I grok, over a comparatively short range of repo sizes.

        The brick-to-the-face subtle hint that something had gone sideways was 6-9 hour test runs on a machine built specifically for the job.

  3. Are you expecting to have grumpy do the translation once and maintain it in Go from thence forward? Or is Python going to be the canonical source and the grumpy output just a convenient intermediary?

    1. >Are you expecting to have grumpy do the translation once and maintain it in Go from thence forward? Or is Python going to be the canonical source and the grumpy output just a convenient intermediary?

      The former. The translation process is far too cumbersome to make the latter practical even if I thought it were desirable.

  4. To what extent has this been profiled? Humans are notoriously bad at diagnosing where performance problems actually exist. GC overhead seems a likely culprit, but is this known for sure? My experience with performance problems is that they are almost always isolated to a specific section of code. Even GC-related problems are usually a single section of code going crazy with allocation.

    If this is the case, then two less radical solutions would suggest themselves. First, do some sort of caching or reduction in allocation inside Python itself. Second, isolate the problem area and make a small C module that optimizes away the problems. Both would leave you with your original (working) codebase.

    1. >To what extent has this been profiled?

      Extensively. There’s a lot of internal instrumentation for that purpose: Julien Rivaud and I have used it to do successful speed tuning.
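
      (For readers who want to poke at this themselves: stock CPython profiling is the zero-setup way in. A minimal sketch, with a hypothetical do_surgery() entry point standing in for a real reposurgeon operation:)

          import cProfile, pstats

          # Profile one hot operation, then print the 20 worst offenders
          # by cumulative time.
          cProfile.run("repo.do_surgery()", "profile.out")
          pstats.Stats("profile.out").sort_stats("cumulative").print_stats(20)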

      >First, do some sort of caching or reduction in allocation inside python itself.

      Heh. Yeah, we’ve already done that. Take a look at memoize_iterator(), for example.
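
      For anyone following along at home, the general shape of that trick is below – a sketch of the pattern only, not reposurgeon’s actual memoize_iterator():

          class MemoizedIterator:
              # Materialize an expensive generator's output on the first
              # traversal, then replay the cached list on every later one.
              def __init__(self, generator_fn):
                  self.generator_fn = generator_fn  # zero-argument callable
                  self.cache = None

              def __iter__(self):
                  if self.cache is None:
                      self.cache = list(self.generator_fn())
                  return iter(self.cache)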

      >Second, isolate the problem area and make a small C module that optimizes away the problems.

      Ain’t gonna happen. C’s type ontology isn’t strong enough to get real traction on the problem domain.

      1. then i’m confused as to why you don’t seem to be clearer on what part of the code is slow, and how you expect go to help speed it up. hopefully it’s just a communications SNAFU?

        1. >then i’m confused as to why you don’t seem to be clearer on what part of the code is slow, and how you expect go to help speed it up. hopefully it’s just a communications SNAFU?

          Having smashed out a bunch of O(n**2) loops, memoized and optimized to a fare-thee-well, and put in all kinds of reverse links for lookup optimization, the major remaining possibility is that Python is just freaking slow. I mean, it’s possible we’ve missed some hidden O(n**2) operation, but it doesn’t look that way – I know what it’s like to smell that in my profiling figures, and except for one apparently unavoidable spike in the Subversion repository loader I no longer get that smell.
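
          To make “reverse links” concrete, the transformation is the classic one sketched below (illustrative only; commit objects are assumed to carry a parents list):

              # Before: O(n**2) -- every child lookup rescans the whole commit list.
              def children_of(commit, commits):
                  return [c for c in commits if commit in c.parents]

              # After: build the reverse links once in O(n); each lookup is then O(1).
              def build_child_index(commits):
                  index = {}
                  for c in commits:
                      for p in c.parents:
                          index.setdefault(p, []).append(c)
                  return index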

          Python being slow is not exactly breaking news. It’s even well understood why. If you drive any scripting language as hard as I’ve been doing, the interpretation overhead is eventually going to become a blocker against further speedups. Mind you, that will almost never happen before you do algorithmic tuning, but in this case we have.

          Moving to a compiled language should never be a first resort, but when you’ve exhausted the alternatives it’s going to have to be your last.

      2. Just to be clear, my assumption was that you probably had profiled. I’ve done mostly performance work for the past 5 years or so and profile first has become second nature to me. But, I still occasionally forget it, rush into “fix” the problem, and end up wasting my time fixing nothing. It’s those times that I wish someone would have reminded me of something I already knew but was too clever by half to remember.

  5. by all means convert your codebase if you feel like it; even if it does nothing else, it’ll probably help give the world a somewhat better python-to-go converter, which can only be to the good.

    but the hesitant and uncertain phrasing you use when describing where you think the current code’s performance issues lie makes me strongly suspect it won’t give us a faster reposurgeon, except maybe by accident. if you really want to make it faster, i suspect you’d be better advised to first profile out where, when, and how it is slow. merely converting it to another language will only speed it up if the other language by coincidence happens to be quicker at whatever slows your current code down; even if it is, you’ll not have gained any understanding that way.

    1. >if you really want to make it faster, i suspect you’d be better advised to first profile

      Sigh. I wish I had a nickel for everyone who just assumes I haven’t already done this.

      Dude, I didn’t just fall off a turnip truck! I’ve even described the profiling and its results in previous blog entries.

      1. I find it amusing that people assume the author of TAoUP wouldn’t be wise about this.

        Darn kids!

        1. >Darn kids!

          I feel it incumbent on me to be patient with them. “Get off my lawn!” is uninstructive to the youth; besides (and thankfully) I have not yet achieved the degree of geezeritude required to carry it off convincingly.

            1. Stack Exchange suddenly realizes it isn’t welcoming to newbs? They have a policy that requires you to have a certain amount of Reputation before you can post a comment, but not to post an answer, which leads people to post comments as answers, only to be told they should post it as a comment instead (which they can’t do). I used to reply to those people saying how unproductive it is to tell someone to do something they can’t do, but all it did was get my comments down-voted and deleted, because “this is not the place” (but somehow it was the place for the comment I’m reacting to?).

              So all I can do is laugh that they think they need to be more welcoming, especially to Womyn, Persons of Color, and Other Marginalized People.

          1. Have you tried outputting some useful debug info to stdout?

            It’s a great technique, you should give it a try.

            *dons kevlar pants and runs like a bastard for the hills*

                1. >If it works in Quake… ;)

                  Well, yeah. But on the other hand, I’m a pretty good shot – and I train specifically to point-shoot active targets at varying ranges, because that’s what you do if you’re building capability for self-defense against multiple attackers and aren’t a fool.

                  On the gripping hand, I have no desire to shoot you whatsoever. :-)

  6. Are you hoping that the language change will make apparent an O(n²) algorithm which Python syntax masks, or is this mostly “reduce the furshlugginer constant term”?

    1. This is mostly “reduce the furshlugginer constant term”.

      If some O(n**2) operation pops out after this reduces the profiling noise floor, I’ll take it as a bonus.

  7. What happens when you run reposurgeon on a current PyPy? The code base seems like an ideal candidate for the kind of optimizations PyPy does: it’s (mostly) pure Python and it contains heavy loops over in-memory data. Given the strides PyPy has made in the last couple of years, I would expect it to do at least a decent job on the reposurgeon hot loops.

    1. >What happens when you run reposurgeon on a current PyPy?

      That’s my production setup now; I have previously estimated a minimum 2:1 speedup over CPython. For a while I thought that was making migration to a compiled language look unnecessary, but my test cycle times continue creeping up as I add more operations.

      What I think now is that we’ve reached a point where even a single commit traverse is costly, and that operation is not the kind of thing that JITs well.

  8. Really dumb question here, ESR.

    I know this isn’t generally applicable, but are there any points in the GCC history where you can cleanly separate it and thus convert two (or more) sub-histories, then weld them back together when you’re done?

    1. >I know this isn’t generally applicable, but are there any points in the GCC history where you can cleanly separate it and thus convert two (or more) sub-histories, then weld them back together when you’re done?

      When you have a repo with as many branch & merge operations as GCC’s, finding such a cutpoint is impractically hard. The cut operation itself would be trivial, if one knew where to apply it.

  9. How do you intend to regression test the conversion from Python to Go? Do you have a set of test repositories that will need to come out the same after being run through both the original Python code and the new Go code? Another method?

    1. >How do you intend to regression test the conversion from Python to Go?

      I have an extensive set of regression tests.
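
      The mechanics are the obvious golden-file kind; a toy sketch of the idea, with hypothetical binary names and invocation (not the actual test rig):

          import subprocess

          def regress(script_path, golden_path):
              # Feed the same surgical script to each implementation on stdin
              # and demand byte-identical output against the golden file.
              golden = open(golden_path).read()
              for binary in ("reposurgeon", "goreposurgeon"):  # hypothetical names
                  with open(script_path) as script:
                      out = subprocess.run([binary], stdin=script,
                                           capture_output=True, text=True).stdout
                  assert out == golden, "%s diverged on %s" % (binary, script_path)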
