A low-performance mystery, part deux

Well, the good news is, I get to feel wizardly this morning. Following sensible advice from a couple of my regulars, I rebuilt my dispatcher to use threads allocated at start time and looping until the list of masters is exhausted.

78 LOC. Fewer mutexes. And it worked correctly the first time I ran it. W00t – looks like I’ve got the hang non-hang of this threads thing.

The bad news is, threaded performance is still atrocious in exactly the same way. Looks like thread-spawn overhead wasn’t a significant contributor.

In truth, I was expecting this result. I think my regulars were right to attribute this problem to cache- and locality-busting on every level from processor L1 down to the disks. I believe I’m starting to get a feel for this problem from watching the performance variations over many runs.

I’ll profile, but I’m sure I’m going to see cache misses go way up in the threaded version, and if I can find a way to meter the degree of disk thrashing I won’t be even a bit surprised to see that either.

The bottom line here seems to be that if I want better threaded performance out of this puppy I’m going to have to at least reduce its working set a lot. Trouble is, I’m highly doubtful – given what it has to do during delta assembly – that this is actually possible. The CVS snapshots and deltas it has to snarf into memory to do the job are intrinsically both large and of unpredictably variable size.

Maybe I’ll have an inspiration, but… Keith Packard, who originally wrote that code, is a damn fine systems hacker who is very aware of performance issues; if he couldn’t write it with a low footprint in the first place, I don’t judge my odds of second-guessing him successfully to be very good.

Ah well. It’s been a learning experience. At least now I can say of multi-threaded application designs “Run! Flee! Save yourselves!” from a position of having demonstrated a bit of wizardry at them myself.

UPDATE: One of my regulars found a minor bug in the mutex handling that cost some performance. Alas, fixing this didn’t have any impact above the noise level of my profiling. Also, I managed to unify the threaded and non-threaded dispatchers; the LOC specific to threading is now down to about 30.

114 comments

    1. >Never trust code that runs first time.

      Skepticism is reasonable, but I had this instrumented enough that I could tell it was doing the right things (not just spuriously appearing to) from the logging.

  1. “Run! Flee! Save yourselves!”

    The problem is that individual CPU cores aren’t getting substantially faster any more. No more doubling every 18 months. We’re getting more cores, lower wattage cores and more cache per core. If you need to increase performance, in practice it means that you’re going to be multi-threading where possible and practical. Telling people to run and flee from multi-threading is much like telling people to run and flee from that internal combustion engine because they break down more often than a horse.

  2. I know almost nothing about threading, but if locality is the issue, might a custom memory allocator help things? I.e., give each thread its own block of memory?

  3. The other thing is… if you’ve been benchmarking this on a dual-core system, that probably means you have a relatively small, simple cache and slowish memory. If you try it on a hexacore machine, you might get completely different results due to the huge 3-layer cache and more memory bandwidth, and different results again on a 32-core server-grade Xeon due to the different balance of those same factors.

    I think to really understand this, you need to have a more modern system to run it on.

    1. >I think to really understand this, you need to have a more modern system to run it on.

      You may well be right. I’ve been kicking around the idea of upgrading anyway, because my procedure for testing the Emacs repository conversion recipe takes forever to run. Investigating this might be enough additional excuse – I do, in fact want to see how the behavior changes on beefier hardware.

  4. I’m going to posit another possibility: You’re running this code on a reasonably modern Intel CPU*.

    A reasonably modern Intel CPU which features “Turbo Boost”. Which means that when your second core sits idle, the first core gets to clock higher.

    To isolate this effect: what impact on performance do you get when you run the single threaded variant with the other core used up by something CPU bound (say, int main() { for(;;); })?
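
    Something like this should do it (just a sketch – the core numbers and repo path are illustrative, and it assumes taskset from util-linux):

    # keep core 1 saturated so Turbo Boost can't shift its thermal budget
    taskset -c 1 sh -c 'while :; do :; done' &
    busy=$!
    # run the single-threaded conversion pinned to core 0
    time sh -c "find . -name '*,v' -print | taskset -c 0 cvs-fast-export -p >/dev/null"
    kill $busy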

    (* AMD have a similar feature; I’ve forgotten the name of it)

  5. BTW. do you need global counter for blobs? Perhaps this global read is what destroys locality…

    1. >BTW. do you need global counter for blobs? Perhaps this global read is what destroys locality…

      Unlikely. I’m using an atomic test-and-increment, so the worst case is the VM has to keep one additional page swapped in.

      I did think about this. Contemplated replacing the serial counter with an MD5 hash of the file content. Might do it yet, but I doubt it will solve the larger problem.
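
      For concreteness, the shape of what I mean (a sketch, not the actual cvs-fast-export code; it assumes GCC’s atomic builtins and the names are invented):

      /* Shared blob-id source. __sync_fetch_and_add returns the old value
         and bumps the counter atomically, so no mutex is needed. */
      static long blob_serial;

      static long next_blob_id(void)
      {
          return __sync_fetch_and_add(&blob_serial, 1);
      }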

  6. > Telling people to run and flee from multi-threading is much like telling people to run and flee from that internal combustion engine because they break down more often than a horse.

    Parallel programming is not the same as multithreading. Lots of parallel problems can be solved by just using multiple processes communicating over pipes. If there’s too much start-stop overhead, you step up to using the C-based fork()/wait() for starting other processes and synchronizing with them.

    If that’s still not enough, use message-passing APIs – these are often architecturally-preferable to shared-memory concurrency. If you really need the latter, consider using software transactional memory, which is quite possible if you have language support for it, as in Haskell. If even that’s not appropriate for your needs, _then_ you can use locking, or even fancier techniques.

    And all of that only applies _if_ the particular problem you’re dealing with hasn’t been encapsulated in an external library.

  7. This should be orthogonal to your performance problems, but FYI… I’ve done a fair amount of multi-threaded programming, and my experience is that the POSIX C thread API basically sucks for getting application level work done; it is too low-level.

    The Tcl Threads Extension, which layers on top of POSIX (or Windows), is excellent. In particular it makes each Tcl thread effectively its own PROCESS, with explicit APIs for communication with other threads. Only at the C level do you see the underlying shared-everything semantics. Presumably other languages have their equivalents. (Including C, although I have not looked at those.) OpenMP works differently, but might be quite appropriate for your use case.

  8. The standard system Linux malloc is pretty good, but the more threads you use, the more likely it is to become a problem. It’s workload dependent of course, but from these nice (2012) empirical graphs, tcmalloc looks like a clear winner in both speed and memory use:

    https://next-scripting.org/xowiki/docs/misc/thread-mallocs/index1

    More importantly for you, tcmalloc MIGHT drastically improve your cache locality; it should be designed for that.
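
    Trying it is cheap, since no recompile is needed – assuming your distro ships the library, something like this (the .so path varies by distro, and -t 2 is just an example thread count):

    # preload tcmalloc for a single benchmark run
    LD_PRELOAD=/usr/lib/libtcmalloc_minimal.so.4 \
        sh -c "find . -name '*,v' -print | cvs-fast-export -p -t 2 >/dev/null"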

    However, it sounds like you’re not sure yet whether your multi-threaded code triggers contention for disk IO, which could easily be orders of magnitude worse. There must be better ways to determine that, but simply watching the output from “iostat” while your program runs may give you an idea.

    1. > There must be better ways to determine that, but simply watching the output from “iostat” while your program runs may give you an idea.

      That sounds useful, but reading the man page doesn’t clue me in to what stats might indicate that the I/O subsystem is thrashing. What would you look for?

  9. Instead of a reduce step over the output of N workers, which will blow up your cache if N is large enough, can you have a logarithmic number of reduce steps with the outputs of 2 workers at a time, which might better preserve locality?

    Also, instead of parallel threads, you might distribute the work across many machines, which can each preserve their own locality — if you have enough CPU cores, you will be bandwidth-limited, but your network bandwidth can be much higher than a single computer’s disk bandwidth (and even more so if you can afford better network topologies — or rent time on Google’s datacenters).

  10. Back in 2010, BSD-hacker Poul-Henning Kamp wrote a nice little article on how modestly tweaking his data structures to use a b-heap rather than the “optimal” binary heap gave Varnish a 10x speedup – due to memory and cache locality:

    http://queue.acm.org/detail.cfm?id=1814327
    You’re Doing It Wrong; June 11, 2010

  11. Performance should increase until the number of threads equals the number of cores. Then it will start decreasing due to the overhead of switching and the loss of locality. This actually sounds more like a problem best tackled using map-reduce on a large number of cloud instances.

  12. I think there is a way to disable the cache on the processors, so you could test it to see if threading is faster than no-thread if the cache is off.

  13. Multithreading is a very specific tool. The problem is typically that it is misused or overused.

    When adapting the flash camera input on the Nokia n8x0, Flash only understood V4L1 and the camera only did V4L2, so I had to write a shim. The reads and writes had problems if I made them blocking, so I had to use threads: when Flash opened the far end of the pipe to read the camera, the thread doing the actual camera read would resume, and the data would flow until the write to the Flash pipe closed. That was the last big thing I really needed threads for within a program, but it was to fix some incompatibilities.

    This was one of the classic uses – to adapt between synchronous and asynchronous actions.

    In the embedded realm, you usually get “tasks” which are usually not quite either processes or threads.

    Threads are a dynamic resource like mallocs – you can get strange things if you don’t check every state and path. Threads deadlock, and mallocs can create memory leaks if you aren’t careful.

    And subtle things – you can kill a thread, but its resources remain allocated until you specifically do a join.

  14. Something that may be helpful from an architect’s problem-solving perspective rather than someone who’s down in the functions… abstract and reframe.

    Don’t reduce the working set… pipeline, conditional workstream, and chunk it (not necessarily in that order).

    Think of it as a latency sensitive packetized stream.

    If necessary, include an intelligent workload manager and dispatcher thread.

    Also, you may be doing this already, but don’t forget to presume a larger main memory space and higher-performance IO into that memory space than baseline C assumptions allow. Use that facility, and optimize for it.

    Even lean and mean code meant for embedded systems has a larger main memory space and faster primary IO than our conventional assumptions allow for. Yes the wait states and transfer overheads stack and multiply… nastily sometimes… but where historically we optimized with smaller transfers, smaller chunks, and lower footprints in main memory etc… that is generally no longer the most efficient method for moderate dataset sizes, or small datasets which can be sensibly related and batched.

    Anything we can do to reduce the wait-state penalties and transfer overheads, and bring more of the workload closer to the workload execution, helps.

    1. >Don’t reduce the working set… pipeline, conditional workstream, and chunk it (not necessarily in that order).

      Probably not practical here. An important blocking constraint is that neither I nor anyone else understands the central branch merge algorithm. This limits the kind of surgery that can be done on the code.

  15. To explicitly state what’s been hinted at above: sometimes an algorithm that’s hyper-optimized for serial processing trips all over itself when you attempt to parallelize it. A high-level algorithm that sounds like a bad idea may actually be a good idea if it can be broken up.

    I’m looking, but I don’t understand the cvs-fast-export algorithms enough to say anything more specific.

  16. I haven’t seen any comments pointing to this, but one mentioned you were testing on a dual-core system… which suggests this is an important consideration. When doing thread-based programming on a CPU-bound algorithm, the number of threads can matter a lot. On Intel systems, hyperthreading also comes into play (Intel CPUs with hyperthreading report twice the number of cores that they actually have, because they duplicate some of the registers for fast switching between threads).

    If you really are running on a dual-core CPU with CPU-bound processes, you’re not going to see much benefit from threading unless you limit the number of threads to 2-4. Personally I’d run with three; one to load the next workset into memory in your main thread, and 2 worker threads to process the results.

    Maybe you’re doing this already — but it wasn’t spelled out explicitly, so I figured I would.

    1. >Personally I’d run with three; one to load the next workset into memory in your main thread, and 2 worker threads to process the results.

      That’s what I’ve been doing.

  17. >You may well be right. I’ve been kicking around the idea of upgrading anyway, because my procedure for testing the Emacs repository conversion recipe takes forever to run. Investigating this might be enough additional excuse – I do, in fact want to see how the behavior changes on beefier hardware.

    Might I suggest http://info.sgi.com/uv_20/ ?

    A bit much to carry around, but get someone to stick it in a server room for you, and since you do almost everything on the command line anyway…

    Several years ago I was working in a research lab in [censored] where we had a couple of SGIs – a UV100 and a UV1000.

    It turns out that if you don’t write your code properly, and don’t set up your jobs properly, you CAN get a 1 million dollar machine with 640 cores and 7 terabytes of memory to run slower than an HP DL380 G5.

    At least in the 2.4 and 2.6 kernels, the kernel could scatter memory around quite a bit and wouldn’t “stick” a process to a specific CPU (when memory is 2-3 numalink hops away, moving memory to and from the CPU doing the work can be expensive). There were some tools developed to solve these problems.

    I’ve done a *slight* bit of multithreaded programming in Python, mostly for running stuff on or doing stuff to lots of computers at once.

    Turns out that python’s threading (at least in 2) isn’t really optimal for performance, but when you’re establishing network connections and opening SSH sessions it helps a lot.

  18. William…

    I do a lot of work with very large working set optimization, from both an architecture and infrastructure perspective (not so much with the direct coding side)… and yeah, it really is all about intelligent workload management and optimization.

    I’ve been able to get more performance out of a single Sun T1000 that had been written off (so we could experiment with it freely) than we were able to get from a whole cluster of bigmem, supposedly ADW- and OLTP-optimized machines, simply by managing the workloads INTELLIGENTLY.

    The vendors were talking about putting in entire machines full of memory and SSDs just to cache, and to try spreading the workload even broader into smaller systems… for, of course, vastly greater sums of money…

    It was clear to my team that while doing so would improve throughput, it would be a… shall we say suboptimal solution at best?

    We would have machines that were running at 8-12% utilization, that just absolutely would not push through more work, because their workload characteristics were in the coffin corner of their infrastructure characteristics.

    Throwing hardware at the problem stops working when the inefficiencies and intersections stop adding or multiplying, and start raising themselves to the power of their own exponents.

    Once we restructured the workloads and put better… well, really ANY workload management in place we saw a 2400% workload throughput improvement IMMEDIATELY. Eventually we managed to get a 14,000% advantage (yes, we were processing 140 times as much workload).

  19. Oh and if you’re testing on VMs can you do some testing on bare metal as a baseline comparison?

    Even on a lightly laden, or basically unladen VM infrastructure, you may be having issues with context handling, thread management and IO management on the VM that you wouldn’t have on bare metal.

    Doesn’t mean you shouldn’t code a solution that works well on VMs… it would just give you an indicator of where and how you can effectively refactor to better match the workload handling to the performance characteristics of the platform.

    1. >Oh and if you’re testing on VMs can you do some testing on bare metal as a baseline comparison?

      I’m not using a VM.

  20. > An important blocking constraint is that neither I nor anyone else understands the central branch
    > merge algorithm. This limits the kind of surgery that can be done on the code.

    Yeah… that’s going to limit your options pretty severely. You’d have to build a higher-level packetizing and routing subroutine without being able to do predictive optimization… you’d just be moving your cache miss penalties further down the chain, so there would be fewer of them but the penalties would be an order of magnitude longer.

  21. “Throwing hardware at the problem stops working when the inefficiencies and intersections stop adding or multiplying, and start raising themselves to the power of their own exponents.”

    The “mythical man-month” has apparently reached into the hardware runtime as well, not just programming time.

  22. Read up on how typical cache coherency protocols work. CPU caches are most efficient with either: a) read-only access patterns (all CPUs can cache it) or b) exclusive patterns (only one thread ever writes to a location).

    Worst case is when all cpus are trying to write to the same cache line.

    Watch out for false sharing – say you have int foo[N], and thread 0 only touches foo[0], thread 1 only touches foo[1]. Result: sadness: their cpus end up fighting over the cache line containing both foo[0] and foo[1]. Padding things to multiples of the cache line size can help.

    Good cache-friendly memory access:
    – memory touched by only one thread ever.
    – memory that is 99%+ read-only and shared between cores.

    Watch out for false sharing – you may need to pad array elements to cache-line-sized chunks so that CPUs don’t end up fighting over them.

    Atomic operations help less than you might like if multiple cpus are pounding on the same cache line. A good pattern is to have per-cpu or per-thread sharding; see Bonwick’s second allocator paper: https://www.usenix.org/legacy/event/usenix01/bonwick.html

    Another good scalability pattern: have per-thread counters, and only add them up cross-thread when you actually need to know the global sum.
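
    A sketch of both ideas in C (the names, MAX_THREADS, and the 64-byte line size are assumptions – check your CPU’s actual line size):

    #define CACHE_LINE 64           /* typical x86 line size; verify for your CPU */
    #define MAX_THREADS 16

    /* One counter slot per thread, padded and aligned to a full cache line
       so two threads bumping their own counters never share a line. */
    struct padded_counter {
        unsigned long count;
        char pad[CACHE_LINE - sizeof(unsigned long)];
    } __attribute__((aligned(CACHE_LINE)));

    static struct padded_counter counters[MAX_THREADS];

    /* Each worker touches only counters[self].count; the global total is
       computed only when somebody actually asks for it. */
    static unsigned long total_count(int nthreads)
    {
        unsigned long sum = 0;
        for (int i = 0; i < nthreads; i++)
            sum += counters[i].count;
        return sum;
    }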

  23. Note that, strictly speaking, you don’t need to _pad_ your data; you can simply interleave it so that things that are most likely to conflict are spread across different cache lines. But yes, that’s something to think about.

  24. Eric, can you spell out how you’re invoking cvs-fast-export to benchmark/profile it?

    1. >Eric, can you spell out how you’re invoking cvs-fast-export to benchmark/profile it?

      #!/bin/sh
      #
      # Profile, with or without thread options
      #
      while getopts t: opt
      do
          case $opt in
      	t) opts="$opts -t $OPTARG" ;;
          esac
      done
      shift $(($OPTIND - 1))
      
      echo "find $1 -name '*,v' -print | cvs-fast-export -p $opts >/dev/null"
      find $1 -name '*,v' -print | cvs-fast-export -p $opts >/dev/null
      

      As previously noted, my benchmark repos are the groff and robotfindskitten repositories mentioned on cvssync.asc.

  25. > Note that, strictly speaking, you don’t need to _pad_ your data; you can simply interleave it so that things that are most likely to conflict are spread across different cache lines. But yes, that’s something to think about.

    So threads 0 and 1, on the same CPU, chip or socket, access foo[0] and foo[N] (with N greater than the cache line width) and not foo[0] and foo[1].

  26. Question for you: did you graph the performance reduction against the number of threads? Assuming you have a fairly new processor there will be a lot of on silicon cache memory. So if it really is a cache problem I’d suggest that you’d see a 1/x type performance graph as the number of threads start overusing the fast caches.
    My experience (and I’ve written a lot of multi-thread code) is that it is usually caused by one or two lines colliding for some reason, and so I’d suggest both the above and profiling it down to the line of code rather than just the function. You might have done that already, but just an idea.

    1. >Question for you: did you graph the performance reduction against the number of threads?

      Dual-core machine, so the plot wouldn’t be very interesting. Maybe when I upgrade to a quad-core…

    1. >If you haven’t read Ulrich Drepper’s What Every Programmer Should Know About Memory, now would be a good time.

      Read it when it came out. Could stand to re-read it, probably.

  27. >Skepticism is reasonable, but I had this instrumented enough that I could tell it was doing the right things (not just spuriously appearing to) from the logging.
    Dumb question, but you’re not including that instrumentation in your normal release builds, right?

    1. >Dumb question, but you’re not including that instrumentation in your normal release builds, right?

      It was conditioned out. Now it’s gone entirely.

  28. @William O. B’Livion:

    > Turns out that python’s threading (at least in 2) isn’t really optimal for performance, but when you’re establishing network connections and opening SSH sessions it helps a lot.

    It’s no better in 3. The global interpreter lock ensures that only one thread runs at a time. CPython threading is really only useful if you have I/O-bound threads.

  29. Dumb question time, Eric…

    “An important blocking constraint is that neither I nor anyone else understands the central branch merge algorithm.”

    Perhaps it’s time to rethink using that algorithm entirely, then? Could it be that simplifying and letting the compiler optimize the code might produce results that you can actually understand?

    1. >Perhaps it’s time to rethink using that algorithm entirely, then?

      You first. :-(

      CVS-lifting logic is nasty. The problem is tricky, the edge cases are wicked. Code that has been proven to work over a wide range of real repositories is hard-won and not lightly to be discarded.

    1. >In the 64 bit age, I have generally switched to using mmap on everything rather than stdio, if that helps.

      Already done. That’s one of the optimizations David Leonard added.

  30. >>Question for you: did you graph the performance reduction against the number of threads?
    > Dual-core machine, so the plot wouldn’t be very interesting. Maybe when I upgrade to a quad-core…
    I have SSH access to a relatively underused 16-core (plus hyperthreading) machine at my University. I can basically use it for any non-commercial purpose as long as I don’t get too obnoxious, otherwise the sysadmin will kill it. Do you have a test that runs single-threaded in 10-20 seconds or so? I might be able to get some data for you.

    1. >Do you have a test that runs single-threaded in 10-20 seconds or so?

      The robotfindskitten repo converts that fast. If your indulgence extends to 90 seconds, we can test groff as well.

  31. Doesn’t compile on that machine. It had GNU Bison 2.4.1, which obviously wasn’t new enough, but even with a local copy of Bison 3.0.2 it wouldn’t work:
    bash-4.1$ BISON=~/bin/bin/bison make
    /nfs/stak/students/e/eliaserm/bin/bin/bison --defines=gram.h --output-file=gram.c gram.y
    flex --header-file=lex.h --outfile=lex.c lex.l
    cc -Wall -Wpointer-arith -Wstrict-prototypes -Wmissing-prototypes -Wmissing-declarations -Wno-unused-function -Wno-unused-label -Wno-format-zero-length -pthread -O3 -g -I. -I/nfs/stak/students/e/eliaserm/for-esr/cvs-fast-export/ -DVERSION=\"1.20\" -DTHREADS -DUSE_MMAP -Drestrict=__restrict__ -c -o gram.o gram.c
    In file included from gram.y:22:
    lex.h:118: error: redefinition of typedef ‘yyscan_t’
    cvs.h:727: note: previous declaration of ‘yyscan_t’ was here
    make: *** [gram.o] Error 1
    bash-4.1$

    The OS is RHEL 6.5. Flex is version 2.5.35.

    BTW, here is the patch to add a BISON environment variable:
    diff --git a/Makefile b/Makefile
    index 55cc734..3448b7d 100644
    --- a/Makefile
    +++ b/Makefile
    @@ -69,8 +69,10 @@ cvs-fast-export: $(OBJS)

    $(OBJS): cvs.h

    +BISON ?= bison
    +
    gram.h gram.c: gram.y
    - bison --defines=gram.h --output-file=gram.c $(YFLAGS) $<
    + $(BISON) --defines=gram.h --output-file=gram.c $(YFLAGS) $<
    lex.h lex.c: lex.l
    flex $(LFLAGS) --header-file=lex.h --outfile=lex.c $<

  32. Got it compiling. Here is the patch:
    http://pastebin.com/JiNTVViA
    Note: this may not be suitable for merging upstream, this patch is simply what I needed to work around the quirks of the Bison version I compiled.

    In addition to the BISON variable I added, I also had to invoke make with LIBS=-lrt for the clock_* functions.

    If all else goes well, I should be able to get some numbers now.

    1. >Got it compiling. Here is the patch:

      I’ve pushed port changes that should be equivalent to the repo. With any luck you won’t have to hand-hack anything on your next go-round.

  33. @esr: “CVS-lifting logic is nasty. The problem is tricky, the edge cases are wicked. Code that has been proven to work over a wide range of real repositories is hard-won and not lightly to be discarded.”

    I can understand why you wouldn’t want to discard code that has been proven to work. But right now it’s an indigestible lump of black magic that causes problems because you don’t understand it, and the optimization you need may have to be done on what it does, and not in the stuff that encapsulates it.

    (And with absolutely no disrespect intended to Keith, I suspect code that is so algorithmically dense that even the author no longer understands it is a side-effect of crafting something that runs efficiently on older hardware with less resources, and were he to write it today it might look rather different because there *is* more hardware to play with. The fact that hardware is more powerful is why you can develop mostly in Python now and not have to resort to C to get adequate performance. These days, I’d err in favor of code easily read and understood even if it isn’t as efficient as possible, because hardware is cheap and developer time isn’t.)

    Rather than trying to understand the code, I’d start at an earlier stage. What must code that wants to lift from CVS do? What problems is Keith’s code solving?

    I’d also like to see tests of your existing code on more powerful hardware. Right now it’s too slow for certain big jobs, but that’s relative. What may be too slow on your system might be acceptable on another with more horsepower, and the ultimate solution might be running the job on something more powerful.

    1. >But right now it’s an indigestible lump of black magic that causes problems because you don’t understand it, and the optimization you need may have to be done on what it does, and not in the stuff that encapsulates it.

      Actually, I know this isn’t true from profiling. The black magic – stage 2 – is the least expensive part of the computation. It’s complex but very fast. It’s all the I/O pumping in the stage 1 parsing and stage 3 stream generation that’s expensive. I don’t expect to have to touch the stage 2 code at all.

      >These days, I’d err in favor of code easily read and understood even if it isn’t as efficient as possible, because hardware is cheap and developer time isn’t.

      So would I. But there are two reasons cvs-fast-export is a special case:

      (1) The black magic would be brutally hard to replicate. We’re talking almost a Lovecraftian to-look-upon-this-is-to-bleed-sanity situation here. Do not object that it can’t be that bad; I have nibbled at the edges of the problem and I do not want to go in any deeper.

      (2) As the average size of remnant CVS repos rises, the viability of an attempt at replacing cvs-fast-export in a scripting language is actually falling. Its speed is actually a major feature – these conversion runs can take days, and in a slower language that could balloon into weeks.

  34. A few more things to note about the machine I’m testing on:

    My home dir is mounted via NFS, this may slow down disk I/O speed for this test. On the other hand, the 64 gigs of installed RAM (most of which is free) pretty much guarantee that every single solitary file accessed for this benchmark will likely be cached in RAM simply from my having created it. So I don’t actually expect any I/O to happen at all.

    The cache configuration is as follows (inferred from /sys/devices/system/cpu) :
    * 32K L1 data per physical core (shared by two logical cores)
    * 32k L1 instruction per physical core (shared by two logical cores)
    * 256k L2 unified per physical core (shared by two logical cores)
    * 20480k (20 MiB) L3 unified per CPU
    * Dual-socket motherboard with two identical eight-core CPUs
    This adds up to a bit over 40 MiB of total cache, which is monstrously huge.

    Here’s the /proc/cpuinfo as well: http://pastebin.com/vfSkrygC

    Basically, if your program is bottlenecked by either disk I/O or cache misses, those bottlenecks will be pretty much gone on this server, but we won’t be able to tell which bottleneck it was.

    Make check output:
    http://pastebin.com/Q86ifR5w
    I had to add `pwd` to the PATH variable so the tests could find the cvs-fast-export binary I compiled. Note that the tests don’t all pass, not sure if that proves anything here given how much I had to tinker to make it compile/run. Clearly, people who don’t have root were not part of your target audience here. :) I’m using 4fa889454563b0248b66eb076877681c737f846c, for which the commit message claims the tests all passed, so there’s probably something amiss somewhere.

    I decided to proceed with the benchmarking anyway. I found that the robotfindskitten repo converted almost instantly, no matter what I did. However, I did measure a slight slowdown when threading was enabled.
    http://pastebin.com/uvLZDwJN
    (With the caveat that I don’t know if it converted *correctly*, since the regression tests didn’t pass.)

    Wasn’t able to clone groff:
    bash-4.1$ cvs-fast-export/cvssync anonymous@cvs.savannah.gnu.org:/sources/groff groff
    The authenticity of host 'cvs.savannah.gnu.org (140.186.70.72)' can't be established.
    RSA key fingerprint is 80:5a:b0:0c:ec:93:66:29:49:7e:04:2b:fd:ba:2c:d5.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added 'cvs.savannah.gnu.org,140.186.70.72' (RSA) to the list of known hosts.
    Permission denied (publickey).
    rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
    rsync error: unexplained error (code 255) at io.c(600) [receiver=3.0.6]
    cvssync: child returned 255.

    I think maybe you have some kind of permission that I don’t, or there’s some kind of proxy messing things up on campus. If it’s not too big, can you just upload a CVS tree of groff somewhere I could download?

  35. @ esr

    (I accidentally added this comment to the previous blog post initially.)

    You micro-tweaked the small function in step 1 and the results were much better than one would expect. Why? It suggests that there are changes you could make to the broader code in steps one and three that would help a lot.

    I would almost be tempted to start removing the micro-tweaks to find out which tweaks are helping out the compiler a little bit and which are improving the big problem.

  36. John Bell and I are trying SUNABH: where ESR has a hardware reference, he acquires a copy using “$”.

    See the last few comments in the previous post.

    Sometimes You Need A Bigger Hammer

  37. Re. iostat, I’d be suspicious if %iowait goes up or the number of reads or writes per second goes down, in your multi-threaded vs. single threaded runs. But whether iostat is really adequate for this, I’m not sure.

    If you want a baseline of your machine’s disk and filesytem performance, I’d try iozone. It both runs its benchmarks and generates plots from the data. Unfortunately, that isn’t directly useful for deciding if/when you’re thrashing the disk, but it might at least clarify what numbers from iostat would indicate “good” performance.
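
    Concretely, something like this in another terminal while the benchmark runs (assuming the sysstat iostat; the trailing 1 means one-second samples):

    # extended per-device stats, megabytes, one-second intervals
    iostat -dxm 1

    If %util pins near 100 and await climbs in the threaded run while reads per second fall, that would be the thrashing signature.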

  38. Eric, are you familiar with the concept of “false sharing”? AFAIK there are no tools to find this, but it has to do with cache contention between threads. Herb Sutter wrote a column on it a few years back (I picked the page that has the core utilization display): http://www.drdobbs.com/parallel/eliminate-false-sharing/217500206?pgno=2

    Also, I haven’t used CVS since college (now over 15 years ago). Do you have (or know of) small test repositories I could use as cases for testing that main merge algorithm?

  39. > I’ve pushed port changes that should be equivalent to the repo. With any luck you won’t have to hand-hack anything on your next go-round.
    Yep, those worked, although the same tests failed as last time. If you think it will help I can tar up the tests/ part of my tree so you can look at what’s going wrong.

    Here is another patch that allows the test suite to run cvs-fast-export from the build directory without messing with PATH:
    http://pastebin.com/dTkMT6RF
    I think this is also a good idea because it ensures that you’re testing the cvs-fast-export you’ve most recently built, even if you haven’t done “make install.” Especially since tests/Makefile seems to use that binary anyway – this patch eliminates the possibility of getting your signals crossed and testing two different versions at once.

    This version was measurably faster, and actually seemed to scale in the right direction:
    http://pastebin.com/d4qLmkzH
    Revision tested was fd453639b0c89b360327764d3d126c099fcc5187.

    1. >This version was measurably faster, and actually seemed to scale in the right direction:

      Yeah, I think that’s due to three recent changes:

      (1) Frank Ch. Eigler found that if a program ever runs threads, the glibc version of stdio enables some internal locking that slows it down a lot – even if no threads are still active at the time of later operations. He changed the code to use glibc extended calls that bypass the locking.

      (2) I significantly reduced the amount of computing in the export stage by changing where export filenames are computed – early, and only once per master file, rather than late and once per output fileop.

      (3) I optimized the stage 3 I/O by pumping out blobfile contents with fread/fwrite rather than a naive fgetc/putchar loop.
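
      The shape of that change, roughly (a sketch rather than the exact code; the buffer size is arbitrary):

      #include <stdio.h>

      /* Copy a blobfile to the export stream in big chunks instead of one
         character at a time. */
      static void pump_blob(FILE *in, FILE *out)
      {
          char buf[65536];
          size_t n;

          while ((n = fread(buf, 1, sizeof buf, in)) > 0)
              fwrite(buf, 1, n, out);
      }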

      The threaded version is still slower but in my latest profiling runs with groff it cranked 79 commits per second threaded vs 85 cps unthreaded – no longer the factor of two to four difference I had been seeing. That’s much faster than previous versions, where I was getting about 42cps unthreaded and 19cps threaded.

      Here’s how I analyze the difference in those numbers:

      * Frank’s fix getting glibc stdio’s thread locking out of the way removed the largest drag on threaded performance relative to unthreaded.

      * My optimizations sped up stage 3 in both versions. By a surprising amount, actually.

      Obviously we’re still seeing some cache-miss effects and probably a bit of disk contention as well, but the gap between nonthreaded and threaded versions has closed a lot. I think this actually makes the prospect of testing on various beefy multicore machines more interesting, because it now seems more likely that they’ll yield a significant speedup.

  40. I figured out how to clone groff:
    rsync -az rsync://cvs.savannah.gnu.org:873/sources/groff/groff/ groff
    See here: https://savannah.gnu.org/forum/forum.php?forum_id=4142
    For whatever reason, the rsync command generated by cvssync wouldn’t work.

    Here are the numbers for groff:
    http://pastebin.com/q1M7aYTW
    Notice how much faster it was the second time it ran? That’s very telling; it suggests that disk I/O is a major limiting factor. More RAM and an SSD or RAID setup would be the way to fix this.

    I’m not very happy with this test, because the threaded version will have to work well even if the CVS tree is not cached, and I never actually tested it under those conditions. I tried deleting my groff tree and re-cloning it, but was not ever able to replicate that 26-second run without threads, so I assume the data is still cached somehow. All the methods I can find to empty the FS cache require sudo, so I don’t think I actually can test it properly– you are in a better position to do so than I am.

    1. >Max E.’s numbers look good to me; how much faster do y’all expect it to go?

      There’s one more optimization I’m working on. I’ve figured out how to name blobfiles so a shared serial counter isn’t required. I’m debugging that now. (There’s some pesky interaction with export naming.)

      When I get this working, it may have no effect. Or…it could significantly reduce cache line contention in stage 1. (Props to Jakub Narebski for nudging me to think about this.)

      I will call it victory if I can pull the elapsed time for threaded stage 1 below nonthreaded. (As Max E. says, the remaining stage 3 slowdown is probably best addressed with SSD or RAID.) This optimization might do that. Cross your fingers.

      As a matter of potential interest, there are only three classes of mutex interaction between stage 1 threads:

      (1) Dispatcher stuff – two mutexes guarding the input filename and output revision-list queues. There are only two share points per master involving these, one at the beginning of processing and one at the end.

      (2) Blob id generation. One share point when the shared counter is bumped. Gets called a lot, once per delta. This is the one I’m trying to eliminate now.

      (3) Tag registration. A small burst of share points while the symbols list in the master header is being parsed.

      Once I get rid of the shared blob counter the stage 1 threads will run free for most of their lifetimes.

  41. > (3) I optimized the stage 3 I/O by pumping out blobfile contents with fread/fwrite rather than a naive fgetc/putchar loop.

    If you’re going to use stdio, you might want to use unlocked primitives if available (where fread/fwrite is an optimization, it’s likely because it uses an unlocked fgetc/putc loop)

    1. >I had a stab at mostly eliminating the revlist mutex from stage 1: […] However, if anything that made performance a little worse.

      I would be extremely surprised if the revlist mutex were a problem. It’s raised only once at the end of processing of each master and its critical region is small.

      >I don’t think locking per se is a problem in stage 1.

      Maybe not. I’d say the revlist and enqueue mutexes almost certainly aren’t a problem, the seqno mutex has the highest odds of being a problem, and the tag table mutex is somewhere in between.

  42. Eric, by the way you can apply the glibc _unlocked optimization to multi-threaded stage-1 code too, but only for those FILE*’s that are not shared between threads.

    1. >Eric, by the way you can apply the glibc _unlocked optimization to multi-threaded stage-1 code too, but only for those FILE*’s that are not shared between threads.

      Yup, trying that now – and have tripped over something fishy in the flex-generated scanner. There might be a substantial performance win here if I can figure out why removing the YY_INPUT definition breaks parsing.

  43. > Yup, trying that now – and have tripped over something fishy in the flex-generated scanner. There might be a substantial performance win here if I can figure out why removing the YY_INPUT definition breaks parsing.
    I had a look at that earlier, I think it has something to do with EOF handling.

    1. >I had a look at that earlier, I think it has something to do with EOF handling.

      Yeah, I think so. When I delete the handcrafted YY_INPUT macro, strace shows the program looping reads forever after it hits the first EOF.

      The peculiar thing here is that the default YY_INPUT seems not to do the right thing on EOF. I’m wondering if this is some bad interaction with the re-entrancy options.

  44. What about abandoning stdio entirely? If you’re sharing a file descriptor between threads, Low-level IO has e.g. an atomic “read from this offset in the file” function that stdio lacks.

    In the course of trying to become familiar with how your program uses I/O, to figure out if this or other questions I might think of make any sense, I noticed a typo:
    #define write fwrite_unlocked
    Should be fwrite.
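
    (The atomic positioned read I have in mind is pread(2). A sketch, with invented names, of how a thread could pull a span out of a descriptor shared between threads without any seek-then-read locking dance:)

    #include <sys/types.h>
    #include <unistd.h>

    /* pread() never moves the shared file offset, so concurrent readers
       need no lock around it. */
    ssize_t read_span(int fd, void *buf, size_t len, off_t off)
    {
        return pread(fd, buf, len, off);
    }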

  45. @Max:
    >I tried deleting my groff tree and re-cloning it, but was not ever able to replicate that 26-second run without threads, so I assume the data is still cached somehow. All the methods I can find to empty the FS cache require sudo, so I don’t think I actually can test it properly– you are in a better position to do so than I am.

    I own a 4-core machine with 8 MB cache and 32 GB RAM. I’m not familiar enough with cvs-fast-export or its problem domain to jump in and start testing on my own, but if you could provide instructions for any tests you’d like to try that require root I could have a go at them.

  46. @Jon Brase
    This is the method I found:
    https://wiki.openoffice.org/wiki/Cold-start-simulator
    Specifically the /proc/sys/vm method.
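
    For the record, the incantation that method boils down to (needs root, which is exactly the problem):

    sync
    sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'   # 1 = pagecache, 2 = dentries/inodes, 3 = both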

    The cvs-fast-export compile process is pretty conventional. Just make and then make install. Be sure you have Bison 3.0.2, if you don’t you’ll need to compile your own copy or find it packaged somewhere.

    ESR provided the benchmarking script in this comment:
    http://esr.ibiblio.org/?p=6373&cpage=1#comment-1184637

    Clone the groff repository like so:
    http://esr.ibiblio.org/?p=6373&cpage=1#comment-1187166

    Then do this:
    cd groff
    time ../benchmark.sh -t $numthreads

    What I’m curious about is, does clearing the filesystem cache prior to running the benchmark affect how long the program takes to run? Are multiple threads with an empty cache still faster than a single thread with an empty cache, or does emptying the cache negate the benefit of threading altogether?

  47. @esr you need to audit your use of other stdio functions. You’ve got an fprintf (there doesn’t seem to be an unlocked version, but that doesn’t mean it’s not acquiring the file lock), and fputc (defining putchar to putchar_unlocked isn’t going to do you any good; you need putc_unlocked) and fclose both have unlocked versions.

    It might be worthwhile to benchmark removing stdio. I’m not 100% sure I have the correct idea of which file I/O operations are performance-significant, though.

    1. >you need to audit your use of other stdio functions

      Done it. The main thing is that the bit-pump for writing out blobs needs to be fast. A bit of slowdown on fprintfs isn’t a terrible tragedy.

  48. > Done it. The main thing is that the bit-pump for writing out blobs needs to be fast. A bit of slowdown on fprintfs isn’t a terrible tragedy.

    That doesn’t make sense; fwrite only acquires the lock once – if it’s causing a performance hit, so should anything else in the same loop. The “bit-pump” itself is in _IO_sputn [technically _IO_default_xsputn – glibc’s stdio is annoyingly C++-ish, it’s a macro hiding a virtual function call], which doesn’t care about locks and is called the same way by both.

    1. >That doesn’t make sense; fwrite only acquires the lock once – if it’s causing a performance hit, so should anything else in the same loop

      Right, but there’s a large difference in relative frequency that matters. Most of the I/O is going to be for blobs; only a relatively small amount (and only on the write side) will use fprintf(). I’ll try changing to sprintf/fputs_unlocked (it’s #3 on my list of optimizations right now) but I don’t expect to see a lot of speedup.

  49. @Max:
    >What I’m curious about is, does clearing the filesystem cache prior to running the benchmark effect how the program takes to run?

    What would be excellent here is a kernel feature allowing a program to be run with limitations on its access to filesystem caches, both for debugging purposes like this, and for things like preventing backup jobs from pushing files that are actually being used (rather than just backed up) out of cache. For example, if a desktop machine sits unused for eight hours a day, during which a whole-system backup runs, and the user has a set of files that are accessed once every three days on average, the cache will usually contain whatever files are on the tail end of the backup, rather than the user’s files, when those files are accessed. Meanwhile, the backup won’t be sped up much, as it touches every file on the system once, and the used size of the filesystem is likely larger than RAM. Thus it would be nice for a shell to be able to set a cache priority for its child processes, or disallow them from accessing filesystem caches altogether, so that a programmer can test the I/O performance of a program, or so that a backup job can just push data straight to disk without caching useless data.

    That thought is actually inspired by operations on my own desktop: I’m running a daily incremental backup to an external drive, and while I’m not noticing a huge performance hit as a result, I’m fairly certain that the backup is completely wiping my caches of useful data and replacing it with junk from the backup (plus, even if the entire pagecache is filled with useful things even after a backup, every inode and dentry on the system ends up cached in RAM, to the tune of 6-10 GB tied up on a 32 gig system, a fairly small fraction of which probably consists of files actually accessed frequently for anything but backup).

  50. Surely someone has suggested running this under cachegrind, haven’t they? Compare different runs under 1 vs. N threads.

    1. >Surely someone has suggested running this under cachegrind, haven’t they? Compare different runs under 1 vs. N threads.

      Current task list:

      1. Figure out how to get rid of the YY_INPUT declaration that’s forcing character-by-character input in the scanner and still have EOF passed back properly. This would (a) increase input speed during parsing, (b) allow me to set some flex switches that would optimize the automaton generation for speed.

      2. Finish getting rid of the shared blob-ID counter.

      3. Change output fprintf calls to sprintf/fputs_unlocked. Alternatively, find some magic to disable the thread lock checks in glibc stdio so I can tell it not to do that in stages 2 and 3.

      4. Run the whole thing under cachegrind.
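
      For item 4 the invocations will presumably look something like this, once per thread count, with the miss counts compared afterward (masters.list stands in for whatever file list gets fed in; cachegrind.out.<pid> is the tool’s default output name):

      valgrind --tool=cachegrind cvs-fast-export -p <masters.list >/dev/null
      valgrind --tool=cachegrind cvs-fast-export -p -t 2 <masters.list >/dev/null
      cg_annotate cachegrind.out.<pid>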

  51. @Jon Brase – the manpage for open makes the following sweeping assertion about the O_DIRECT flag:

    Try to minimize cache effects of the I/O to and from this file. In general this will degrade performance, but it is useful in special situations, such as when applications do their own caching.

    That sounds to me like the data never touches the cache – if it were just write-through, then it wouldn’t have any effect on I/O “from” a file. Might be worth a shot. Of course, that’s per-file-open rather than per-application, so it won’t help here without some kind of LD_PRELOAD rig, but might be worthwhile for a backup application, which should also be passing O_NOATIME anyway.
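
    A sketch of what that open would look like (the file name is a placeholder; O_DIRECT requires the buffer, offset and length to be aligned, typically to 512 bytes or the filesystem block size, and O_NOATIME needs file ownership or CAP_FOWNER):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("somefile", O_RDONLY | O_DIRECT | O_NOATIME);
        void *buf;

        if (fd < 0 || posix_memalign(&buf, 4096, 1 << 20) != 0)
            return 1;
        /* data goes straight to our buffer, never through the page cache */
        ssize_t n = read(fd, buf, 1 << 20);
        (void)n;
        free(buf);
        close(fd);
        return 0;
    }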

  52. Processing groff gets me:
    4.3-4.5 seconds with empty caches for 3-5 threads.
    Slower times with more or fewer threads.

    3.0-3.1 seconds with 4 threads with data in cache
    Slower times with more or fewer threads, in particular:
    3.1 seconds and change with 3 threads
    3.3 seconds with 2 threads
    3.6-3.7 seconds with 5 threads

    My processor is a Xeon E3 1220v3 (Haswell, 4 core, 8 MB L3), the machine has 32 GB RAM and / and /home on separate 2TB magnetic disks.

  53. I would urge you to consider pipelining again.

    You say you have 3 stages now, which sounds like it can be done with 3 threads. Have the stage 2 and 3 threads idle until they get work in. You can probably break the non-magic stages into substages too, possibly with more threads.

    In projects not even remotely similar to yours, I’ve processed hundreds of gigabytes of text data (billions of lines), and I had excellent results by breaking things into subtasks and running them opportunistically. In some cases, I’ve turned months into days.

    A good pipeline lets each task maintain cache and/or memory locality for the duration of a chunk.

    1. >You say you have 3 stages now, which sounds like it can be done with 3 threads.

      No. Stage 2 involves a data rendezvous in the branch merge algorithm. Stage 3 is pumping out a stream representation in a way that’s intrinsically serial.

  54. One simple question, why don’t we emit blobs immediately rather than persisting them?
    On my laptop with a single thread and groff, persisting and emitting blobs accounts for 40% of wall time.

    1. >One simple question, why don’t we emit blobs immediately rather than persisting them?

      The original reason was that, for test purposes, I wanted the tool to emit blobs and commits in the exact canonical order used by git fast-export. This required that blobs be emitted as late as possible, that is just before the first commit operation that references them in an M fileop.

      I have recently contemplated dropping that constraint, because it would indeed be faster to write out each snapshot only once when running serially. It wouldn’t play well with parallelizing stage 1, though. At the moment the threads can do snapshot output from different masters to temp files independently of each other; they couldn’t do that if they were all dumping to stdout, instead it would have to be locked as each individual blob were emitted.

      So, the unanswered question is this: is the second blob copy costing more than the additional time in parsing would if N-1 parses had to stall during blob output? It might seem obvious that the answer is ‘yes’ until you consider that not only computation would be stalled, but also continuing input from the other master files.

      At some point I’ll probably modify the code and run an experiment.

    1. >@esr – could the output operations be sensibly queued to a dedicated writer thread?

      Possibly, but what would the actual point of that be? At the moment each first-stage thread becomes a dedicated writer exactly when it needs to be – that is, when the correct points in its computation have been reached.

  55. Nb. I have heard good things about ThreadSpotter software (now part of the TotalView debugger from Rogue Wave)… unfortunately it is a proprietary tool, though there is an evaluation version.

  56. > Possibly, but what would the actual point of that be? At the moment each first-stage thread becomes a dedicated writer exactly when it needs to be – that is, when the correct points in its computation have been reached.

    Doing all the writing at the end means that the program spends some time not writing.

    Doing the writing immediately within the worker threads means that those threads spend some time not calculating, and the writing at the end is done while nothing is doing any calculating, and that you can run into a case where all the other threads are waiting for the first one to finish writing before they can write or do any more calculating.

    1. >Doing all the writing at the end means that the program spends some time not writing.

      But it also means all the per-delta snapshots would have to be stored in memory until late in processing. That would get really expensive really fast. You’d, in effect, be keeping the entire history of resolved snapshots in your swap area and calling on your VM layer to page it in.

      Wow. I’ll have to tweak the code to try this at some point. It’d be a helluva stress test for the VM layer. And if the repo history ever got larger than your swap area (a distinct possibility) you’d be screwed.

  57. > But it also means all the per-delta snapshots would have to be stored in memory until late in processing.

    I’m confused, I thought we were talking about what you were doing now. You know, instead of @David Michael Barr’s suggestion regarding “One simple question, why don’t we emit blobs immediately rather than persisting them? On my laptop with a single thread and groff, persisting and emitting blobs accounts for 40% of wall time.” Wait, are you saving them to files, then reading them back for copying into the output stream?

    > And if the repo history ever got larger than your swap area (a distinct possibility) you’d be screwed.

    What if it were backed by a file instead of swap? Of course, then it’s probably not functionally different from using files and benefiting from the cache.

    1. >Wait, are you saving them to files, then reading them back for copying into the output stream?

      Yes. Originally this was to be able to emit in canonical order; now it’s because it allows the worker threads to stash their product without having to serialize.

  58. Also, when I said “Doing all the writing at the end means that the program spends some time not writing” I meant that as a negative (vs the third option of having a dedicated thread that is writing all the time what the other threads feed it).

  59. The split difference I’ve used in the past is to have a single dedicated writer/manager. When a worker finishes, it hands off the completed memory block to the writer, then keeps working.

    Dumb question, have you tried /not writing/ the output (just commenting out the outbound I/O) to see if the processing brings the threaded solution to be the kind of run time you’d expect from the multiple threads?

    1. >Dumb question, have you tried /not writing/ the output (just commenting out the outbound I/O) to see if the processing brings the threaded solution to be the kind of run time you’d expect from the multiple threads?

      Going on my list of things to try.

    2. >The split difference I’ve used in the past is to have a single dedicated writer/manager. When a worker finishes, it hands off the completed memory block to the writer, then keeps writing.

      That’s an interesting idea, performance-wise. No worker thread would ever stall on output – but the block queue could get huge as the writer thread fell behind. OOM killer invoked in 3…2…1…

  60. > That’s an interesting idea, performance-wise. No worker thread would ever stall on output – but the block queue could get huge as the writer thread fell behind. OOM killer invoked in 3…2…1…

    Well, you could have a queue-depth limit and block until it was “small enough”.

    At some point you’d end up stabilizing at “as fast as it can be written out”, which would seem to be a hard-to-avoid choke point.

    (And a good reason to buy an SSD if this was for something you did all the time, rather than what it actually IS.)

    1. >Well, you could have a queue-depth limit and block until it was “small enough”.

      Yes, that’s true. But now I’m wondering…if I’m willing to hold all the metadata in memory at once, maybe there’s a way to avoid composing the deltas until they’re needed? Hmmm…

  61. You guys aren’t dumb enough to ask dumb enough questions. Would underclocking the CPU affect the results, or could it be an Intel frontside bus bottleneck problem? I’m using cpufreqset since I prefer my laptop to be slightly underclocked.

  62. I’m fashionably late to the party, as usual…

    ESR – if your code doesn’t have the iron to back it up, I suspect your desktop piddly-pants dual-core processor is likely to always perform like a slack-jawed yokel at a chili dog eating contest.

    How far do you reckon you are from a solution? Worth my time taking a dive into the code?

    1. >How far do you reckon you are from a solution? Worth my time taking a dive into the code?

      I think the biggest mystery has been solved. The inexplicable slowdown in stage 3 turned out to be due to a glibc feature; it tries to make stdio thread-safe when it thinks threads might be in use, and this imposes overhead even after all your worker threads are gone. Now that that has been worked around, the numbers look like there are pretty normal cache- and disk-contention issues that can be alleviated by throwing a non-yokelish PC at the problem.

      Summary of the state of play will issue as another blog post shortly.

  63. Eric: anything worth doing is worth doing with fake numbers, so let’s do a toy example. Suppose that you have a bunch of items to process, and each one takes 9 seconds of CPU time followed by 1 second of blocking on I/O to write it out. You have four CPU cores. How to arrange this?

    Option number 0: single-threaded. Do each item in turn. Each one takes 10 seconds. Easy, but maybe you’d like to use those extra cores.

    Option 1: Create a pool of worker threads, roughly one for each core, and have each of them process any work available. They’ll spend 90% of their time on CPU, but 10% of the time, each of your cores will be sitting idle, blocking on I/O. Each item takes 10 seconds, but you have four cores, so that’s a throughput of 2.5 seconds per item. Yay for a 4x speedup, but we can do better….

    Option 2: More threads! Say you make 8 threads — twice your number of cores. It’s not unreasonable to think that the scheduler will have some CPU-heavy thread ready to run when another is blocking on I/O. This is a plausible option! However, if those worker threads are memory-heavy, then you may not want to have so many of them. (Throughput: ideally, since the I/O and CPU parts will be running at the same time and you’re using all four cores, that means 9 seconds per item, across four cores, so 2.25 seconds per item.)

    Option 3, which MP suggested: make one CPU-heavy worker thread per core, plus one extra thread to handle writing their results. Stick a boring, off-the-shelf, bounded-size queue between them, with the size of the queue set to some small integer — say, 3. This is essentially what Python’s queue module gives you, and there are ways of doing the same in C. The CPU worker threads would be running essentially all the time, with the writer thread coming alive once in a while to handle some I/O, but spending most of its time blocked. If the worker threads outpace the writer thread, then they block so they don’t fill up the buffer. And the only point of synchronization between these is the work queue, which you can and should get from some library of multithreading primitives that somebody else wrote, because this sort of thing is a pain in the ass to write yourself. (Throughput: again, 2.25 seconds per item with our toy numbers and simplified accounting. This just happens to use less memory, and context-switch less.)
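
    In C, the bounded queue in option 3 is only a few dozen lines with pthreads – a sketch (fixed capacity, pointer payloads, invented names; initialize the mutex and condition variables with the usual PTHREAD_*_INITIALIZER macros or the *_init calls):

    #include <pthread.h>

    #define QCAP 3                       /* small bound so workers block early */

    struct bqueue {
        void *slots[QCAP];
        int head, tail, count;
        pthread_mutex_t lock;
        pthread_cond_t not_full, not_empty;
    };

    /* Workers call this; it blocks when the writer thread falls behind. */
    void bq_put(struct bqueue *q, void *item)
    {
        pthread_mutex_lock(&q->lock);
        while (q->count == QCAP)
            pthread_cond_wait(&q->not_full, &q->lock);
        q->slots[q->tail] = item;
        q->tail = (q->tail + 1) % QCAP;
        q->count++;
        pthread_cond_signal(&q->not_empty);
        pthread_mutex_unlock(&q->lock);
    }

    /* The single writer thread calls this; it blocks when there is nothing to write. */
    void *bq_get(struct bqueue *q)
    {
        pthread_mutex_lock(&q->lock);
        while (q->count == 0)
            pthread_cond_wait(&q->not_empty, &q->lock);
        void *item = q->slots[q->head];
        q->head = (q->head + 1) % QCAP;
        q->count--;
        pthread_cond_signal(&q->not_full);
        pthread_mutex_unlock(&q->lock);
        return item;
    }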

    As a side-note: I’ll add to the chorus of people telling you to check out higher-level libraries for multithreading. Sometimes they’re so good that using them isn’t even horrible!

  64. @esr: If you are looking at either major upgrades or system replacement, I just got through computer-shopping and might have both some general and specific recommendations.

    1. >@esr: If you are looking at either major upgrades or system replacement, I just got through computer-shopping and might have both some general and specific recommendations.

      Yeah, I plan to post asking for suggestions shortly.

  65. > That’s an interesting idea, performance-wise. No worker thread would ever stall on output – but the block queue could get huge as the writer thread fell behind. OOM killer invoked in 3…2…1…

    Use a pipe as the queue. Guaranteed atomic as long as the structure you pass in is smaller than 512 bytes, and if the writer falls behind, the pipe will fill up and workers will start to block.
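
    A sketch of the idea (the struct is invented; POSIX guarantees atomicity for writes up to PIPE_BUF, which is at least 512 bytes and 4096 on Linux; error handling elided):

    #include <sys/types.h>
    #include <unistd.h>

    struct work_done {                 /* must stay smaller than PIPE_BUF */
        int    worker_id;
        off_t  blob_offset;
        size_t blob_len;
    };

    /* Worker side: one atomic write per finished block; blocks if the pipe is full. */
    int post_result(int pipe_wr, const struct work_done *w)
    {
        return write(pipe_wr, w, sizeof *w) == (ssize_t)sizeof *w;
    }

    /* Writer side: reads whole records back out in arrival order. */
    int fetch_result(int pipe_rd, struct work_done *w)
    {
        return read(pipe_rd, w, sizeof *w) == (ssize_t)sizeof *w;
    }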

  66. How about this for an idea:

    – spend a subset of the cash on a nice, new laptop (perhaps a new quad-core ThinkPad?)
    – put the rest in ‘ESRs cloud computing account’

    Then, when the need arises to process a monster repo (or any other power-hungry task), fire up a cloud image, run the conversion, and shut down the image again.

    Sure, there’d be an ongoing as opposed to up-front cost. But evidence suggests it might be met adequately by donations, and you wouldn’t have to worry about investing heavily in hardware that would become obsolete. As time progresses, your cloud computing dollar will buy more and more power.

    I have a working knowledge of EC2, and would be happy to assist in getting you set up with everything you need, as well as perhaps some utility scripts. As one of my colleagues remarked upon reading this thread, “converting a giant repo in the cloud” is something that might be amenable to hands-off automation. A script could fire up an instance, install / build the required software, perform the conversion, store the data somewhere else (perhaps in Amazon S3 cloud storage?), and cleanly shut down.

    1. >Then, when the need arises to process a monster repo (or any other power-hungry task), fire up a cloud image, run the conversion, and shut down the image again.

      One of the things we now know is that an EC2 instance has terrible throughput for this job load.

  67. Actually, the resultant tooling could potentially be quite useful to folks other than just ESR.

    It’d essentially function as a wrapper around reposurgeon, and could be used to spawn instances locally for development (on VirtualBox, through Vagrant or Corona or whatever) or remotely for ‘production’ (on EC2 or whatever).

    Fear the cloudsturgeon :)
