A low-performance mystery

OK, I’ll admit it. I’m stumped by a software-engineering problem.

This is not a thing that happens often, but I’m in waters relatively unknown to me. I’ve been assiduously avoiding multi-threaded programming for a long time, because solving deadlock, starvation, and insidious data-corruption-by-concurrency problems isn’t really my idea of fun. Other than one minor brush with it handling PPS signals in GPSD, I’ve managed before this to avoid any thread-entanglement at all.

But I’m still trying to make cvs-fast-export run faster. About a week ago an Aussie hacker named David Leonard landed a brilliant patch series in my mailbox. Familiar story: has a huge, gnarly CVS repo that needs converting, got tired of watching it grind for days, went in to speed it up, found a way. In fact he applied a technique I’d never heard of (Bloom filtering) to flatten the worst hot spot in the code, an O(n**3) pass used to compute parent/child links in the export code. But it still needed to be faster.
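
For those who, like me, hadn’t met the technique before: a Bloom filter is just a small bit array plus a couple of hash functions. A “no” answer to a membership test is definitive, while a “yes” may be a false positive, so it gives you a very cheap way to skip most of the expensive pairwise comparisons in a pass like that one. Here’s a minimal sketch of the idea (illustrative only, not David’s actual patch; the keys are plain integers standing in for whatever identifies a revision):

    #include <stdint.h>
    #include <string.h>
    #include <stdbool.h>

    /* Minimal Bloom filter.  False positives are possible, false
     * negatives are not, so a "no" lets the caller skip real work. */
    #define BLOOM_BITS 4096

    struct bloom {
        uint8_t bits[BLOOM_BITS / 8];
    };

    static uint32_t hash1(unsigned long key) { return (uint32_t)(key * 2654435761u) % BLOOM_BITS; }
    static uint32_t hash2(unsigned long key) { return (uint32_t)(key * 40503u + 2166136261u) % BLOOM_BITS; }

    static void bloom_init(struct bloom *b)
    {
        memset(b->bits, 0, sizeof(b->bits));
    }

    static void bloom_add(struct bloom *b, unsigned long key)
    {
        uint32_t h1 = hash1(key), h2 = hash2(key);
        b->bits[h1 >> 3] |= 1 << (h1 & 7);
        b->bits[h2 >> 3] |= 1 << (h2 & 7);
    }

    static bool bloom_maybe_has(const struct bloom *b, unsigned long key)
    {
        uint32_t h1 = hash1(key), h2 = hash2(key);
        return (b->bits[h1 >> 3] & (1 << (h1 & 7)))
            && (b->bits[h2 >> 3] & (1 << (h2 & 7)));
    }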

After some discussion we decided to tackle parallelizing the code in the first stage of analysis. This works – separately – on each of the input CVS masters, digesting them into in-core revision lists and generating whole-file snapshots for each CVS delta; later these will become the blobs in the fast-export stream. Then there’s a second stage that merges these per-file revision lists, and a third stage that exports the merged result.

Here’s more detail, because you’ll need it to understand the rest. Each CVS master consists of a sequence of deltas (sequences of add-line and delete-line operations) summing up to a sequence of whole-file states (snapshots – eventually these will become blobs in the translated fast-import-stream). Each delta has an author, a revision date, and a revision number (like 1.3 or 1.2.1.1). Implicitly they form a tree. At the top of the file is a tag table mapping names to revision numbers, and some other relatively unimportant metadata.

The goal of stage 1 is to digest each CVS master into an in-core tree of metadata and a sequence of whole-file snapshots, with unique IDs in the tree indexing the snapshots. The entire collection of masters is made into a linked list of these trees; this is passed to stage 2, where black magic that nobody understands happens.
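
To make that concrete, here is roughly the shape of what stage 1 hands to stage 2. The field and type names here are illustrative, not the ones in the actual source:

    #include <time.h>

    /* Illustrative sketch of the stage-1 output, not the actual
     * cvs-fast-export structures. */
    struct delta {
        char          *author;       /* who committed this revision */
        time_t         date;         /* when */
        char          *number;       /* revision number, e.g. "1.2.1.1" */
        unsigned long  snapshot_id;  /* unique ID indexing the whole-file snapshot */
        struct delta  *parent;       /* tree links implied by the revision numbers */
        struct delta  *children;
        struct delta  *sibling;
    };

    struct master {
        char          *filename;     /* path of the CVS master */
        struct delta  *root;         /* root of the revision tree */
        struct master *next;         /* stage 1 hands stage 2 a linked list of these */
    };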

This first stage seems like a good target for parallelization because the analysis of each master consists of lumps of I/O separated by irregular stretches of compute-intensive data-shuffling in core. In theory, if the program were properly parallelized, it would seldom actually block on an I/O operation; instead while any one thread was waiting on I/O, the data shuffling for other masters would continue. The program would get faster – possibly much faster, depending on the time distribution of I/O demand.

Well, that’s the theory, anyway. Here’s what actually happened…

First, I had to make the parser for the master file format re-entrant. I did that, and then documented the method.

Then I had to examine every single other bit of global state and figure out if it needed to be encapsulated into a local-context data structure for each first-stage parse or could remain shared. Doing this made me happy; I hate globals, they make a program’s dataflow harder to grok. Re-entrant structures are prettier.
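
The refactor boiled down to something like this (schematic, with made-up names): every formerly-global piece of parse state moves into a context structure that each parse owns, and every parser function grows a context argument.

    #include <stdio.h>

    struct delta;                    /* per-revision metadata node */

    /* Before: file-scope globals shared by every parse.
     *
     *   static FILE *infile;
     *   static int   lineno;
     *   static struct delta *head;
     *
     * After: one context per first-stage parse, threaded through
     * every call.  (Schematic; the real code's names differ.) */
    struct parse_context {
        FILE         *infile;
        int           lineno;
        struct delta *head;
    };

    int parse_master(struct parse_context *ctx, const char *path)
    {
        ctx->infile = fopen(path, "r");
        if (ctx->infile == NULL)
            return -1;
        ctx->lineno = 0;
        /* ... every former global reference becomes ctx->field ... */
        fclose(ctx->infile);
        return 0;
    }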

Once I thought I had the data structures properly refactored, I had to introduce actual threading. OK, time to actually learn the pthreads API. Did that. Wrote a multi-threading tasklet scheduler; I’d never done one before and it took me about four tries to get it right, but I knew I was going in a good direction because the design kept getting simpler. The current version is less than 70 LOC; that’s for the dispatcher loop and the worker-subthread code both. I am proud of this code – it’s pretty, it works, and it’s tight.

While this was going on, my Aussie friend was writing a huge complex map/reduce-based scheduler to do the same job. He realized he’d succumbed to overengineering and gave it up just about the time I got mine working.

Not all was wine and roses. Of course I missed some shared state the first time through; multithreaded operation revealed this by sporadically crashing. I misunderstood the documentation for pthreads condition variables and had a few headaches in consequence until David clued me in. But these were solvable problems.

I got the code to the point where I could choose between straight sequential and N-threaded operation by flipping a command-line switch, and it was solid either way. Felt a warm glow of accomplishment. Then I profiled it – and got a deeply unpleasant shock. The threaded version was slower. Seriously slower. Like, less than half the throughput of naive one-master-at-a-time sequential parsing.

I know what the book says to suspect in this situation – mutex contention overhead. But I knew from the beginning this was unlikely. To explain why, I have to be more specific about what the contention points are.

The scheduler manages an array of worker threads. There’s one slot mutex for each worker thread slot, asserted when it’s active (that is, there’s a live thread processing a master in it). There’s one wakeup mutex associated with a condition variable that each worker thread uses to wake up the manager loop when it’s done, so another master-parsing thread can be scheduled into the vacant slot. And there’s another output mutex to make updates to the list of parsed masters atomic.

These mutexes, the ones for the scheduler itself, don’t change state very often. The history of a CVS master parse causes only these events: slot mutex up, output mutex up, output mutex down, slot mutex down, wakeup mutex up, signal, wakeup mutex down. The critical regions are small. This just isn’t enough traffic to generate noticeable overhead.
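
In outline, the dance looks something like this. It’s a simplified sketch of the shape, not the actual dispatcher: the per-slot mutexes are collapsed into a free-slot count, and parse_one_master() and append_to_master_list() are stand-ins for the real code.

    #include <pthread.h>

    struct master;                                        /* per-file result of a parse */
    extern struct master *parse_one_master(void *arg);    /* stand-in: the expensive work */
    extern void append_to_master_list(struct master *m);  /* stand-in: list update */

    static pthread_mutex_t output_mutex = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t wakeup_mutex = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  wakeup_cond  = PTHREAD_COND_INITIALIZER;
    static int             slots_free;

    static void *worker(void *arg)
    {
        struct master *m = parse_one_master(arg);     /* all the real work, unlocked */

        pthread_mutex_lock(&output_mutex);            /* output mutex up */
        append_to_master_list(m);                     /* tiny critical region */
        pthread_mutex_unlock(&output_mutex);          /* output mutex down */

        pthread_mutex_lock(&wakeup_mutex);            /* wakeup mutex up */
        slots_free++;
        pthread_cond_signal(&wakeup_cond);            /* wake the manager loop */
        pthread_mutex_unlock(&wakeup_mutex);          /* wakeup mutex down */
        return NULL;
    }

    /* The manager loop sleeps here until a slot opens, then dispatches
     * the next master into it. */
    static void wait_for_slot(void)
    {
        pthread_mutex_lock(&wakeup_mutex);
        while (slots_free == 0)
            pthread_cond_wait(&wakeup_cond, &wakeup_mutex);
        slots_free--;
        pthread_mutex_unlock(&wakeup_mutex);
    }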

There are just two real mutexes that handle contention among the masters. One guards a counter so that the code can issue sequential blob IDs common to all threads; that one gets called every time a delta gets turned into a revision blob. The other guards a common table of named tags. Neither showed up as hotspots in profiling.

Only the counter mutex seemed even remotely likely to be getting enough hits to reduce throughput by half, so I replaced it with an atomic fetch-and-increment instruction (the comment reads /* the sexy lockless method */). That worked, but…no joy as far as increased performance went.
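
The change was along these lines (a sketch, not the actual source): GCC’s __sync_fetch_and_add builtin, or equivalently C11 atomic_fetch_add(), replaces the lock/increment/unlock sequence.

    /* Before: a mutex-guarded counter, hit once per delta.
     *
     *   pthread_mutex_lock(&seq_mutex);
     *   id = blob_id++;
     *   pthread_mutex_unlock(&seq_mutex);
     *
     * After: a single atomic fetch-and-increment, no lock at all. */
    static unsigned long blob_id;

    static unsigned long next_blob_id(void)
    {
        return __sync_fetch_and_add(&blob_id, 1);
    }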

(Yes, I know, I could be replacing the guard mutex with an equally sexy lockless read-copy-update operation on the output linked list. Not worth it; there’s only one of these per master file, and at their fastest they’re on the order of 50 milliseconds apart).

Time to seriously profile. David had clued me in about Linux perf and I knew about gprof of old; armed with those tools, I went in and tried to figure out where the hell all those cycles were going in the threaded version. Some things stood out…

One: Mutex calls were not – repeat not – showing up in the top-ten lists.

Two: The first stage, which should have been sped up by multithreading, was running a factor of two slower than the straight sequential one-master-at-a-time version.

Three: Stage two, the merge magic, was running about the same speed as before.

Four: Here’s where it gets not just puzzling but outright weird. Stage three, the report generator (as far as it’s possible to get from any mutex and still be in the same program), was running two to four times slower, consistently.

Very, very strange.

I noticed that the threaded version has a much bigger memory footprint. That’s as expected, it’s holding data for multiple masters at the same time. Could I be seeing heavy swap overhead?

Turns out not. I profiled on a much smaller repo, enough smaller for the working set to fit in physical core without swapping. Same strange statistics – Stage 1 slower, stage 3 much slower than the unthreaded code. Factors of about 2 and 4 respectively, same as for the larger repo.

(For those of you curious, the large repo is groff – 2300 deltas. The small one is robotfindskitten, 269 deltas. These are my standard benchmark repos.)

Now I’m fresh out of ideas. But I have noticed something separately interesting. According to the profiles, both of them are spending upwards of 40% of their time in one small routine, a copy operation in the delta-assembly code. Having no better plan, I decide to try to flatten this hot spot. Maybe something will shake loose?

I start putting in const and restrict declarations. Replace some array indexing with pointers. I normally don’t bother with this kind of micro-optimization; usually, when you have a speed problem you need a better algorithm rather than microtweaks. But in this case I think I already have good algorithms – I’m trying to help the compiler optimizer do a better job of reducing them to fast machine instructions, and hoping for insight into the slowdown on the way.
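
Here’s the flavor of the change (schematic, not the actual routine): restrict-qualified pointers promise the compiler that source and destination don’t alias, so it can vectorize and reorder loads and stores more aggressively.

    #include <stddef.h>

    /* Before: indexed copy; the compiler has to assume dst and src
     * might alias, which inhibits optimization.
     *
     *   for (i = 0; i < n; i++)
     *       dst[i] = src[i];
     *
     * After: pointer walk over restrict-qualified arguments.
     * (Schematic, not the real delta-assembly routine.) */
    static void fast_copy(char *restrict dst, const char *restrict src, size_t n)
    {
        while (n-- > 0)
            *dst++ = *src++;
    }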

This works rather better than I expected. A few hours of work later, Stage 1 is running at rough speed parity in threaded and unthreaded versions. It ought to be faster, but at least the threaded version is no longer absurdly slow.

Stage 3 of threaded execution is still absurdly slow.

And that’s where things stand now. I wrote this partly to clarify my own thoughts and partly to invite comment from people with more experience optimizing threaded code than I have.

Does anything jump out of this fact pattern and say boo? If you were in my shoes, what forensics would you try?

Code is here: https://gitorious.org/cvs-fast-export/

Benchmark repos can be fetched with cvssync as follows:

cvssync cvs.sourceforge.net:/cvsroot/rfk robotfindskitten

cvssync anonymous@cvs.savannah.gnu.org:/sources/groff groff

67 comments

    1. >I didn’t notice the term “cache miss” in your exposition…

      It’s a reasonable suspicion that some sort of cache phenomenon might be involved, but I don’t know how to investigate that.

  1. Can you elaborate on why you wrote a scheduler? Normally the goal of multiple threads is to let the OS do the scheduling for you.

    Wild guesses: If all the threads are roughly-equal peers, come up with some independent way to measure which are going fast and which are going slow, and where. Like, main loop iterations per second. If some threads are higher-priority than other threads, make sure you understand the definition of “priority inversion”.

    But I normally take the advice of TAOUP and avoid threads whenever possible.

    1. >Can you elaborate on why you wrote a scheduler?

      Well, it’s only a scheduler in the sense of dispatching jobs to pool threads and collecting the results. It doesn’t actually do its own timeslicing or prioritizing. Perhaps “dispatcher” would have been a better term.

  2. I’ve never tried threading, never intend to. But I heard in a very interesting lecture that the key to multi-threaded performance is cache misses. Or rather the lack of them. Memory is a few magnitudes slower than the processor cores. So it’s recommended that each thread works with a limited size chunk of memory that fits into that core’s cache, and with nothing else.

    I suspect that your copy routine breaks this rule and that’s why it’s so slow.

  3. Now, it’s been awhile since I’ve done anything at this level, but to elucidate further:

    First off, if your working set is large enough to swap, it’s certainly large enough to blow the cache. It’s telling that this is happening with copies. There are often prefetches happening with caches:

    https://software.intel.com/en-us/articles/optimizing-application-performance-on-intel-coret-microarchitecture-using-hardware-implemented-prefetchers

    Also, each processor will typically have its own first level cache. If you have some shared data structure that is constantly being written by a thread and read by other threads, the reading threads will block until their local copy gets updated.

    It’s even worse if the data is written by multiple threads.

    Also FWIW, some hyperthreaded processors share some internal resources. So you might have 4 cores with 2 threads each.

    If you can run four threads, one on each of the four cores, and are not IO-, swap-, or cache-bound, you could expect a fairly linear improvement.

    But as soon as you run 8 threads, you will not get a linear performance improvement even if you are CPU-bound, because of the shared resources.

    So when you add that in to the caching/prefetch issues, once you organize your data to try to minimize multiple processors touching the same data, and once you add code to your program to reduce/remove prefetches (especially for whole cache lines you are only going to write, if your architecture lets you do that), then you will probably still find that the optimal number of threads is less than the theoretical maximum supported by the hardware.

    1. >you will probably still find that the optimal number of threads is less than the theoretical maximum supported by the hardware.

      The problem shows up clearly with 2 threads. Of course, I only have a dual-core chip in this desktop machine.

      The curious part is that the biggest slowdown isn’t in the threaded section.

  4. Building on the idea you mentioned of swapping being an issue, what’s the typical size of the data set this copy operation is working on? Are the threads bumping each other out of L3 cache?

    1. >Building on the idea you mentioned of swapping being an issue, what’s the typical size of the data set this copy operation is working on?

      It varies a great deal, enough that the concept of a “typical” size doesn’t really exist. For each delta of each master, the size of the working store will be dominated by one entire file copy (the snapshot for the previous revision) and the patch to it that is the content of this delta. The overhead of the edit context structure is not much – 3.4K.

  5. >> I didn’t notice the term “cache miss” in your exposition…

    > It’s a reasonable suspicion that some sort of cache phenomenon might be involved, but I don’t know how to investigate that.

    AFAIK perf (and other tools using hardware counters, like HPCToolkit) can report cache misses, if requested.

  6. What if you put a mutex around the slow copy operation? Presumably the new mutex itself wouldn’t take up much time, because everything is spending 40% of its time actually copying, not starting and stopping little copies.

    But with the new mutex, each thread that wanted to copy would no longer be contending with other threads also wanting to copy.

    1. >What if you put a mutex around the slow copy operation?

      It’s complicated. The copy isn’t actually a straight copy; it’s a delta application, like patch does. And, it being the most compute-intensive part of the data flow, if I’m going to mutex-lock it I might as well give up and stay single-process.

  7. > everything is spending 40% of its time actually copying, not starting and stopping little copies

    Or, if each thread actually does zillions of little copies, have one thread grab the mutex at the next level up, until it does, say, a million copies, then give up the mutex to the next guy.

  8. Do I read correctly that the inner loop of analyze_masters(), the while (fn_head) part, constantly creates new N threads, and gives them a small snippet of work to do? That’s not canonical – instead, how about creating N threads once, and let them live/work a long time, racing/walking down the fn* list?

    1. >That’s not canonical – instead, how about creating N threads once, and let them live/work a long time, racing/walking down the fn* list?

      Thank you, that is actionable advice.

  9. @esr:

    > some sort of cache phenomenon might be involved, but I don’t know how to investigate that

    I second what Shawn Yarbrough wrote — if things are configured correctly, I think perf gives cache statistics:

    http://stackoverflow.com/questions/14674463/why-doesnt-perf-report-cache-misses

    The simplest check would be total cache misses for an execution when running single-threaded, vs. total cache misses when running multi-threaded.

    > The curious part is that the biggest slowdown isn’t in the threaded section.

    That’s interesting. If you’re not maintaining per-thread sub-heaps, then it may be that you have trashed data locality enough to affect the cache and/or even paging.

  10. cachegrind is very slow but models cache effects well.

    I second Frank’s comment – if you are constantly creating new threads, you are doing it wrong. Thread creation is not free or cheap; you probably want to create them near startup and dispatch work to a pool of them.

  11. Valgrind’s callgrind and cachegrind tools can be used to simulate cache behavior. My understanding is that callgrind is a strict superset of cachegrind. Cachegrind only does cache profiling, whereas callgrind can also count instruction fetches.

    You can set whatever cache size and associativity you want to emulate. The defaults will be populated from your CPU.

    Note that this causes emulation of the cache in software, which can slow your program down by 100x or more.

    In general, callgrind is an awesome benchmarking tool for purely CPU-bound programs, but because it doesn’t slow down I/O proportionally to the way it slows down everything else, it can underrepresent how much of your overhead is due to I/O.

  12. If you could refactor to avoid copies, it might speed things up – do you alter the data, so you need to copy first, or could you just use a pointer/length or something and just use that? I’m also thinking about mmap-ing the input and maybe the output.

    Multithreading is good for compute intensive but not good for data movement. Think of the cpu as a heart – the veins and arteries limit the efficiency of the pump.

    Side story – I had a C program that would overheat the processor only under Linux. Not Windows. Turns out the data that I was brute-force compressing fit inside the cache, but the Windows tick would take too much of it, so data had to be recached. Linux would load and lock. And get hot.

    So the processor’s actual cycles or temperature might show something. If the CPU is mostly starved, threads will make it worse.

    Another possibility is a different fracture line – thread(s) reads and scans but creates a copy list, and other thread(s) process it to create the output.

    >If you could refactor to avoid copies, it might speed things up – do you alter the data, so you need to copy first, or could you just use a pointer/length or something and just use that? I’m also thinking about mmap-ing the input and maybe the output.

      All three of these tricks are already in use. Keith Packard wrote in two of them, and David Leonard added mmap for input.

  13. That the code that is not multithreaded has slowed down is the tell.

    Obviously, you have trashed locality.

    When the inputs to that stage were generated by single threaded code, the data was localized.

    When generated by two threads, unrelated material in the generated inputs was interwoven.

    1. >That the code that is not multithreaded has slowed down is the tell.

      You are the first to actively zero in on that, which is to me the most interesting feature of the whole mess.

      Your theory is plausible, and reinforces very sensible suggestions by others that I should figure out how to profile cache misses. It almost has to be something like cache or swap issues, since I’m very confident that the code is sound. I only added about 60 lines to put in thread dispatching; the rest is extremely well tested.

      Supposing it’s true, how much gain does your experience suggest I can get from swapping my long-in-the-tooth dual-core/4GB-memory desktop box for one with more cores and a buttload of RAM? (I’ve been thinking about this anyway; it would speed up my tests of Emacs repository conversion.)

  14. When you are grinding @#$%^& big datasets, locality is everything, even though you are grinding them with layer after layer of abstractions designed to hide locality issues from the programmer.

    Indeed, because you are grinding them with layer after layer of abstractions designed to hide locality issues from the programmer, locality is everything, since lack of locality makes each of these abstractions do extra work under the covers.

  15. You have umpteen layers of caching, each of them hidden from you.

    For caching to work, the working set has to fit into the cache. For the working set to fit into the cache, local data has to be local. You have a pile of dags, which means you are endlessly dereferencing stuff. If the stuff referenced is generally near the dag, it will generally be in cache. If it is not, it will not.

    So what I think is happening is one thread is working over one set of dags, and the other thread working over another unrelated set of dags, and the outputs of the two threads wind up interleaved.

    Even worse, you might have a hundred threads, and two processors, so each processor is endlessly switching threads, and at each thread switch loses locality – and impairs locality of the storage pattern of the outputs.

    As in politics and real estate, it’s locality, locality, locality.

  16. (Considering the relatively pessimal use of threads, I wouldn’t even look at cache behavior yet. Make strace -eclone traffic go way down first.)

    1. >(Considering the relatively pessimal use of threads, I wouldn’t even look at cache behavior yet. Make strace -eclone traffic go way down first.)

      Yeah, I think you’re right. The first thing I’m going to do is rebuild it so thread allocation is done only once.

      I’m having a learning experience. It’s not hard, not really – I’ve done enough years of systems programming to know how to think about concurrency and resource management pretty effectively. But the tools in the box labeled “threads” have unfamiliar handles and slightly different affordances. They’re not comfortable in my hand yet.
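
      Roughly the shape of that change (a sketch only; apart from fn_head the names are placeholders): spawn the N workers once, then let each one pull the next unparsed master off the shared list until the list runs dry.

          #include <pthread.h>

          /* Sketch of the planned rework: N long-lived workers race down
           * the shared list of masters instead of a fresh thread being
           * spawned for every file.  Placeholder names, not the real code. */
          struct master_file {
              const char         *name;
              struct master_file *next;
          };
          extern struct master_file *fn_head;                /* list of masters to parse */
          extern void parse_one_master(struct master_file *job);

          static pthread_mutex_t work_mutex = PTHREAD_MUTEX_INITIALIZER;

          static void *worker(void *unused)
          {
              (void)unused;
              for (;;) {
                  struct master_file *job;

                  pthread_mutex_lock(&work_mutex);           /* claim the next master */
                  job = fn_head;
                  if (job != NULL)
                      fn_head = job->next;
                  pthread_mutex_unlock(&work_mutex);

                  if (job == NULL)
                      return NULL;                           /* list exhausted; worker exits */
                  parse_one_master(job);                     /* all the real work, unlocked */
              }
          }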

  17. Nb. there might also be an issue of so-called “false sharing”.

    Why low-level pthreads and not higher-level OpenMP, or some library?

    1. >Why low-level pthreads and not higher-level OpenMP, or some library?

      Because until this moment I did not really know there were “higher-level” libraries. I’ll go read up on OpenMP.

  18. > if I’m going to mutex-lock it I might as well give up and stay single-process

    If you have something like idealized threads that are 60% cleanly-parallelizable, and 40% thrashing when competing for some resource(s), then mutex-locking the 40% should speed things up.

    Because the *worst* case for your whole process is that every other thread will block whenever one thread has the 40% locked. Better cases will have some threads continuing to work on the 60% section in parallel while one thread is chugging through the 40% section.

  19. @esr:

    > You are the first to actively zero in on [locality]

    Nah, I beat him by 45 minutes :-)

  20. James A. Donald seems to be providing solid advice. A few odds and ends I’ve picked up over the years:

    Did any of your data structures increase in size? Increasing the number of cache lines required to access your data can have drastic effects on performance. Packing data structures and using unions can (under certain circumstances) provide major performance gains.

    I/O is a finicky beast. There’s a whole knee-of-the-curve problem where you go from almost saturating the I/O subsystem and you’re getting great performance, to just past saturation and suddenly your latency goes through the roof and you have all sorts of contention issues.

    >Supposing it’s true, how much gain does your experience suggest I can get from swapping my long-in-the-tooth dual-core/4GB-memory desktop box for one with more cores

    More cores don’t necessarily have bigger caches. Likely they merely have more caches. More memory means more disk cache, but we don’t know to what extent your problem is disk cache.

    If more power makes stuff go faster, your algorithms and data structures are in good shape, and more power is then always a cheaper solution than efficiency. But all too often, it does not make stuff go faster. Applying power to the problem is apt to be subtle.

    Optimizing operations done on large data sets is an arcane and mysterious art.

    Map reduce is seldom as efficient as hand optimized solutions, but it is seldom a lot less efficient. The advantage of map reduce is that you will get a tolerably efficient solution without a lot of clever engineering for efficiency. If you are worried about efficiency on huge datasets, map reduce is the quickie low engineering solution. It is the easy way to throw lots of power at the problem.

    Unless, of course, map reduce involves a radical rewrite of existing code that is working correctly, in which case it is indeed an over engineered solution, and you are probably better off trying to optimize by making your representation of the data as local as possible.

  22. >>> This first stage seems like a good target for parallelization because the analysis of each master consists of lumps of I/O separated by irregular stretches of compute-intensive data-shuffling in core

    &&

    >>>That the code that is not multithreaded has slowed down is the tell.
    >>>Obviously, you have trashed locality.

    May need to get a bit closer to the hardware. As a storage and infrastructure guy, this sounds suspiciously like disk thrashing.

    Stage 3:
    It is the single-threaded report generator trying to read output interleaved onto disk by the threaded workers in the previous stage. Since it is “only” being slowed by 4x, the disk’s internal cache hit rate is probably covering for a lot of the problem, but you are still blocked by the bus speed.

    Stage 1:
    This one is tougher to call. Could be locality, disk thrashing, or a mixture of both.

    Have you run iostats on your /dev/sd* devices to see what your throughput and latencies are?

    {As a sidenote, even a mere dual-core will “run” 4-8 threads if it is an Intel HT (hyper-thread) capable chip. Don’t recall what AMD calls their version.}

    1. >As a storage and infrastructure guy, this sounds suspiciously like disk thrashing.

      That is a plausible variant of JAD’s theory. Alas, it suggests no fix other than “Don’t do that, then!”

  23. On a whim, I compiled e965a0991ec1fb58d25c52bb79b5b3c45d90dea3 on OS X (had to hack out all the clock_gettime stuff). Back when I contributed to libstdc++-v3 (mainly threading support, portability), it was fun to compare threading libraries and their quirks.

    Anyways, what I notice with the groff benchmark on this hardware (all times are as reported by time, the real column):

    non-thread: stages 1,2: 8 seconds; stage 3: 72 seconds
    -t 4: stages 1,2: 5.4 seconds; stage 3: 72 seconds
    -t 16: stages 1,2: 5.2 seconds; stage 3: 72 seconds

    stage 3, which dominates the time, isn’t even parallel. Can you write n files as output (and then assume the next process can cat them)?

    Regards,
    Loren

  24. > Don’t recall what AMD calls their version
    AMD has no equivalent to HyperThreading, but their Bulldozer chips have a curious arrangement where two integer cores (and by extension, two threads of execution) share a single floating-point unit. Bulldozer is known to be very good at multithreaded performance but not so good at anything else.

  25. @ esr

    This is probably terribly naive, but what the hell… I have already spent too much time thinking about this – investigating the code would take much longer. If this is irrelevant, so be it.

    If part-one threads are acquiring memory as they work via malloc, you could allocate a huge whack of memory for each thread and replace the calls to malloc with a function that uses the block. ’Course, that alone might slow the program down, but each thread would have good locality.

    AND, If the output of the threads could be… uh.. sorted such that what goes into the next step is ordered by threads…. wouldn’t that give the last step little excuse for running slower?

    1. >you could allocate a huge wack of memory for each thread and replace the calls to malloc with a function that uses the block.

      The program already has a mallopt(3) call that changes the granularity of malloc calls to get something close to this effect.
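
      It’s along these lines, called early in main() (illustrative; the exact parameter and value in the real source may differ):

          #include <malloc.h>

          int main(void)
          {
              /* Ask glibc malloc to pad its requests to the kernel, so it
               * grabs memory in big chunks instead of many small ones. */
              mallopt(M_TOP_PAD, 16 * 1024 * 1024);
              /* ... rest of the program ... */
              return 0;
          }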

  26. JAD wrote “So what I think is happening is one thread is working over one set of dags, and the other thread working over another unrelated set of dags, and the outputs of the two threads wind up interleaved.”

    As a test of interleaved data being the problem, would it be practical to sort the data, or re-copy it to restore the locality before embarking on stage 3?
    -George Bullis

    1. >As a test of interleaved data being the problem, would it be practical to sort the data, or re-copy it to restore the locality before embarking on stage 3?

      There’s already a sort phase for the in-memory structures in phase 3 – it’s to get all the gitspace commits lined up in date order. Nothing analogous to that for the blob files. I could copy them, I suppose, but that would probably just induce disk thrashing sooner…

  27. > Supposing it’s true, how much gain does your experience suggest I can get from swapping my long-in-the-tooth dual-core/4GB-memory desktop box for one with more cores and a buttload of RAM? (I’ve been thinking about this anyway; it would speed up my tests of Emacs repository conversion.)

    There are things you can do that are less drastic than replacing your desktop altogether.

    Have you considered getting an SSD? I imagine all these repository conversions are at least partly limited by disk I/O, and SSDs can make a big difference there. SSDs are also required in order to make multithreaded I/O viable (otherwise you’re spending all your time seeking between the 2+ files being read.) Just don’t ever put a swap partition on an SSD– it will be fast until you kill the SSD.

    Upping the RAM is also a no-brainer because it’s so cheap. I haven’t had a swap partition at all since 2010 and I don’t miss it at all. Be sure to use RAM sticks from the same kit (not just the same make and model, all 2 or 4 of them must be *from the same kit*) in order to allow the memory controller to enable multi-channel reads at the maximum possible speed.

    1. >Have you considered getting an SSD?

      Dave Taht gave me one last year. Installing it has been on my to-do list ever since, but I’m reluctant to mess with my hardware until things get to the point where I have to. If and when I upgrade it will go in.

  28. If you’re thrashing disks, you could manually adjust buffer sizes so that at least you only thrash at 4MB granularity instead of 4KB granularity.

    If you’re thrashing L1, L2 or L3 caches, similar tricks may or may not apply — at the very least, the choice of a better malloc implementation might help.

  29. >>>That is a plausible variant of JAD’s theory. Alas, it suggests no fix other than “Don’t do that, then!”

    If disk-thrashing is really the issue at hand, you have quite a few options, but we have to better characterize the problem first. As mentioned above disk buffer tuning is a quick test and easy fix if it works with SSD being the ultimate easy fix. But there are other solutions too.

    Would really love to see your iostat info, HD model, disk interface and fstype. Comparing single and multi-thread iostats will tell us if the disk is the problem. The rest of the info just helps quantify the options.

    Just a couple “duh” checks: 1) Assuming you ran vmstat to ensure you weren’t page-swapping to disk, and 2) played with your swappiness settings.

    And if you are ever curious to find out how bad disk thrashing can get, ping me. I have a few “while 1 {fork()}” for you to try out. Especially since your use case is sitting very close to one of them I ran into myself…

    1. >Would really love to see your iostat info, HD model, disk interface and fstype. Comparing single and multi-thread iostats will tell us if the disk is the problem. The rest of the info just helps quantify the options.

      >Just a couple “duh” checks: 1) Assuming you ran vmstat to ensure you weren’t page-swapping to disk, and 2) played with your swappiness settings.

      Have done neither of these things yet; I have several more optimizations to fold in first. Right now I’m working on eliminating the shared counter for the blobfile IDs, on the theory that anything reducing cache-line contention has got to be good for speeding up stage 1.
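
      One way that might go (a sketch only, and it assumes the IDs merely have to be unique rather than strictly sequential across threads): give each thread a private block of IDs and touch shared state only when the block runs out.

          /* Illustrative only: each worker gets a private block of blob IDs,
           * so the fast path touches no shared cache line; only a refill
           * goes through the shared counter. */
          #define ID_BLOCK 1024

          struct id_allocator {
              unsigned long next;      /* next ID in this thread's private block */
              unsigned long limit;     /* first ID past the block */
          };

          static unsigned long shared_hwm;   /* high-water mark, rarely touched */

          static unsigned long allocate_blob_id(struct id_allocator *a)
          {
              if (a->next == a->limit) {     /* block exhausted: refill */
                  a->next  = __sync_fetch_and_add(&shared_hwm, ID_BLOCK);
                  a->limit = a->next + ID_BLOCK;
              }
              return a->next++;
          }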

  30. @esr –

    >swapping my long-in-the-tooth dual-core/4GB-memory desktop box for one with more cores and a buttload of RAM? (I’ve been thinking about this anyway

    Check your email. Just sayin’….

    1. >Check your email. Just sayin’….

      Holy shit. John Bell took such pity on the antiquated and parlous state of my hardware that he dropped $100 in my tip jar. Public thanks is indicated. This is public thanks.

      Gonna have to upgrade now – can’t not do it after a gesture like that.

      To anybody who might feel like doing likewise, I figure $100 is close to the incremental cost of another processor core when you figure in the whole-system requirements around it (more RAM etc.). Help get Eric a hexacore machine – and stamp out CVS in your lifetime!

  31. @ esr

    You micro-tweaked the small function in step 1 and the results were much better than one would expect. Why? It suggests that there are changes you could make to the broader code in steps one and three that would help a lot.

    I would almost be tempted to start removing the micro-tweaks to find out which tweaks are helping out the compiler a little bit and which are improving the big problem.

    1. >You micro-tweaked the small function in step 1 and the results were much better than one would expect. Why?

      Code repeated tens of thousands of times in a tight inner loop. This is the one situation where classic micro-optimization, rather than looking for a better algorithm, is truly justified.

  32. >Holy shit. John Bell took such pity

    I’m going to reproduce my email to Eric here:

    Eric,

    /me is gobsmacked.

    Eric S. Raymond.

    *Eric* *S.* *Raymond*!

    Systems programmer extraordinaire, one of the leading lights of our craft, mentor to hackers all over the world –

    Is still programming on *only* a dual-core, 4Gbyte system?? In the 21st century??? Really??? Hell, my stupid little laptop is an AMD A8 Vision CPU (4 cores) and *12* Gbytes of RAM – and I spent less than $500 on it.

    /me tosses something into ESR’s tipjar.

    “Here’s a nickel, kid. Get yourself a better computer.”

    With respect (and a certain amount of sardonic amusement),

    – John D. Bell

    Note I was not looking for public thanks, but you’re welcome. It just seemed to me that all of us who cannot contribute at Eric’s level should contribute at the level we can….

    Just sayin’…..

    1. >I’m going to reproduce my email to Eric here:

      I’m glad you did that. Such artful and precisely targeted snark should not languish in obscurity.

  33. My thanks to Brian Marshall, who just donated $50 to the Help Stamp Out CVS In Your Lifetime Hardware Upgrade Fund.

    See, with these donations I can feel OK about splurging for extra RAM or whatever instead of pinching pennies until they squeak as I ordinarily do. That’s why I’m running 5- or 6-year-old hardware.

  34. If all you need is big iron to run CVS conversions on, you might approach the Oregon State University Open Source Lab about getting your own VM or something. These are the same guys who administer the 16-core monster I’ve been doing tests on. They have some serious infrastructure, and your use would be hilariously tiny compared to all the other stuff they do.

    (NOTE: I go to OSU, but have never done anything with the OSL, so I don’t know them enough to guarantee that they’ll actually go for it. But it might be worth asking.)

  35. Correct me if I’m wrong, but as I understand it, you’ve got the report generator running separately on each thread, processing the data handled by that thread, so the code that is running slow has multiple threads running simultaneously, but processing different data.

    This says that you’ve got contention for a shared resource that is not cleanly exposed. Top causes for that are memory allocation and cache usage.

    If each thread is doing a lot of memory allocation then it is contending on the allocator. An improved allocator like hoard or tcmalloc might therefore improve the performance.

    On the other hand, if each thread is processing a lot of data then you may just be contending for the cache. If the data that needs to be processed as a chunk by a given thread covers too many cache lines then the threads will cause each other’s data to be retired from the cache, causing a refetch and slowing things down.

    I’ll dig into the code and see if I can spot the problem.

    1. >Correct me if I’m wrong, but as I understand it, you’ve got the report generator running separately on each thread, processing the data handled by that thread, so the code that is running slow has multiple threads running simultaneously, but processing different data.

      No. Here’s how it goes:

      Stage 1, which is (optionally) multithreaded, parses large numbers of CVS masters into an in-core representation of their revision trees (one tree per master) and snapshots (one per revision of each master).

      Stage 2 is not multithreaded. It takes all the trees processed by stage 1 and does black magic that merges them into a unified DAG of changesets.

      Stage 3, not multithreaded, emits the unified DAG as a fast-export stream.

  36. So the difference between the multithreaded and single threaded versions is purely the order of allocation and location in memory of the data processed by stage 2 and stage 3, and possibly some IDs?

    And stage 3 is the bit that slows down in the multithreaded run?
    Very interesting!

  37. @esr –

    Max E. said:

    >If all you need is big iron to run CVS conversions on, you might approach the Oregon State University Open Source Lab

    IIRC, you (ESR) have access to the Debian PorterBox farm. Would it be useful to try running your code on such a wide variety of architectures? Perhaps it would help illuminate the caching issues.

    1. >IIRC, you (ESR) have access to the Debian PorterBox farm.

      I used to, but my password expired and my admin contact has been incommunicado for weeks.

  38. If you do have disk I/O, parallelizing that will have a negative impact unless you can either spread N threads over N disks or switch to a SSD that has no latency/seek delays.

    1. >If you do have disk I/O, parallelizing that will have a negative impact unless you can either spread N threads over N disks or switch to a SSD that has no latency/seek delays.

      Yes, though the degradation may not be bad if the group of masters has good physical locality.
