Bookend consistency

I’ve been thinking recently about writing a shared-memory export for gpsd. The JSON-over-sockets client interface we have is powerful and flexible, but it is more than is needed when network access to the server is not required. For embedded deployments in particular, it would be useful to have a lower-overhead way of shipping results to clients.

Consequently, I’ve been thinking about coherence techniques for shared memory. In this particular case, we have one writer (gpsd) and multiple readers (the application clients). Updates to the shared-memory segment are long enough that writes aren’t guaranteed to be atomic. It is permissible for a client to miss an update if it isn’t inspecting the segment frequently enough, but it is required that, after reading the segment, the client can always tell whether it got a coherent update (as opposed to having read the segment while a write was in progress).

The obvious way to ensure update coherence would be with a semaphore, but a technique that is non-blocking and wait-free would be preferable. I have invented a method I call “bookend consistency”. I present it here for public critique, partly because I’m curious whether any of my commenters can identify it with a known, published algorithm. It was inspired by a vague, distant memory of pioneering work by Butler Lampson on lock-free algorithms.

Suppose we have a structure, struct payload_t, containing our update. The technique begins by wrapping it in a structure that surrounds it with bookends, like this:

struct wrapper_t {
    int bookend1;              /* update counter: written last */
    struct payload_t payload;
    int bookend2;              /* update counter: written first */
};

The writer’s side of the technique is this: Initialize both bookends to zero. On each write, increment the bookends. Then copy the wrapper structure to the shared-memory segment byte-by-byte, starting with the last byte and backing down to the first.
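
For concreteness, here is a minimal sketch of the writer side in C. The names (publish(), staging) are illustrative, not gpsd’s actual code:

#include <stddef.h>

static struct wrapper_t staging;   /* process-local copy under construction */

void publish(volatile struct wrapper_t *seg, const struct payload_t *update)
{
    staging.bookend1++;
    staging.bookend2++;
    staging.payload = *update;

    /* Reverse-order byte copy: bookend2 reaches the segment before the
       payload does, and the payload before bookend1. */
    const char *src = (const char *)&staging;
    volatile char *dst = (volatile char *)seg;
    for (size_t i = sizeof(struct wrapper_t); i-- > 0; )
        dst[i] = src[i];
}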

The reader’s side is simpler: Copy the structure out of the shared-memory segment in normal (first-byte to last) order. Look at the bookends. If both have the same value, you have a coherent update. If they differ, you must back off and retry the read.
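
And the reader’s side, in the same illustrative style (stddef.h again supplies size_t):

/* Returns 0 on a coherent read, -1 if a write was in progress
   (caller should back off and retry). */
int fetch(const volatile struct wrapper_t *seg, struct payload_t *out)
{
    struct wrapper_t local;
    const volatile char *src = (const volatile char *)seg;
    char *dst = (char *)&local;
    size_t i;

    for (i = 0; i < sizeof(local); i++)   /* forward byte copy */
        dst[i] = src[i];

    if (local.bookend1 != local.bookend2)
        return -1;                        /* caught a write in progress */
    *out = local.payload;
    return 0;
}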

The reason this technique works is that if the shared-memory segment is in a mixed state (update in progress), the reversed-order write guarantees that bookend2 will be greater than bookend1 until the write is complete. The reader’s forward copy, on the other hand, guarantees that it can’t be fooled about the value of either bookend.

(Thomas Zerucha <tz@mich.com> corrected my original proposal by exhibiting a case where you get inconsistency if the write copy is not in reverse order.)

The technique is robust in the face of interrupts to the copy operations, but it depends on nothing in the software or hardware stack beneath the C code reordering the reads and writes in the copies. A good place to begin ensuring this property is to declare the pointer to the shared-memory segment ‘volatile’; this tells your compiler that locations accessed through it may change asynchronously, so it should not cache them or reorder operations on them.
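
For example, with a System V segment (a sketch; shmid is a hypothetical id obtained elsewhere via shmget(2)):

#include <sys/shm.h>

/* All access goes through a volatile-qualified pointer, so the compiler
   won’t cache the segment’s contents or reorder accesses through it. */
volatile struct wrapper_t *seg =
    (volatile struct wrapper_t *)shmat(shmid, NULL, 0);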

There are issues with what the underlying hardware might be doing, however. There are two levels to worry about: instruction reordering in the processor and strange memory-controller optimizations. The latter we can probably forestall by putting memory barriers before and after the copy loops.
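
Something like the following, with the actual instruction depending on the target (x86 shown; this is an assumption about the toolchain, not tested gpsd code):

/* GCC-style full barrier; ARM would want "dmb" rather than "mfence".
   The "memory" clobber also stops the compiler itself from migrating
   loads and stores across the barrier. */
#define MEM_BARRIER() __asm__ __volatile__("mfence" ::: "memory")

/* usage: MEM_BARRIER(); ...copy loop...; MEM_BARRIER(); */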

One thing not to worry about is multiprocessor cache coherence. MP systems pretty much have to guarantee interprocessor cache coherency, otherwise common instruction sequences like writes to memory-mapped devices would have undefined behavior. (For the same reason, MP systems have to limit instruction reordering.)

If you use memcpy(3) for the forward copy, any modern compiler is likely to optimize it to a single-instruction copy or an instruction sequence like x86 REP MOVS that won’t be reordered. The point of maximum vulnerability will be the reverse copy. The iffiest case would be a single-processor system doing really aggressive instruction reordering that an MP can’t risk. I doubt this is a practical problem, however, as byte-copy loops are so dead-simple that they’re unlikely targets for reordering.

94 comments

  1. There’s no need to copy every word in reverse order; only to copy the fields in reverse order. Also, in gcc you can use “asm volatile” to forbid the optimizer from reordering instructions across that statement. So, your write code can look like this:

    wrapper->bookend2++;
    asm volatile("sfence"); /* For x86. Beware gcc's __sync_synchronize(), which was completely ineffectual on most targets last I checked */
    memcpy(&wrapper->payload, newpayload, sizeof(struct payload_t));
    asm volatile("sfence");
    wrapper->bookend1++;
    
    1. >There’s no need to copy every word in reverse order; only to copy the fields in reverse order.

      True, but I’d prefer not to have to change the code every time the structure’s field inventory changes.

      UPDATE: Oh, I misunderstood. By “fields”, you meant the three wrapper fields. Hm. You may be right; that would certainly make for a smaller reordering target, since the memcpy is likely to optimize to something like REP MOVS.

  2. How is this different than reading the bookends, checking their consistency and then not reading the data if they are different? And isn’t that the same as a semaphore, but one just broken up into two pieces?

    1. >How is this different than reading the bookends, checking their consistency and then not reading the data if they are different?

      Remember that the shared-memory segment may get updated asynchronously. If you read the bookends and then go back and read the data, the payload could be in an inconsistent state when you go back depending on the timing of the writes.

      >And isn’t that the same as a semaphore, but one just broken up into two pieces?

      In the most general sense, yes. But semaphores are usually associated with synchronization protocols that block and wait. This one doesn’t.

  3. By the fields, I mean the payload and the two bookends. I can’t think of any reason why you can’t copy payload using memcpy. Do you see anything wrong with the code I posted?

    1. >By the fields, I mean the payload and the two bookends. I can’t think of any reason why you can’t copy payload using memcpy. Do you see anything wrong with the code I posted?

      Right, I caught up with that a few seconds later and our comments crossed.

      Yeah, I think that’ll work. It’s likely to optimize to something like 4 instructions (excluding the sfence) – word transfer, REP MOVS, word transfer. My only worry is that sequence might be a reordering target when a single reverse-copy loop spanning the whole structure wouldn’t be.

  4. I don’t know the name for it, but I do know the Linux kernel uses something similar for objects that are read so often that writers wouldn’t ever get access under a classic read-write lock. I believe the kernel solution uses one version tag that is read twice, instead of two version tags.

  5. “One thing not to worry about is multiprocessor cache coherence.”

    If this were so, the linux kernel wouldn’t need all those memory barriers. (RAM and I/O are different.) In practice, you’d have to use atomic operations to do the increments and/or the bytewise copies. And if you do atomic/locked bytewise copies into and out of the buffer, you might as well use a lock, it probably wouldn’t be slower.

    See also seqlocks in linux.

    1. >If this were so, the linux kernel wouldn’t need all those memory barriers.

      Aren’t those normally associated with DMA rather than asynchronous but in-memory operations, though?

      I have no objections to a few memory barriers, though. I pointed out in the OP that they’re a defense against perversity by the memory controller.

  6. > Do you see anything wrong with the code I posted?

    Besides relying on GCC-specific behavior?

    Actually, makes me wonder: esr, how much of the code you work on is built specifically against gcc in some way?

    1. >Actually, makes me wonder: esr, how much of the code you work on is built specifically against gcc in some way?

      I try to avoid such dependencies as a matter of ingrained habit. I learned my chops in a time when there was a lot more diversity in compilers and target hardware than there is now, with correspondingly greater pressure to stick to portable constructs. Care about portability is a mindset that has stuck with me.

      That said, I may have unconscious habits that are GCC-dependent simply because it’s almost all I’ve been using for at least fifteen years. Hard to know, really; somebody would have to audit my code with that in mind.

  7. David:
    If an update started just as you checked the second bookend, you could get inconsistent results.

    I wonder, though… unless you have a fast memcpy-like operation that is guaranteed to run in forward/reverse order, do you just make do with one bookend?

    Write:
    Increment bookend1
    Write Data
    Increment bookend1

    Read:
    Read bookend1 (abort if odd)
    Read Data
    Read bookend1 (abort if not matching first read)
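
    An illustrative C sketch of this single-counter scheme (memory barriers omitted for brevity; an odd seq means a write is in flight):

    static unsigned seq;                 /* the single bookend */
    static struct payload_t data;

    void write_update(const struct payload_t *p)
    {
        seq++;                           /* now odd: write in progress */
        data = *p;
        seq++;                           /* even again: consistent */
    }

    void read_update(struct payload_t *out)
    {
        unsigned s1, s2;
        do {
            s1 = seq;                    /* abort this pass if odd */
            *out = data;
            s2 = seq;
        } while ((s1 & 1) || s1 != s2);  /* accept only matching even values */
    }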

  8. > My only worry is that sequence might be a reordering target when a single reverse-copy loop spanning the whole structure wouldn’t be.

    If so, then it’d be a bug in gcc’s optimizer (not that that’s a rarity). The fact that I said “asm volatile” is supposed to prevent that.

  9. @MikeE, that’s not what I assume would be happening. In order to work correctly, I believe that the algorithm has to be:

    1) Copy shared memory structure into local memory
    2) Validate consistency of local copy
    3) Goto (1) if local copy is inconsistent

    The question is whether the cost of the local copy is greater than the cost of a traditional lock.

  10. I don’t recall having seen this pattern before, but the same technique immediately came to mind after the second paragraph; it seems to be a space equivalent of filesystem journaling (where the need for consistency is across a linear timeline).

    Please do explain Thomas’s inconsistency case. I can think of the tortoise-and-hare case where the client starts reading after gpsd but then reads fast enough to overtake it mid-update (and then falls back behind enough to see the same upper bookend); is there another, or is the pitfall more subtle than this scenario?

    ESR says: Either that or a very similar case. I’d have to dig into the gpsd-dev archives to be sure.

  11. By placing the bookends every other word, and doing the write in true reverse order, you fix the tortoise and the hare problem at notable expense.

    struct wrapper_t {
        int write_protect;
        struct payload_t payload;
    };

    unsigned word_buffer[sizeof(struct wrapper_t) * 2];

    Writer:
    increment write_protect
    fill word buffer with write_protect
    copy wrapped data into every other word of word buffer starting with the first word
    reverse copy word buffer into shared memory

    Reader:
    repeat
    forward copy the word buffer from shared memory
    until every other word in the word buffer matches
    copy every other word from the word buffer into the wrapped data

    Would making the bookends a CRC and notting it twice be better?

    struct wrapper_t {
        int crc;
        struct payload_t payload;
    };

    Writer:
    Calculate CRC
    Not the CRC
    Copy the entire structure
    Not the CRC

    Reader:
    repeat
    Copy the entire structure
    Calculate the CRC
    until the CRC matches

    Yours,
    Tom

  12. Thinko in paragraph 7. Should read: “the reversed-order write guarantees that bookend2 will be greater than bookend1 until the write is complete”

    The algorithm looks sound enough though. As a hacker, I guess you’re morally obligated to invent new techniques whenever possible, but personally I’d probably just use a semaphore.

  13. > If you use memcpy(3) for the forward copy, any modern compiler is likely to optimize it to a single-instruction copy or an instruction sequence like x86 REP MOVS that won’t be reordered.

    I just checked this out empirically using the following code and gcc-4.2.4:

    #include <stdio.h>
    #include <string.h>
    
    #define N 16
    
    int main(int argc, char *argv[]) {
            int src[N], dst[N];
            int i;
            FILE *bitbucket = fopen("/dev/null", "w");
    
            for(i=0;i<N;i++) src[i] = i + argc;
            memcpy(dst, src, sizeof src);
            fwrite(dst, sizeof (int), N, bitbucket);
            return 0;
    }
    

    The busywork involving argc and writing to /dev/null is so that the resulting data flow analysis doesn’t allow the compiler to optimize the memcpy out entirely.

    The result of this, using -O2 or -O3, is that if I set N so that the size of the array exceeds 16 machine words (so, N>16 for x86, N>32 for AMD64), then the generated ASM calls glibc’s memcpy() function. For smaller N, it generates a bunch of either movl or movq instructions (for x86 and AMD64 respectively), going from the lowest memory address to the highest.

  14. @esr:

    I present it here for public critique, also because I’m curious whether any of my commenters can identify it with a known, published algorithm.

    Don’t know if anyone else mentioned it, but Stephen Hemminger and Andrea Arcangeli already beat you to it. See seqlock, a shared-memory technique used in the Linux kernel.

  15. Since performance is the point, I suppose putting a strong checksum in the data structure is out of the question…

    Daniel:
    Careful there, though. As I understand it, memcpy isn’t guaranteed to do anything other than run as fast as possible. This broke Flash on some Linuxes a while back (there was an LWN article about it) when glibc’s memcpy switched from running in one direction to the other.

    1. >Since performance is the point, I suppose putting a strong checksum in the data structure is out of the question…

      I thought of that; GPSD already has several checksum implementations in it, ranging in strength from CRC-256 up to a rather fearsome Qualcomm 24-bit polynomial checksum. That would give a better integrity guarantee, but, as you say, at a performance cost. I may go there anyway, as it would bulletproof things pretty comprehensively regardless of what the compiler optimizer and hardware get up to.

  16. @MikeE

    I also thought of the single field interpreted such that “odd” = “write in progress: remainder of this memory segment may be incoherent”. You could do it as a single byte and let the arithmetic wrap mod 256, or simply use “0” for “clean” and any nonzero for “dirty”.

    And yes, this is basically setting aside part of the shared memory itself to hold a “semaphore”. Or a spinlock.

  17. @Monster:

    The very helpful link from Morgan G. seems to indicate seqlocks are basically the single-bookend algorithm, plus a standard lock to support multiple writers; you do need the number to increment because you could have multiple writes completing before you finish reading, and you need to detect that case. (We also assume you can’t wrap the counter all the way around to the same value before you finish reading, but presumably that’s not a high risk.)

  18. > or simply use “0” for “clean” and any nonzero for “dirty”.

    This approach produces a dirty read given the following sequence:

    1. Reader checks dirty bit and gets 0.
    2. Reader begins reading.
    3. Writer sets dirty bit to 1.
    4. Writer writes.
    5. Writer sets dirty bit to 0.
    6. Reader finishes reading.
    7. Reader checks dirty bit and gets 0, wrongly concluding that the read was consistent.

    You really do need the incrementing counter, not just a single bit, and the reader needs to check that it sees the same even value at both the start and finish.

  19. Whew.
    1. Shared memory is tricky; I tend to avoid it unless I have *proof* that I need it. What is it about socket comms that you want to optimize away?
    2. Intel’s got an unusually docile memory model (due to its way-backwards compatibility orientation). Arguments that something seems to work on Intel only matter if all you care about is Intel. Do you want this to work on ARM? Sparc? Some not-yet-invented-but-standard-conforming hardware?
    3. There’s a big gap between “what the standards guarantee will work” and “what works in practice because hardware gets tuned to not break old software too much”. That’s particularly blatantly true WRT memory barriers and their behavior. Are you aiming for standard-guaranteed or works-in-practice-where-it-matters? (And where is that?)
    4. For my money, you do want a checksum in your memory records, even if you’re *sure* everything ought to work without them. Without that, your error reports will read “gpsd mysterious failure on alternate Wednesdays.” With it, they’ll read “gpsd shared memory comms corruption (every Tuesday :-).” Even if the breakage is 100% the OS’s fault, you want to know which layer broke.

    Cheers
    — perry

    1. >1. Shared memory is tricky; I tend to avoid it unless I have *proof* that I need it. What is it about socket comms that you want to optimize away?

      Code space. The JSON reporting and interpreting isn’t huge, but it costs about 90K. The use-case for a shared-memory exporter is very constrained and power-limited devices like a Chumby, where what you want to do is strip away everything but the NMEA support and a minimal dispatcher layer.

      >2. Intel’s got an unusually docile memory model (due to its way-backwards compatibility orientation). Arguments that something seems to work on Intel only matter if all you care about is Intel. Do you want this to work on ARM? Sparc? Some not-yet-invented-but-standard-conforming hardware?

      I think, realistically, that Intel + ARM pretty much covers it. Unless you know of any other Linux-hosting processors in real-world use on embedded systems.

      >3. […] Are you aiming for standard-guaranteed or works-in-practice-where-it-matters? (And where is that?)

      I’d say “works-in-practice-where-it-matters”, but nav systems are potentially life-critical so the GPSD project has quite a high standard for “works”.

      >4. For my money, you do want a checksum in your memory records,

      As I noted to a previous commenter, I had thought of that and may go there yet.

  20. Seqlocks is what I was thinking (thank you Frank Ch. Eigler and Morgan Greywolf). And it turns out that those have write locks (to synchronize multiple writers), so it’s not quite the same thing.

  21. Here’s the linux kernel implementation of seqlocks:

    /* Lock out other writers and update the count.
     * Acts like a normal spin_lock/unlock.
     * Don't need preempt_disable() because that is in the spin_lock already.
     */
    static inline void write_seqlock(seqlock_t *sl)
    {
            spin_lock(&sl->lock);
            ++sl->sequence;
            smp_wmb();
    }
    
    static inline void write_sequnlock(seqlock_t *sl)
    {
            smp_wmb();
            sl->sequence++;
            spin_unlock(&sl->lock);
    }
    

    And here’s the code around smp_wmb():

    /*
     * Force strict CPU ordering.
     * And yes, this is required on UP too when we're talking
     * to devices.
     *
     * For now, "wmb()" doesn't actually do anything, as all
     * Intel CPU's follow what Intel calls a *Processor Order*,
     * in which all writes are seen in the program order even
     * outside the CPU.
     *
     * I expect future Intel CPU's to have a weaker ordering,
     * but I'd also expect them to finally get their act together
     * and add some real memory barriers if so.
     *
     * Some non intel clones support out of order store. wmb() ceases to be a
     * nop for these.
     */
     
    
    #define mb() alternative("lock; addl $0,0(%%esp)", "mfence", X86_FEATURE_XMM2)
    #define rmb() alternative("lock; addl $0,0(%%esp)", "lfence", X86_FEATURE_XMM2)
    #define wmb() alternative("lock; addl $0,0(%%esp)", "sfence", X86_FEATURE_XMM)
    

    alternative() wraps the supplied instructions in “asm volatile” and some assembler directives to handle the choice of alternative.

    So in summary, it looks like:

    1. My code in comment #1 is correct.

    2. On the current generation of Intel processors, you’ll get away with almost anything.

  22. > I think, realistically, that Intel + ARM pretty much covers it. Unless you know of any other Linux-hosting processors in real-world use on embedded systems.

    AVR32, maybe?

  23. > I think, realistically, that Intel + ARM pretty much covers it. Unless you know of any other Linux-hosting processors in real-world use on embedded systems.

    Consider MIPS. They are frequently integrated into FPGAs/ASICs (purchased as an IP license rather than direct hardware), so that they can cut the number of ICs on board and still get full functionality (think MIPS + MPEG video encoder).

    Eric – for this problem, what is the lifecycle that you’re looking at dealing with, and how much data (bytes/words) are you looking at having for the payload, approximately? Given an expected system, how often do you expect to write new data to the data structure? How often do you expect clients to be reading from it?

    If you want something where the payload is pretty small and all accessors are going to simply copy it wholesale out, the proposed seqlock looks like it will work. If there are very common operations which read only a very small subset of the data, you might want to provide an accessor method which will just copy those out. However, if you have a system where there will be relatively few readers and few writers and all that is going to happen is the data will be copied out directly, using spinlocks, reader-writer locks or another well-defined and understood primitive might be the best approach. I don’t say best because it will use the fewest cycles, but because it is visually easy to see that it is demonstrably correct and is immune to a lot of extra side-effects, OS, compiler and hardware peculiarities. Premature optimization and all.

    1. >Eric – for this problem, what is the lifecycle that you’re looking at dealing with, and how much data (bytes/words) are you looking at having for the payload, approximately?

      Updates approximately once a second. Update size is just shy of 8K.

      I’m now leaning towards a brute-force approach – 24-bit checksum, and just toss the update if it doesn’t match.

  24. I believe this algo is flawed. Consider the case of a reader R and a writer W, and for simplicity let’s say the payload is two ints. So we have

    {int book1; struct payload {int a; int b;} payload; int book2;}

    For simplicity, let’s say W writes the two payload ints to be the bookend values: 1, 2, 3, 4, etc.

    R begins reading the structure. It is:

    { 0, {0, 0}, 0 }

    He compares the two bookends by something like

    Load Bookend1 to registerA
    Subtract Bookend2 from registerB
    Jump LT to wherever

    However, an interrupt takes place immediately after the first load instruction, so he sees bookend1 as value zero.

    Now W resumes, and writes a new payload, incrementing to 1.

    {1, {1, 1}, 1}

    Then it keeps doing this

    {2, {2, 2}, 2}
    {3, {3, 3}, 3}
    etc.

    Until it gets to bookend 0xFFFFFFFF (which I will call -1 to save typing)

    {-1, {-1, -1}, -1}

    Now it is writing again, but this time it is interrupted in the middle, so that it has updated bookend2 to 0, but bookend1 is still 0xFFFFFFFF, and it has written half the payload (the second int):

    {-1, {-1, 0}, 0}

    Now R resumes. He continues, thinking that bookend1 (which W hasn’t written yet) is still zero, and he sees that bookend2 is zero, so he is happy, and reads the half-written payload as valid. The payload should be {0, 0}, but he sees it as {-1, 0}.

    Many processors have atomic memory-to-memory compares, but many embedded processors do not, and they are your stated target.

    1. >I believe this algo is flawed.

      You are correct. If the bookend counter wraps around it can lead to a false positive. But no reader is ever going to be stalled for 2^32 updates, let alone 2^64. Remember, the updates occur once per second – but let’s say we’re working with survey-grade GPSes that report 10 times per second. The reader would have to stall for 2^32 / 10 = 429,496,729 sec … which is over 13 years!

  25. Sorry, my mistake – the reader algorithm is:

    Load Bookend1 to registerA
    Subtract Bookend2 from registerA (not registerB as I wrote above)
    Jump LT to wherever

  26. On second thought, nevermind. That doesn’t achieve anything more than the checksums alone. But you can still do both seqlocks and checksums, independently of each other.

  27. Garrett just beat me to suggesting MIPS. Most of the embedded work I’ve done is OpenWRT-type integration, and all of the SoCs I’ve seen in that space are MIPS-based. It wouldn’t surprise me one bit if somebody’s already using gpsd on one for Google-style wardriving.

  28. @Jessica

    The same objection applies to any version key with wraparound (DNS comes immediately to mind), but with a 32-bit version, even a 1kHz update rate would take the infamous 49 days to roll over, and if a reader’s waiting that long, something else is broken.

  29. # esr Says:
    > The reader would have to stall for 2^32 / 10 = 429496729sec…which is over 13 years!

    So your argument is that the bug is unlikely to show up? Perhaps you’re right, but I would respond with three points.

    1. It is not a general purpose algorithm (though I don’t suppose you claimed it was)
    2. In my experience, unlikely bugs always show up (usually at the worst possible time), especially in multi threaded/multi process code.
    3. Bugs of this nature are almost impossible to find and fix.

    But perhaps it is adequate for your needs.

    1. >So your argument is that the bug is unlikely to show up?

      The reader would have to stall for more than 13 years continuously while the writer delivered samples every tenth of a second. Remember, we’re not random-sampling the tick-value space here; if the bookend seen by the reader is N, it has to be that long until N can occur again.

  30. FWIW, I think there is actually a similar algorithm with the same failures.

    static struct data { int version; T payload; } v;

    WritePayload(T p) { int old = v.version; v.version = 0; v.payload = p; v.version = old++; }

    ReadPayload(struct data* p) { do { *p = v; } while (p->version == 0 || p->version != v.version); }

    The above doesn’t deal with rollover properly, but it is too hard to type code in wp and get it to format right. You need something there because of the special meaning of zero (it means writer is writing.) It is also a busy wait, which may or may not be good (and can readily be fixed with an appropriate yield.)

    BTW, in my experience ++ is not generally an atomic operation, but this is, I believe, robust to that.

    Not saying this is better, just offering it for consideration. I might be missing something subtle (or something obvious.)

  31. Shoot. Substitute:

    WritePayload(T p) { int old = v.version; v.version = 0; v.payload = p; v.version = old+1; }

  32. BTW, in my experience ++ is not generally an atomic operation, but this is, I believe, robust to that.

    \begin{pedantry}

    It’s atomic; it’s just not immediately consistent and doesn’t have any strong ordering guarantees. So if one thread increments i from 1 to 2, and then increments j from 1 to 2, it’s possible that another thread will see a moment where j=2 and i=1. But it’s still atomic, in the sense that there’s no danger that any thread will see i=3 because the increment operation has set the 2^1 bit but hasn’t cleared the 2^0 bit yet.

    \end{pedantry}

  33. Daniel Franke Says:
    > It’s atomic; it’s just not immediately consistent and doesn’t have any strong ordering guarantees.

    When I wrote that I was thinking of a specific bug that I had created in the past, in an attempt to solve a similar problem Eric raised here. In particular, I thought this would work:

    int value = lockVariable++;
    if (value == 0) { do-exclusive-control-stuff; lockVariable--; }
    else { wait-around }

    In C and similar languages, lvar++ means both “return the value of lvar” and “increment its value”. I assumed that this would be atomic, that is, a thread could not interrupt between these two, and that is what I meant in my statement.

    I implemented a process interlock using this mechanism and it worked nearly all of the time. For those of you who write a lot of multi-threaded code you will know that “nearly all of the time” is about the worst thing you can possibly say about your code. (Which is why I made the comment about Eric’s algorithm, since it also works, nearly all of the time.)

    Anyway, I thought I would share my stupid mistakes with the rest of you.

  34. For a different possibility, I remember a problem something like this discussed in DDJ some years ago. The author called it something like either ‘spin buffers’ or ‘wheel something’. It’s good if you have one data producer and any number of consumers, but the consumers still have to contend for the data source. (No one seems to have addressed *that* problem in the comments so far.) It also requires that the consumers read the shared memory at about the same speed that the producer writes it.

    There are three buffers. There is also a pointer, set by the producer, that indicates which one contains the last, complete update. The producer writes to each buffer in turn, setting the pointer when writing is complete. Consumers check the pointer, see if it has changed, and if it has, read that buffer blindly, knowing that the producer is at least ‘one buffer away’, if it is updating.
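
    A sketch of that scheme in C (illustrative names; assumes an int store is atomic on the target, and inherits the same-speed caveat above):

    struct spin_area {
        struct payload_t buf[3];
        volatile int latest;             /* index of last completed buffer */
    };

    /* producer: cycles through the buffers, publishing each when done */
    void produce(struct spin_area *a, const struct payload_t *p)
    {
        static int next = 0;
        a->buf[next] = *p;
        a->latest = next;                /* publish the completed buffer */
        next = (next + 1) % 3;
    }

    /* consumer: if the pointer moved, that buffer is at least one
       buffer away from the producer's current write target */
    int consume(const struct spin_area *a, struct payload_t *out, int *seen)
    {
        int idx = a->latest;
        if (idx == *seen)
            return 0;                    /* nothing new */
        *out = a->buf[idx];
        *seen = idx;
        return 1;
    }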

  35. Why do you have to copy the update into shared memory? I think it should work like this:

    writer:
    locate your payload structure in shared memory.
    increment the first bookend.
    make your changes to the payload.
    increment the second bookend.

    reader:
    copy the entire payload wrapper out of shared memory into your memory.
    compare the bookends. If they’re different, repeat the copy until they are.

    The only way this could fail is if you’re writing about as fast as your reader can copy. Even then, you just have to keep two wrappers and alternate between them. The reader would copy both wrappers, compare the bookends, and use the later of whichever one has equal bookends.

  36. esr Says:
    > The reader would have to stall for more than 13 years

    Assuming nothing changes in your software requirements at some point in the future.

  37. I can’t cite a reference to a specific algorithm, but on Wikipedia’s page about Non-blocking algorithms, in the Obstruction-freedom section, there’s a passage which implies that this is a common algorithm:

    “Some obstruction-free algorithms use a pair of “consistency markers” in the data structure. Processes reading the data structure first read one consistency marker, then read the relevant data into an internal buffer, then read the other marker, and then compare the markers. The data is consistent if the two markers are identical. Markers may be non-identical when the read is interrupted by another process updating the data structure. In such a case, the process discards the data in the internal buffer and tries again.”

    Somewhat more concerning is the fact that this is the weakest category of non-blocking progress guarantee, implying that there’s a possibility that some processes would starve (very slow read compared to fast updates, maybe?). Might not be an issue for your project though.

    1. >Somewhat more concerning is the fact that this is the weakest category of non-blocking progress guarantee, implying that there’s a possibility that some processes would starve (very slow read compared to fast updates, maybe?). Might not be an issue for your project though.

      It isn’t. GPSD updates are small and infrequent compared to the expected frequency of client polling.

  38. Fun fact of the day – glibc memcpy copies backwards on some platforms (x64 Atom, among others). Assuming anything about memcpy directionality (by using it for overlapping ranges you “know” won’t be clobbered) results in amusing bugs, like Flash Player audio crackling and outright incorrect behaviour.

  39. “Stop messing about and get back to work.”

    That’s what I would say if you were one of my coders.

    Seriously, what kind of throughput have you got? Exactly why is a semaphore not good enough?

    Think about all the issues you have raised: You need memory barriers. They aren’t free – they have to ensure cache coherence between the cores so they call out to the MMU. You have to retry the read if it fails, so that means you have to either yield (single core), or spin (SMP). You also have to know which to do.

    Wouldn’t it be nice if there was some sort of library which encapsulated all of that, allowing MUTually EXclusive access to shared memory?

    Bottom line:
    1. Semaphores and Mutexes take as long as they do because of the underlying hardware restrictions.
    2. The underlying hardware takes as long as it takes because that is where the hardware designers stopped optimising.
    3. They stopped optimising there because *that is plenty fast for the problem they are solving*.

    Making yet another lock-free algorithm may be fun, but it won’t be a better solution to your problem for most reasonable values of “better” and “solution” – i.e. one where your time has value.

    Fun article: “it is hard to make CS be about anything except actual computers and how to actually work the fsckers, although the Good Lord knows enough people have tried”

    http://unqualified-reservations.blogspot.com/2007/07/my-navrozov-moments.html

    1. >“Stop messing about and get back to work.” […] Seriously, what kind of throughput have you got? Exactly why is a semaphore not good enough?

      It is entertaining that you posted this rant mere hours after I posted to the ntp-hackers list that:

      (a) Having thought hard about the tradeoffs and the timing pattern of the data, I am leaning towards discarding the seqlock variant I invented in favor of a simple semaphore-based implementation, and

      (b) I have been having an almost indecent amount of fun exploring different possible approaches to the problem.

      The ntp-hackers list is relevant because (by coincidence) they’re facing a very similar design problem, and it’s where I posted my original proposal for bookend consistency.

    1. >OK now I feel a little bit dumb. And a bit of a jerk. :-)

      Don’t. I thought your comment was both funny and appropriate.

  40. OK, now I’m getting really worried. First I got the bizarre RTCM2 protocol right first time, now my shared-memory export seems to be working first time out of the box. I thought it had a bug, then realized that it had unmasked a minor glitch in my test client.

    *shudder* The dread god Finagle and his mad prophet Murphy must surely be storing up a world of hurt for me.

  41. > Seriously, what kind of throughput have you got? Exactly why is a semaphore not good enough? … Wouldn’t it be nice if there was some sort of library which encapsulated all of that, allowing MUTually EXclusive access to shared memory?

    Totally agree. Too many programmers spend too much time trying to reinvent the wheel. POSIX threads already has this covered (pthread_mutex_*)

    1. >POSIX threads already has this covered (pthread_mutex_*)

      Er, no. We’re talking separate processes here, not threads that can be synchronized with a thread mutex.

  42. @Jessica
    To make this not a “bug”, all that is needed is for the reader to use its own internal method for determining the current time before the read, save this value somewhere in its own local memory space, do the read, and re-check the time to determine elapsed time for the read. If this exceeds some ridiculously-generous value (say 20s), consider the data invalid and retry. This implementation does require an internal clock that never rolls over, so make sure it’s Y10K-compliant, or a reader that sits for 10,000 years could have a problem.

    With a buffer expected to be updated every second or so, you could get by with even a byte for the bookend.
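
    For instance (a sketch; fetch() stands for whatever copy-and-validate read primitive is in use):

    #include <time.h>

    /* Reject any read that took absurdly long, so a reader stalled
       across a counter wraparound can't mistake stale data for fresh. */
    int guarded_fetch(const volatile struct wrapper_t *seg, struct payload_t *out)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);   /* monotonic: immune to clock steps */
        if (fetch(seg, out) != 0)
            return -1;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec > 20) ? -1 : 0;  /* generous 20s bound */
    }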

  43. The Monster Says:
    > To make this not a “bug”, all that is needed is …

    See I think this discussion is interesting for two reasons.

    1. My experience is “that’ll never happen” almost always happens. Users have a horrible habit of using your code in ways you don’t expect; coders who follow you have a horrible habit of changing the spec, and not remembering all your assumptions. (Often that coder is yourself.)

    2. Most important of all, this type of bug is absolute hell to fix.

    3. Personally, if I knew a bug of this type were in my code I could not sleep. I couldn’t allow it to stay in there, even if it was unlikely to ever arise. I don’t know if it is a pride thing or an OCD thing, but that is just my way of thinking. Ironic innit, since I write Windows programs.

  44. >>POSIX threads already has this covered (pthread_mutex_*)

    >Er, no. We’re talking separate processes here, not threads that can be synchronized with a thread mutex.

    Not so. Check out pthread_mutexattr_setpshared(3). Not supported on all Unix implementations, to be sure; like most of the pthread standard, it’s officially optional.

    Cheers
    — perry

  45. > Not so. Check out pthread_mutexattr_setpshared(3). Not supported on all Unix implementations, to be sure; like most of the pthread standard, it’s officially optional.

    No, it’s not. But it is supported on Linux and Solaris that I know of for sure. I think it might be supported on OS X, but I don’t really know for sure.

  46. > No, it’s not. But it is supported on Linux and Solaris that I know of for sure. I think it might be supported on OS X, but I don’t really know for sure.

    It wasn’t in earlier versions, but is currently (10.6). At least, so claims unistd.h there, and it seems to work.

    Cheers
    — perry

  47. >See I think this discussion is interesting for two reasons.
    :
    >Observant readers will notice that I can’t count :-)

    I do that all the time. I think of additional items while I am writing and enumerating them, then forget to go back and edit the number I wrote in the sentence that precedes the enumerated list.

  48. @Perry:

    It wasn’t in earlier versions, but is currently (10.6). At least, so claims unistd.h there, and it seems to work.

    Well, that’s true, but Solaris 10 has been in production now for more than 5 years.

    @JessicaBoxer

    > 1. My experience is “that’ll never happen” almost always happens. Users have a horrible habit of using your code in ways you don’t expect; coders who follow you have a horrible habit of changing the spec, and not remembering all your assumptions. (Often that coder is yourself.)

    That’s what good documentation is about. A nice comment in the source code at the top of the semaphore explaining the assumptions made at the time the code was written and the limitations implied by those assumptions will go a long way towards ameliorating the situation.

    And that’s a big difference I see between corporate code and open source. Corporate coders are often on a tight timeline, and the first thing sacrificed on the altar of the calendar is often good documentation. Open source methods, OTOH, dictate that if you don’t have time to document the API for your code, you don’t have time to write it, either, so please don’t bother.

  49. @Morgan Greywolf
    > Open source methods, OTOH, dictate that if you don’t have time to document the API for your code, you don’t have time to write it, either, so please don’t bother.

    Not with the intention of picking a fight- but why do you draw the correlation between Open Source and documentation? As opposed to say, good coding habits and documentation? Or perhaps good coders and documentation? My impression is that the hackers commenting here are, minimally, above average coders and thus have trained themselves to take the time to document non-trivial code sections. The “tight timeline” seems flimsy considering a few lines of explanatory comments around code can take, what, a few minutes?

    I can think of a couple lines of reasoning- guess I’m curious if they’d line up with yours.

    1. >Not with the intention of picking a fight- but why do you draw the correlation between Open Source and documentation?

      It’s there, but I think Morgan is oversimplifying it. I don’t think open-source hackers are specifically better at documentation than their closed-source counterparts; what they do have is higher standards of craftsmanship in general, which tends (though not reliably) to translate into better documentation. The tendency is strongest with respect to comments and programmer-to-programmer documentation, and weaker as we move away from that towards documentation for end-users. One place where open-source programmers show a particularly strong tradition of good documentation is in care about the transparency of wire protocols and file formats.

      The higher standards of craftsmanship are also not simple in origin. Part of it is that open-source programming tends to attract the most skilled and self-motivated programmers (those two qualities correlate closely). Part of it is cultural, deriving from the Unix engineering tradition that is the principal root stock of today’s open-source world. These two drivers, and the resulting high standards, reinforce each other in virtuous cycles; that is, for example, high standards increase the attraction for the most skilled programmers and the higher skill level promotes higher standards. The Unix tradition encourages high standards and is shaped by the high quality of the programmers it attracts.

      I think the lack of deadline pressure is only a minor direct influence on the quality of documentation, but does couple to it more strongly in an indirect way. That is, lack of deadline pressure reinforces the culture of higher standards by allowing room for it! That in turn tends (though, again, not reliably) to increase the quality of documentation.

      But. I think the increment in average quality of work as you move from closed to open source is greater for code than for documentation.

      My GPSD project makes an interesting example in this regard. The quality of our code is top-notch; there is objective evidence for this in the form of Coverity scans, tracker defect rates over time, etc. But even though this is so, the code quality doesn’t stand out from the open-source pack quite the way the quality of our documentation does. This is simply because, as a rule, programmers hate writing documentation and aren’t very good at it whether they’re working in open source or closed; higher standards of craftsmanship can only do so much to counter this.

      In the closed-source world, the fix often attempted for this is to throw technical writers at the problem. This tends to produce documentation that is voluminous and glossy but fundamentally stupid. By ‘stupid’ I mean specifically that (a) it does a poor job of conveying understanding rather than rote procedure, and (b) it is well-nigh useless outside a narrow range of common use cases or (goddess forbid) when anything goes wrong.

      No. To get really good documentation, you need one of the rare programmers who finds writing good documentation rewarding. On the GPSD project that happens to be me. There are damn few of us in either open-source or closed-source worlds.

  50. I think Jessica’s concern regarding wraparound has some merit. One never knows how code may be reused. If the bookends were stored in 8-bit words (I date myself) in order to save memory (saving memory? I date myself again), then the wrap-around time would fall to 256 clicks of the relevant clock. That’s not so long, and it’s more reasonable to fear its occurring.

    I’m also partial to the simple checksum if it generates an error message. Given that these transfers occur regularly, even a simple checksum, say one that only detected 50% of the relevant errors, would start reporting regularly if a problem appeared with any frequency. Once that happened the problem could be tracked down.

    Chuck

  51. I agree with Jessica’s concern. But her concern is valid for checksums also. The problem with a checksum is that you can get a valid checksum in the middle of a write. This is just an even more random failure. I tried to fix this in my previous effort by notting the checksum twice, but that won’t work either, since the notted checksum may be valid in the middle of a write. You need two copies of the checksum, and they must be different while the data is being written. The next problem is that the new checksum may be the same as the old.

    struct wrapper_t {
        int start_crc;
        struct payload_t payload;
        int end_crc;
    };

    Writer:
        Calculate new_crc
        If new_crc equals end_crc
            Increment end_crc
        Else
            Store new_crc in end_crc
        Copy the payload
        Store new_crc in start_crc
        Store new_crc in end_crc

    Reader:
        repeat
            Copy the entire structure
            Calculate the crc from the data
        until all three crc values match

    Yours,
    Tom

  52. This can also work if you are updating in place.

    Only the Writer changes.

    Writer:
    Increment end_crc
    Update payload in place
    Calculate new_crc
    Store new_crc in start_crc
    Store new_crc in end_crc

    Yours,
    Tom

  53. “*shudder* The dread god Finagle and his mad prophet Murphy must surely be storing up a world of hurt for me.”

    $EVENT comes in threes……???

  54. @esr:

    > To get really good documentation, you need one of the rare programmers who finds writing good
    > documentation rewarding. There are damn few of us in either open-source or closed-source worlds.

    A pithy way to sum it up.

    It occurs to me that there are actually at least two forms of documentation- one for the user and one for the developer. Ideally, the two should support each other such that the user documentation gives the developer the 1000 ft view of what the code does, while the commented code (useless to the user- see your example a few posts ago in “The Bug That Didn’t Bite”) would alert the developer to tricky sections, required coding techniques, etc. I wonder if those documenting disciplines overlap? In other words, if a developer writes good user docs, do they also comment their code well?

    Not meaning to beat this too much, but I was thinking that competition was the driver for good documentation (all forms). In Open Source, the best way to get someone to use your code is to document it better than the next guy because a user is more likely to go with the solution they can come up to speed and solve problems with quickly, even if the another solution is superior in some way. I’d say Window Managers are a decent example here.

    Whereas with closed-source developers, documentation typically takes a back seat to product price. Meaning better documentation than your competition won’t drive things nearly as much as a good sales force.

  55. Not meaning to beat this too much, but I was thinking that competition was the driver for good documentation (all forms).

    That’s true. And that’s exactly why I say there is a correlation between open source and good documentation. esr is right when he says that the correlation is somewhat weak. But only somewhat: competition amongst open source projects has brought increasingly better programmer-to-programmer (API) documentation. My prime example isn’t window managers, but GUI toolkits. Notice how, in Python-land, PyGTK is much more commonly used than wxPython, for example. Largely this is due to the former having much better documentation than the latter.

  56. Gerry: there are three kinds of documentation: Internal documentation for code maintainers, terse but complete reference documentation for users, and tutorial documentation for users. They each have different audiences and the same document cannot address more than one need.

  57. “The “tight timeline” seems flimsy considering a few lines of explanatory comments around code can take, what, a few minutes?”

    For me, writing really *good* comments can be just as hard as writing the code, or harder. It takes more than a few minutes. One thing that can help is if you write the comments *before* you write the code, but that’s no magic bullet either. (You can write what you expect to do, then code something else. This can help catch bugs sometimes, but sometimes the bug is in the comment.)

  58. The shared-memory transport, using this technique, is now fully integrated into GPSD and appears to be working just fine, concerns about wraparound after 13-year-long reader stalls notwithstanding. Actually, since it’s a 64-bit counter, the reader would have to stall for approximately 5.8 trillion years.

    I have been able to implement this interface so that the client API gets no new entry points. You just call gps_open() with the magic cookie GPSD_SHARED_MEMORY as a port argument and gps_read() works normally. Some of the other socket-interface entry points, like gps_stream() and gps_send(), don’t make sense with this transport and won’t work.
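
    Client usage therefore looks something like this (a sketch; error handling elided, and calling details may differ by libgps version):

    #include <gps.h>

    struct gps_data_t gpsdata;

    /* Magic cookie in the port slot selects the shared-memory transport. */
    if (gps_open("localhost", GPSD_SHARED_MEMORY, &gpsdata) == 0) {
        if (gps_read(&gpsdata) != -1) {
            /* ... use gpsdata exactly as with the socket transport ... */
        }
        (void)gps_close(&gpsdata);
    }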

    Total LLOC of the implementation is less than 60 lines, and allows shaving about 92K off a 361K statically-linked runtime (x86_64). A stripped-down NMEA-only version drops to 78K.

    Extensive live-testing with gpxlogger -e shm has yet to turn up a single instance of corruption due to spinlock failure.

  59. Tom:
    > The problem with a checksum is that you can get a valid checksum in the middle of a write. This is just an even more random failure.

    Yes and no… I mean, if it’s really a reasonable-size checksum (say, 64 bits), the odds of a false success are a lot less than those of some other failure cases, like random byte flips in the hardware (in which case checksums might actually save you!), or the Illuminati controlling a hypervisor your code is running under and altering the output, or the user having a stroke during program use and misreading the output as a result.

    That said, I fully support the sentiments that a) any serious programmer should do this sort of thing periodically, because it’s fun and educational, and b) you’d better have a really pressing reason to deploy this in real code rather than something standard that has a few zillion man-hours of testing behind it. If I have to deal with one more app that wrote its own transport protocol on top of UDP instead of just using TCP, and then got the flow control wrong, I will go beat the parties responsible senseless with a whole mackerel.

  60. > The shared-memory transport, using this technique, is now fully integrated into GPSD and appears to be working just fine, concerns about wraparound after 13-year-long reader stalls notwithstanding. Actually, since it’s a 64-bit counter, the reader would have to stall for approximately 5.8 trillion years.

    It’s fairly obvious now what Finagle has planned for you. A space vessel representing humanity’s first attempt at relativistic time travel will go off course and crash into a black hole when a communications glitch – only momentary from the vessel’s reference frame, yet corresponding to 5.8 trillion years for gpsd (Galactic Positioning System Daemon) – results in a counter wraparound and a corrupt read. An angry mob will rip your disembodied brain from its vat and throw it to the space amoebas.

    1. >An angry mob will rip your disembodied brain from its vat and throw it to the space amoebas.

      Daniel wins the thread.

  61. Morgan Greywolf Says:
    > That’s what good documentation is about. A nice comment in the
    > source code at the top of the semaphore…

    Do you really believe that? The only thing programmers do worse than writing documentation is reading documentation. The way to deal with a bug is fix it, not document it!

    > And that’s a big difference I see between corporate code and open source.

    I’m sorry, I just can’t let this slide. It is just plain wrong. Are there vast swaths of crappy, poorly documented corporate code? Well yeah, gazillions of lines of crap. However, if you think OSS is immune I would refer you to that vast wasteland called Source Forge.

    Fact is that lots of windows code has really good documentation. Starting with Microsoft itself which has lots of good, consistent accurate documentation on their APIs. Not perfect for sure, but good. One need only compare for example, Django documentation with ASP.NET documentation, and it is like night and day.

    To be clear, I am not saying all OSS is poorly documented. Far from it, many major projects have really good, well written documentation, and Amazon.com bristles with alternatives too. However, to suggest that OSS in general is some pristine example of well documented, well crafted code compared to the garbage in closed source, is just plain wrong. Most of both is garbage, and there are diamonds in both.

  62. So just an addendum as a follow up to Eric’s comment. I agree entirely that OSS is much better at documenting things like file format, net protocols and so forth. But that is not due to incompetence on the part of closed source designers, it is a deliberate business decision. It might be “evil”, but it is not the same subject as sloppy documentation.

  63. I’m sorry, Jessica. I could agree with you if we were talking about copy on write reference counts, or any kind of reference counts (yes, it may be unlikely that somebody would ever have so many references to a single object that they could overflow a 32-bit or 64-bit counter, but who knows what the future Google will do: http://googleresearch.blogspot.com/2006/06/extra-extra-read-all-about-it-nearly.html ). But I simply can’t agree with you on this.

    Any way to identify the version of something in a fixed number of bits — be it a checksum or a version counter, or something else — is subject to collisions due to the pigeonhole principle. Even combining multiple ways to identify versions runs a risk that you get a collision in all version checks simultaneously (say, your version counter wraps at the exact same time that your MD5 and ARC4 and AES and Blowfish hashes all have a collision). Sure, that collision is unlikely, but it’s theoretically possible.

    Any programmer who uses Eric’s code should recognize this and pick a large enough version counter for their needs. 32 bits would be “adequate for [GPSD’s] needs,” and 64 bits moreso. Likewise, to the extent that identifying a version can’t be done atomically, any decent programmer will be sure to use something like Hans Boehm’s atomic_ops library: http://www.hpl.hp.com/research/linux/atomic_ops/ . Yes, for software that I write to work, I have to assume that only competent or decent programmers use it. But, frankly, I don’t know any other option. I have to trust that future programmers will be somewhat competent or my job will become a farce of “Call of Cthulhu.” How could I guarantee that a programmer doesn’t change my version counter to, say, the output of the C rand() function, on the argument that he’s assigning it to a 64-bit quantity, so there shouldn’t ever be any problems?

  64. Fundamentally, if you ever have the problem Jessica Boxer has identified, then you have very deep issues. Your scheduler is broken, *and* you have starvation problems. Any scheduler that pauses one thread/process and allows other threads/processes to modify the version counters 2^32 times before letting the first thread/process to continue is broken by any definition of the word (and a scheduler that allows 2^64 updates is even more broken, if possible).

  65. esr wrote:
    No. To get really good documentation, you need one of the rare programmers who finds writing good documentation rewarding. On the GPSD project that happens to be me. There are damn few of us in either open-source or closed-source worlds.

    While I agree wholeheartedly with this, I have found that good testing constructs really do a lot to promote good documentation of APIs.

    Python’s doctest structure is one of my favorite ones out there. Basically, you can write simple unit tests as part of a function’s description, capturing both inputs and outputs. As long as you use descriptive names in your test input, it becomes self-documenting API code. And it’s easy to generate, too, as a simple run, print, copy, and paste is all that it takes 90% of the time.

  66. One need only compare for example, Django documentation with ASP.NET documentation, and it is like night and day.

    I haven’t dived into detail in the ASP.NET documentation, but a quick glance suggests that it’s not that different from the Django documentation. Most people having problems with the Django documentation, it seems (looking at the comments on the Django Book 2.0 Web Preview), are running Windows client OSes. Django mostly wasn’t developed on Windows, and Django’s developers run most of their production websites on Linux or *BSD; the documentation reflects that.

    Go get VirtualBox, download a Linux distro, install it in a VM, and do your Django development work there. You will be far less frustrated.

  67. Looking at the approach, you’re somewhat re-inventing “load linked, store conditional” (LLSC) lock-free semantics. It looks like you have to have atomic store instructions for incrementing the bookends, which is also the fundamental component to implementing LLSC. As a result, you might be just as well off with a single counter.

    Software transactional memory has all of the look-and-feel of LLSC over a region of memory, of which this approach has some resemblance.

  68. By the way, Eric, did you ever profile the naive pthreads-lock based implementation to see if yours actually works better? Is there a stress test (many readers?)? Does the “sfence” asm instruction exist & work everywhere you need it to?

  69. 60 seconds in a minute
    60 minutes in an hour
    24 hours in a day
    365 days in a year
    10 updates per second
    equals 315,360,000 counter increments per year (ignoring leap years, seconds, etc.)
    A 64-bit counter is roughly 1.845 X 10^19
    Dividing the two yields about 58.49 billion years.

    Still a while, but shy of 5.8 trillion, lol…
