The Rollover of Doom: a Trap for Good Programmers

GPS, the Global Positioning System, was designed in the 1970s under hardware-cost constraints that would seem ridiculous today. This makes interpreting the data it sends into a black art, and produces some really painful edge cases.

There’s one edge case in particular that I’ve come to think of as the Rollover of Doom. This morning I came up with an evil, clever hack for getting around it. I call it clever because you have to think your way out of a conceptual box to see it. As to why it’s evil…well, you’ll see. If you can figure it out.

The root cause of the Rollover of Doom is the peculiar time reference that GPS uses. Times are expressed as two numbers: a count of weeks since the start of 1980, and a count of seconds in the week. So far so good – except that, for hysterical raisins, the week counter is only 10 bits long. The first week rollover was in 1999; the second will be in 2019.

So, what happens on your GPS when you reach week counter zero? Why, the time it reports warps back to the date of the last rollover, currently 1999. Obviously, if you’re logging or computing anything time-dependent through a rollover and relying on GPS time, you’re screwed.
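
Concretely, here is a minimal sketch (Python, illustrative only; week zero actually began on 1980-01-06) of what the truncated counter does to a reported date:

    GPS_EPOCH = 315964800          # Unix time of 1980-01-06T00:00:00Z (GPS week 0)
    SECS_PER_WEEK = 7 * 24 * 3600

    def reported_unix_time(true_week, seconds_in_week):
        # Only the low 10 bits of the week count are broadcast, so any
        # date past week 1023 silently warps back by 1024 weeks.
        # (Leap seconds are ignored here; see below.)
        raw_week = true_week % 1024
        return GPS_EPOCH + raw_week * SECS_PER_WEEK + seconds_in_week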

Now, we do get one additional piece of time information: the current leap-second offset. The object of this exercise is to figure out what you can do with it.

For those of you unfamiliar with calendrical arcana, a leap-second is a shim inserted in calendars to cope with variability in the Earth’s rotation, which is slowing very gradually due to tidal braking.

If you start an atomic clock running – say, in a GPS satellite – and you want to compute Earth time such as UTC with it, and you want days and weeks and months in UTC to stay in sync with astronomical time (when the sun rises and sets), then you occasionally have to stuff a second in somewhere so the Earth’s gradually-slowing rotation has time to spin it to where you would have expected it to be if the spin were truly constant.

So, in order to allow UTC to be computed from the GPS-week/GPS-second pair, the satellite also broadcasts a cumulative leap-second offset. The offset was 0 when the system first went live; in January 2010 it is 15 seconds. It’s updated every 6 months based on spin measurements by the IERS.
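
A first-cut conversion to UTC, ignoring the rollover problem for a moment, might look like this (a sketch, reusing the constants from the previous snippet):

    def gps_to_utc_unix(week, seconds_in_week, leap_offset):
        # GPS time runs ahead of UTC by the cumulative leap-second
        # count, so subtract the broadcast offset to get Unix/UTC time.
        return GPS_EPOCH + week * SECS_PER_WEEK + seconds_in_week - leap_offset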

For purposes of this exercise, you get to assume that you have a table of leap seconds handy, in Unix time (seconds since midnight before 1 Jan 1970, UTC corrected). You do *not* get to assume that your table of leap seconds is current to date, only up to when you shipped your software.
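
For concreteness, such a table might be represented like this (two real entries shown, the leap seconds at the ends of 2005 and 2008; a shipped table would run from 1972 through the ship date):

    import calendar

    # Unix times (UTC) at which recent leap seconds took effect
    LEAP_TABLE = [
        calendar.timegm((2006, 1, 1, 0, 0, 0, 0, 0, 0)),
        calendar.timegm((2009, 1, 1, 0, 0, 0, 0, 0, 0)),
    ]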

For extra evilness, you also do not get to assume that the week rollover period is constant. The not-yet-deployed Block III satellites will have 13-bit week rollover counters, pushing the next rollover back to 2173CE.

For extra-special evilness, there are two different ways your GPS date could be clobbered after a rollover. If your receiver firmware was designed by an idiot, all GPS week/second pairs will be translated into an offset from the last rollover, and date reporting will go wonky precisely on the next rollover. If your designer is slightly more clever, GPS dates between the last rollover and the ship date of the receiver firmware will be mapped into offsets from the next rollover, and date reporting will stay sane for a full 19 years from that ship date.
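
The difference between the two schemes is easiest to see in code. A sketch (the function and parameter names are illustrative, not any real firmware's):

    def firmware_full_week(raw_week, ship_week, clever=True):
        # era_base is the most recent 1024-week rollover preceding the
        # firmware's ship date
        era_base = (ship_week // 1024) * 1024
        if clever and raw_week < ship_week % 1024:
            # raw weeks "before" the ship date are assumed to lie past
            # the next rollover, buying ~19 years of sane dates
            era_base += 1024
        # the naive scheme just offsets from the last rollover and goes
        # wonky at the next one
        return era_base + raw_week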

You are presented with a GPS time (a week-counter/seconds-in-week pair), and a leap-second offset. You also have your (incomplete) table of leap seconds. The GPS week counter may be invalid due to the Rollover of Doom. Specify an algorithm that detects rollover cases as often as possible, and explain which cases you cannot detect.

Hint: This problem is a Chinese finger-trap for careful and conscientious programmers. The better you are, the worse this problem is likely to hurt your brain. Embrace the suck.

109 comments

  1. I guess you mean “for hysterical raisins, the week counter is only 10 bits long”, right?

    ESR says: Er. 1024 weeks or 10 bits, right. Typo fixed.

  2. Is it… checking the system time?

    ESR says: No. Among other things, we want to be able to use GPS as a time source.

  3. Never mind I read it right but hadn’t finished my drink.

    10 bits would give you just shy of 20 years.

    So if you have a table that maps leap seconds to when they were inserted, and you have the total number of leap seconds you could apply some magic to kinda-sorta figure out the average time between leap-second inserts, then use that to give a ballpark of what decade you are in. Since you’ll know more or less the margin of error of that calculation and you have a 2 or three really accurate time sources telling you how long it’s been since the last roll over, well, I think you’re done.
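
    A minimal sketch of that estimate (the 0.5 s/year rate is a rough long-run average, which is the whole point):

    def estimate_year(leap_offset, rate_per_year=0.5):
        # Roughly one leap second every two years took the offset from
        # 0 in 1980 to 15 around 2010, so the offset alone gives a
        # ballpark year, good to within a couple of years either way.
        return 1980 + leap_offset / rate_per_year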

    But then what do I know, I’m just the guy that’s got to make the software work in the real world.

    1. >So if you have a table that maps leap seconds to when they were inserted, and you have the total number of leap seconds you could apply some magic to kinda-sorta figure out the average time between leap-second inserts, then use that to give a ballpark of what decade you are in.

      Yes, you can actually get an estimate of the year to within about two years’ accuracy from the leap second. The problem with this is that if the input time is within those two years of a rollover boundary your estimate could put you on the wrong side of it from the actual time, leading to a false positive (reporting rollover when there was none) or false negative (failing to report a rollover when there actually was one).

      >Since you’ll know more or less the margin of error of that calculation and you have a 2 or three really accurate time sources telling you how long it’s been since the last roll over, well, I think you’re done.

      What accurate time sources? All you have is the time you were passed. You should not consider system time reliable for this purpose.

  4. This puzzle doesn’t specify the size allowed for leap-second offset or seconds passed in the week. Since we’re allowed to create a table for this, I assume we can have any size of number in there.

    We just need to take the “correct” leap-second offset table from a UTC-based leap-second offset table, and then add (number of rollovers) * (seconds in a rollover) to the “correct” time. We can find the number of seconds in a rollover by looking at the bitsize of the satellite’s “number of weeks passed” value. That gives us the number of weeks. It is a trivial matter to convert that into seconds.

    I saw that answer before I was done reading it. Does that make me a bad programmer?

    1. >This puzzle doesn’t specify the size allowed for leap-second offset or seconds passed in the week. Since we’re allowed to create a table for this, I assume we can have any size of number in there.

      Correct. You may also assume that the leap-second counter never rolls over.

      >We just need to take the “correct” leap-second offset table from a UTC-based leap-second offset table, and then add (number of rollovers) * (seconds in a rollover) to the “correct” time. We can find the number of seconds in a rollover by looking at the bitsize of the satellite’s “number of weeks passed” value. That gives us the number of weeks. It is a trivial matter to convert that into seconds.

      There are multiple problems with this. One is that the satellite doesn’t tell us the bit size of its week counter – I didn’t specify that as an input to the algorithm, and I omitted it deliberately. In the future it might be possible to have a lookup table from satellite ID to week counter bit width, but we could never guarantee that the table was up to date as new satellites came online. Second…where are you getting “number of rollovers”? Remember, the input time is after it’s been possibly clobbered by rollover; an input time could correspond to a countable infinity of real times, one in each future rollover period.

  5. I hate it when raisins start getting hysterical. It’s upsetting to the bran flakes.

  6. When you say “I came up with an evil, clever hack for getting around it.” do you mean “I’ll post this as an exercise on my blog and see what the readers come up with”? Cuz that would be evil indeed :-)

    ESR says: Er, no. I think I know what the best that can be done is. But I will praise and credit anyone who invents a better algorithm.

  7. @Phil R:

    I guess you mean “for hysterical raisins, the week counter is only 10 bits long”, right?

    Right. I was confused by that, too. That would have to be it. That’d mean that the max count is 2^10 == 1024 weeks. That would put rollovers 1 and 2 in 1999 and 2019, as esr states.

    @esr

    You do *not* get to assume that your table of leap seconds is current to date, only up to when you shipped your software.

    Why not? Isn’t there a table of leap seconds that we could scrape from the Web somewhere?

    1. >Uhm, if the data format changes (as it does when the week counter size changes), won’t that break just about every GPS receiver out there?

      The extra three bits are in a part of the subframe unused in older revisions, and the Block III sats will ID themselves to the firmware so it can tell whether that extension field is valid. However, the ID probably won’t be where our algorithm can see it – and in any case, older firmware (as in, every GPS now in existence) won’t know about Block III. So, for our purposes, we can’t even assume the 13-bit extension will be available.

  8. I am not sure whether I understand the situation correctly.

    The GPS device will receive week and second counts from several satellites and the current leap second offset.

    You have the week and second counts and the leap second offset at the date of production. Leap seconds can only be added twice a year, but are added only every year or so in reality. That would be around 7 leap seconds per decade (really, between 2 and 10).

    From the leap second offset difference you can estimate the number of years passed, at bi-decade precision. You will record week counts for 10 and 13 bit counters. You can predict the week counts for both rollovers (1024 and 8192 weeks) and you estimate the rough decade since production from the leap second offset difference since production. Obviously, things roll over after 2^23 weeks, and your leap second offset will be unreliable long before that. Unless, of course, they actually stop with adding leap seconds as they have planned.

    Sounds like after observing the two sets of week counts you should be able to determine exactly where you are in what decade.

    Summary:
    Leap-second offset difference X/14 times two decades
    Calculate week counts for 10 bit (X = T mod 1024) and 13 bit (Y = T mod 8192) counters around so many decades and find which fit the observed two sets of week counts.

    Where did I go wrong?

    1. >Where did I go wrong?

      See my earlier reply on year estimates from leap seconds. Because that estimate has about a two-year uncertainty, you can easily end up in a situation where your guess as to the last rollover date is wrong (like, if the input time is within two years of a rollover date on either side). As I said to Richard Garfield, you also don’t get to know which week counts are 10-bit and which are 13 – I didn’t specify that as an input to the algorithm, and I didn’t for good reason.

  9. @Winter: “Leap seconds are unpredictable”

    We don’t need to predict them. There is a bulletin put out on the Web every six months available from here

    Of course, this means we must be able to parse these bulletins somehow… not looking so easy.

    1. >We don’t need to predict them. There is a bulletin put out on the Web every six months available from here

      Yes, but you don’t get to update your table. It ends when your software ships. This was a condition of the problem, remember?

      The reason for this condition is that our algorithm may be embedded in firmware in something like an Argus buoy that you drop in the ocean and then don’t touch for N years.

      >Of course, this means we must be able to parse these bulletins somehow… not looking so easy

      Parsing them is in fact easy. I already have code to parse the USNO history of leap seconds.

  10. A kludgey solution I realized might work, in most cases:

    1) After receiving a time update, check that !(weeks(NewTime) >= weeks(OldTime)) and that LeapSecondOffset(NewTime) > HIGHEST_LEAP_SECOND_IN_TABLE. If true, you are quite likely to have a rollover.

    Note, that this does not detect rollovers where no leap seconds have passed since the last rollover.

    1. >After receiving a time update check that !(weeks(NewTime) >= weeks(OldTime))

      I didn’t specify that you could keep history in the algorithm. If you can, this test is sufficient by itself; you don’t need the leap-second clause.

      The reason I didn’t specify that is this case: Your software comes up, and is trying to interpret times from a receiver that was just cold-booted. The receiver firmware was shipped in 2010 and assumes that the week baseline is 1999. It is now 2020. You lose.

  11. @Morgan Greywolf
    “Of course, this means we must be able to parse these bulletins somehow… not looking so easy.”

    If you are on the Internet, just download UTC date and time. Why bother with determining leap seconds?

    ESR says: Right. You are not allowed to assume a reliable time source, just the two inputs.

  12. In formula language:

    Observe two classes of week counts: A1 and A2
    Either A1 + i*1024 = A2 + j*8192 or A1 + j*8192 = A2 + i*1024
    (i, j integers)

    => +/- (A1 - A2) = (8*j - i) * 1024
    This gives you two (signed) differences between i and j (mod 2^23)

    i*1024 must be ~ leap second difference * 20/14 (?, or whatever is the average number of leap seconds)
    Just look for a matching set of values. Major roll over after 160 thousand years.

    Again, what do I do wrong?

  13. Can the software self-modify? (Or at least rewrite an input file?) Updating the table of leap seconds from the satellite seems the obvious solution. I can think of a variety of ways to do this using more or less space vs. calculation cleverness, but they all revolve around recording new leap seconds when the satellite says they occur.

    This would fail if there’s ever a full rollover period (aligned or not) without observing a leap second.

    1. >Can the software self-modify? (Or at least rewrite an input file?) Updating the table of leap seconds from the satellite seems the obvious solution.

      It would be…if the GPS time you got were reliable. It isn’t. That’s the exact problem; if there’s been a rollover, the input time your GPS gives you may be clobbered.

  14. I went to the linked Wikipedia page, and from there to a discussion of the history of time standards. Damn, if that’s not a case of “the nice thing about standards is that there are so many to choose from”, I don’t know what is…

    1. >I went to the linked Wikipedia page, and from there to a discussion of the history of time standards. Damn, if that’s not a case of “the nice thing about standards is that there are so many to choose from”, I don’t know what is…

      It’s not as bad as it looks. The only time standards that matter nowadays are UTC and TAI. GPS time is TAI with an offset and a counter-rollover problem.

  15. If you think that there is only a two year uncertainty in leap second estimates, then you’re basically done. Actually, if you think there is less than (rollover interval / 2 == 512 weeks) uncertainty in leap second estimates, then you’re done.

    You basically calculate the approximate year according to the leap seconds and then adjust to the correct week using the rollover counter.

    1. >You basically calculate the approximate year according to the leap seconds and then adjust to the correct week using the rollover counter.

      No. This won’t work.

      People keep coming up with “Use the leap-second to guess the year.” It’s no good. Even assuming you can get accuracy to within a year, think about what happens when there’s a rollover within that year. Which side of the rollover do you pick? And how – by flipping a coin?

  16. Speaking of epochs, I will correct you and state that you meant to say 2173AD, not 2173CE.

    If you’re going to use the Christian calendar, use it the way it was intended.

    Otherwise, leave our epoch alone and get your own.

    1. >Speaking of epochs, I will correct you and state that you meant to say 2173AD, not 2173CE.

      No, I intended to refer to Common Era. This is a correct usage widely employed by scholars who are not Christians. Jesus is not my lord, so “Anno Domini” would misrepresent my allegiances. “BCE” (Before Common Era) corresponds to BC.

  17. Assuming you can use the leap second data to generate an approximate week that is within +/- 511 weeks of the actual current week (which should be easy, according to your assertion of +/- 2 years), the following function will give you the correct week:

    def fine_adjust_week(approximate_week, week_lsb, lsb_width=10):
        '''
        Parameters:
        approximate_week -- derived from leap second counter
        week_lsb -- least significant bits from week counter
        lsb_width -- number of bits in week counter
        '''
        ring_size = 1 << (lsb_width-1)
        delta = (week_lsb - approximate_week + ring_size-1) % ring_size - ring_size + 1
        return approximate_week + delta

    def test():
        rollover = 1024
        worstcase = max((abs(fine_adjust_week(a,b) - a), a, b) for a in range(rollover*4) for b in range(rollover))
        print worstcase

    if __name__ == '__main__':
        test()

  18. Arrgh!

    Screwed that up, both the math in a stupid final edit, and the formatting.
    This may not yet be optimal, but I think it’s better.

    Will the preformatted tag work?

    def fine_adjust_week(approximate_week, week_lsb, lsb_width=10):
        '''
        Parameters:
        approximate_week -- derived from leap second counter
        week_lsb -- least significant bits from week counter
        lsb_width -- number of bits in week counter
        '''
        ring_size = 1 << lsb_width
        delta = (week_lsb - approximate_week + ring_size / 2 - 1) % ring_size - ring_size / 2 + 1
        return approximate_week + delta

    def test():
        rollover = 1024
        worstcase = max((abs(fine_adjust_week(a,b) - a), a, b) for a in range(rollover*4) for b in range(rollover))
        print worstcase

    if __name__ == '__main__':
        test()

  19. For the 10-vs-13-bit problem, just ignore the top 3 bits. Or, if you will sometimes get 13 bits, sometimes 10, you can use the top 3 bits, when you have them, to validate your estimates.

    (I’ll use “GPS eon” or “eon” to refer to any nearly-20-year period between GPS week counter rollovers.)

    The simplest solution that comes to mind is to store the (low-order 10 bits of the) last week counter in nonvolatile storage, and whenever you read a week counter with a smaller value, increment your GPS eon counter. (If you’re worried about, say, flash write cycles, then store bits 6-9 and only write when they change–gives you about 1200 years for 1000 write cycles.) Fails if your device is out of coverage for 20 years or your NV storage fails, but the first seems unlikely on a device anyone will still care about, and the second will give you bigger problems.
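
    A sketch of that scheme, with a dict standing in for whatever persistent store the device actually has:

    def track_eon(nv, raw_week):
        # nv is any dict-like store that survives power cycles.  If the
        # 10-bit week counter ever goes backwards, a rollover happened.
        if raw_week < nv.get('last_week', 0):
            nv['eon'] = nv.get('eon', 0) + 1
        nv['last_week'] = raw_week
        return nv.get('eon', 0) * 1024 + raw_week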

    If you need to deal with skipped eons or really, really long lifespan, or you have no nonvolatile storage, or you expect your device to time travel, you could use the leap second count to estimate year (be sure to factor in the gradual change of average frequency of leap seconds over time as the Earth’s spin slows and orbit changes). The eon border problem is a red herring–the estimated year doesn’t give you a specific GPS eon, it gives you an estimated time point, and you should take whatever eon value gives you a real time closest to that estimate. If the estimated time point is close to an eon boundary, a high week counter should result in a time in the earlier eon, and a low week counter gives you a time in the later eon.

    Also: keep in mind that if the “leap hour” people ever get their way, then all your leap-second-based estimates will go completely nuts.

    Finally, if you’re trying to figure out the time for logging purposes, then just log whatever info the satellite gives you in its most raw form and let the analysis software at the base figure out the real time post hoc from that.

  20. I don’t know much about how GPS devices are used. Can we not just require that the device will be turned on more frequently than once a decade? Just store the current date every measurement, and if the next reading is less than that, increment a rollover counter.

  21. Ok, I will stick my neck out and look like a fool immediately after I post this.

    Assumed: leap seconds is a constantly increasing value. Not an even rate, but it never goes negative. I can probably live with an occasional negative leap second anyway.

    0 leap seconds was 1980
    leap seconds = 15 in 2010
    Next roll over in 2019, 2038, 2058

    Leap seconds increase at approximately .48 per year. This is a very rough estimate; if I were to actually write software that was going to use this algorithm, I would spend a lot more time on getting that number.

    Since the rollover is ~20 years, we just need to determine the correct year to within 9 years or so. The existing table shows leap seconds being added at a sufficiently even rate to make that a plausible guess for the next 40 years or so. If you are going to postulate equipment that will last longer, without maintenance, than that, I may need to think of a different scheme.

    leapseconds = {15 : 2008,
                   16 : 2011,
                   17 : 2013,
                   18 : 2015}

    Extend this table as far as you think it might even be useful; I would probably calculate it on the fly from the known value of 15 in early 2011, with around .7 seconds of the .9 second trigger for the next leap second already used up. I expect we’ll have another leap second in either June or December 2011, from the trends.

    From there, calculate/look up the guesstimated year from the leap second number. Use that and the gps week number to tell which of the rollover blocks you are in.

    If the leap-second year is near a rollover block boundary: if gps weeks is low, you are past the boundary; if gps weeks is high, you are before the boundary.

    This algorithm will probably give fuzzy answers in 50 or 70 years, but I doubt that, as a practical engineering issue, it matters. There are very few machines running for that long that have not been maintained in the interim. The number may not be zero, but it will be easily enumerated.

    Jim Hurlburt
    Yakima, WA USA

  22. If you can predict a leap second within two years, isn’t that enough? You can use the leap second offset counter to estimate the interval of time you are in. Take the predicted leap second dates for the start and endpoints, and then lengthen the interval both ways until it’s just smaller than 1024 weeks. Then check which interpretation of the weeks counter falls within your time range.

  23. Realized I don’t need the +/- 1 at all — they just decide which way the value at the limit is processed, and if you are at the limit, you’re basically screwed anyway. So the algorithm is basically:

    > def fine_adjust_week(approximate_week, week_lsb, lsb_width=10):
    > ring_size = 1 < delta = (week_lsb – approximate_week + ring_size / 2) % ring_size – ring_size / 2
    > return approximate_week + delta

    Also realized my test was insufficient. The algorithm needs to fulfill two constraints:

    (1) the output should never be more than +/- 10 years (e.g. counter size / 2) away from the value computed by the leap second algorithm; and

    (2) the LSBs of the output should always match the incoming week counter (because that should be correct in the LSB).

    > def test_fine_adjust():
    > rollover = 1024
    > testcases = ((a, b, fine_adjust_week(a, b)) for a in range(rollover*4) for b in range(rollover))
    > bad = [(a, b, f) for (a, b, f) in testcases if abs(f - a) > rollover / 2 or f % rollover != b]
    > assert not bad

    Although I have to say, an asteroid or a sufficiently sized nuclear explosion could render invalid the assumption that the year can be approximately computed from the leap second value…

  24. Boy, this formatting stuff doesn’t work worth crap no matter what I do:

    def fine_adjust_week(approximate_week, week_lsb, lsb_width=10):
    … ring_size = 1 < rollover / 2 or f % rollover != b]
    … assert not bad

  25. I give up. I cut and pasted it into email and sent it to you, and the second one looked reasonable.

    But the crux of the algorithm is (modifying slightly to remove anything that could be misinterpreted as formatting characters, I hope):

    ring_size = 2 ** lsb_width
    delta = (week_lsb - approximate_week + ring_size / 2) % ring_size - ring_size / 2
    return approximate_week + delta

  26. Why is it relevant that “there are two different ways your GPS date could be clobbered”? I assume this is referring to the device’s own idea of the date, but you seem to be specifying that we should ignore this completely and calculate the date from first principles using only the week count, the seconds-in-week count, and the leap second offset, which are all passed through directly from the satellite data.

  27. … ah OK, you specify that the week count is NOT passed through directly, but may be mapped to the device’s idea of what the actual number of weeks since 1980 is.

  28. I’m stumped. I do hope you post the solution.

    Part of me thinks we just shouldn’t be using the week counter, and rely solely on the seconds counter.

  29. (Writing in realtime) I’ll bet there’s a WordPress plugin for handling code in comments….what d’ya know, there is. :)

    ESR says: Installed and activated. Here’s a sample:

    #include <stdio.h>
    
    int main(int argc, char *argv[])
    {
        fputs("Hello, World\n", stdout);
    }
    

    Note that bare angle brackets in the program text will be eaten. Use the HTML literals for them.

  30. I think that “International Earth Rotation Service” has to be one of the best names ever. Every time I hear it, it sounds like there is a Giant Lever somewhere in Geneva with the markings Faster and Slower…

  31. @esr:

    >You basically calculate the approximate year according to the leap seconds and then adjust to the correct week using the rollover counter.

    No. This won’t work.

    I admit to having been a bit ambiguous. When I said “adjust to the correct week”, I didn’t mean within the “approximate year”.

    Earlier, you said:

    Yes, you can actually get an estimate of the year to within about two years’ accuracy from the leap second.

    If you really meant that, then a 1024 week counter (which covers almost 20 years) can certainly be used to correct the approximate year derived from the leap seconds, as per the algorithm I subsequently gave. If you didn’t really mean that, then obviously it’s not the right way to do it…

  32. People keep coming up with “Use the leap-second to guess the year.” It’s no good. Even assuming you can get accuracy to within a year, think about what happens when there’s a rollover within that year. Which side of the rollover do you pick? And how – by flipping a coin?

    But you don’t need accuracy to within a year. You only need accuracy to within half of the size of your week counter, e.g. to +/- 512 weeks, or almost +/- 10 years. If you can accurately compute the year to within two years of the correct year based solely on leap seconds alone (as you appeared to assert), then you have ample safety margin with the 10 bit week counter.

    Note that, I don’t have enough knowledge to agree with or disagree with the assertion that leap seconds will get you there that accurately, but if the assertion is true, then certainly the right answer is to compute the year based on the leap seconds, and then adjust based on the 10 bit counter.

    1. >If you can accurately compute the year to within two years of the correct year based solely on leap seconds alone (as you appeared to assert), then you have ample safety margin with the 10 bit week counter.

      Not if you’re near enough one edge of the rollover period.

  33. Even assuming you can get accuracy to within a year, think about what happens when there’s a rollover within that year. Which side of the rollover do you pick? And how – by flipping a coin?

    Store the last known number of leap seconds somewhere and compare it.

  34. whoever heard of an electronic device that lasted 20 years without updates? screw the customer base!

  35. Actually, a slight restatement, just for the pedants:

    The correct method is to compute the approximate WEEK (not YEAR) starting from week 0 in 1980 or whenever it was by using the leap seconds, and then adjust the week based on the 10 bit week counter.

  36. @Morgan:

    esr stated no storage earlier. And if you had storage, you could always simply store the highest year ever seen or calculated and make sure you never rolled back below that. A very simple problem.

  37. As I wrote earlier, if you observe two satellites with different length counters, it is easy. Otherwise I can only see using the fact that after a rollover the leap second estimate will be 20 years off.

  38. Oh! I got it! /dev/rtc0 only gets timestamped at boot.

    You compare the leap seconds for the timestamp on /dev/rtc0 against the the leap seconds for the date you think it is… if it’s off by more than 10 or so it’s the other side of the rollover. :) No?

  39. >People keep coming up with “Use the leap-second to guess the year.” It’s no good. Even assuming you can get accuracy to within a year, think about what happens when there’s a rollover within that year. Which side of the rollover do you pick? And how – by flipping a coin?

    Considering the following scenarios. (I’m assuming we can get the number of weeks % 1024 from given data)

    It’s approx 1999.
    Week counter shows 1023 weeks. So it hasn’t rolled over yet.

    It’s approx 1999.
    Week counter shows 10 weeks. So it has rolled over.

  40. @esr:

    >If you can accurately compute the year to within two years of the correct year based solely on leap seconds alone (as you appeared to assert), then you have ample safety margin with the 10 bit week counter.

    Not if you’re near enough one edge of the rollover period.

    OK, then I’m still missing something.

    I assumed:

    a) an approximate week, calculated from leap seconds, that is accurate within +/- 511 weeks
    b) a week counter value that is the 10 LSBs of the actual week count. In other words, if the actual number of weeks since start of 1980 is 2049 weeks, the value of this counter would be 1. I guess from what you write, you might need to calculate this 10 bit value from the date given by the GPS device, but the date received from the GPS counter is sufficient to calculate this value.
    c) that if the week counter changes, the bit size will increase, and it will always be a power of 2.

    If these assumptions hold, then the algorithm I tried to post works fine. If not, then please elucidate about the finer point of the assumptions I got wrong. I will try again to post the code using the new plugin:

    def fine_adjust_week(approximate_week, week_lsb, lsb_width=10):
        '''
            Parameters:
               approximate week -- derived from leap second counter
               week_lsb -- least significant bits from week counter,
                                 derived from date received from GPS
               lsb_width -- number of bits in week counter
        '''
        ring_size = 2 ** lsb_width
        delta = (week_lsb - approximate_week + ring_size / 2) % ring_size - ring_size / 2
        return approximate_week + delta
    
    
    def test_fine_adjust():
        rollover = 1024
        for a in range(rollover * 4):
            for b in range(rollover):
                f = fine_adjust_week(a, b)
                assert abs(f - a) <= rollover / 2
                assert f % rollover == b
    
    if __name__ == '__main__':
        test_fine_adjust()
    
    
  41. Hmmm, doesn’t look like < is correct inside the <pre> tag. Other than that, it looks right.
  42. Not if you’re near enough one edge of the rollover period.

    I think you are confused. You know the following things about the true time T (W denotes your week counter reading):

    1) It is one of the solutions for T in W = T mod 2^10, and
    2) it is within a given range T_r with some fixed cardinality R.

    Provided that R is sufficiently small, there is exactly one solution for T for any permissible T_r, W. Since any larger counter can be reduced to an equivalent 10-bit counter, this solution suffices for all satellites.
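
    In code, the pigeonhole argument looks like this (a sketch; any window spanning fewer weeks than the counter period contains at most one candidate):

    def candidates(W, lo, hi, period=1024):
        # all week counts T with lo <= T < hi and T % period == W;
        # if hi - lo <= period there is at most one such T
        first = lo + (W - lo) % period
        return range(first, hi, period)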

  43. Ok, I just want to confirm I properly understand what’s available:

    – weeks since epoch, represented by some number of bits.
    – seconds of the week
    – leapsecond offset, which would be added in total to unix time, but which can’t be added directly to our seconds field, since each new leapsecond would have been added in the week that it officially ‘occurred’

    AND, presented epoch can be any of: GPS Epoch (1980 Jan 1 00:00:00), GPS Epoch + (2**$weeks_bits)*$week_rollover_count, or that + (( (2**$weeks_bits)*$week_rollover_count+1 ) - $ship_date_offset)

    My math is probably off there, but maybe my meaning comes across. Is the above all roughly correct? I think I have a solution of sorts if I’m not completely off base here.

  44. Of course, for there to be a solution, T_r must agree with some W. This is not a given from my previous description. If this isn’t the case then either your week counter or your estimation method is broken.

  45. Surely you could tell by which satellites you are in contact with which rollover had occurred as you know their orbits.

  46. I think Patrick Maupin has got it right, or at least as close as you would need for practical purposes. Here is my understanding of the solution anyway.

    1) Find a function which estimates the number of week rollovers based on leap seconds. Linear approximation will do, but there are probably higher order functions which may be more accurate.

    2) Use the leap second table to approximate the number of week rollovers since the last entry in the table.

    3) Now, look at the actual week counter value. If it is high, round your approximation down. If it is low, round it up.

    This will still eventually fail when your function from step 1 becomes inaccurate to more than 1 week rollover. However, step 3 solves the problem of what to do when your approximation places you NEAR a week rollover boundary. Obviously, if you’re a few weeks after the actual week rollover then the counter will be low – if you’re a few weeks before then the values will be close to the maximum.

  47. >If you can accurately compute the year to within two years of the correct year based solely on leap seconds alone (as you appeared to assert), then you have ample safety margin with the 10 bit week counter.

    Not if you’re near enough one edge of the rollover period.

    But you already know if you’re close to an edge what side of it you’re on.

    I see some others have tried to express the same idea; I’ll try as well.

    We are given L (leap-second total), W (week counter mod some unknown multiple of 1024) and S (seconds since beginning of this week)

    You have stipulated that there exists a function F(L) that gives the current date to an accuracy of two years. Let T(date) be a function that gives the total weeks passed between 1980/01/06T00:00:00Z and date (the non-modulated week number of that date)

    The number of weeks actually elapsed since 1980/01/06T00:00:00Z=
    int( T( F(L) ) /512) + W mod 512
    From that and S, you have your complete date/time

    I don’t see what’s so complicated here

  48. Or, does the GPS time data never, ever take into account the leapseconds other than to provide how many have occurred?

  49. If I’m understanding the problem correctly, than your known current week (according to the week counter) has to be one of a set of values about 19 years apart. (I’m assuming you can calculate the rollover dates.) If you can calculate a year to within 1-2 years with a leap second counter, can’t you use that to unambiguously find the time? (You objected earlier that you don’t know whether a rollover has happened or not if your calculated approximate year is right near a rollover. My proposed solution to this is to simply see if the week count is near 1024, hence before, or near 0, hence after.) This seems pretty basic, so I think I must be misunderstanding somehow.

  50. @esr:
    >>Since you’ll know more or less the margin of error of that calculation and you have a 2 or three really accurate time sources telling you how long it’s been since the last roll over, well, I think you’re done.

    >What accurate time sources? All you have is the time you were passed. You should not consider system time reliable for this purpose.

    The GPS timestamp stream. It’s accurate once you have a general idea of when you are.

    Ask your wife what time it is. I’m betting she doesn’t use military time, so you’re going to get something back like Nine Thirty. Not something like you’d get from time.time() right? And it’s not even ambiguous at noon–she says “It’s one ten” you know it’s afternoon. etc. etc.

    If you know it’s ~2019 and the week counter is >970 then it’s before the rollover. If it’s <30 then it’s after.

  51. Damn cold-addled brain. I meant to say you have to check to see if the 10th bit matches between the estimated time from the leap-second computation and the week number. If they match, the above formula’s fine. If they differ, you add or subtract from the 11th bit accordingly.

  52. >>If you can accurately compute the year to within two years of the correct year based solely on leap seconds alone (as you appeared to assert), then you have ample safety margin with the 10 bit week counter.

    >Not if you’re near enough one edge of the rollover period.

    If leap seconds year estimate falls in the middle half or so of a rollover block, use that block.
    If it falls near an edge between two rollover blocks,
    if gps weeks is high (top half of the counter)
    Use previous rollover block

    else (gps weeks is low)
    Use next rollover block.

    Unless I’m missing something major, that should do it. The only thing even slightly tricky will be to design a robust function to estimate the current date from the leap second counter; given a known starting point, 0.48 to 0.49 seconds of slip per year should remain close enough for longer than it matters for this discussion.

    As I mentioned in a previous post, I can’t imagine having to make gps receivers work for very many decades without maintenance; this algorithm should be fine for three or four anyway.

    Jim Hurlburt

  53. The true leap second counter is monotonically increasing (15,16,17,…) and always higher than the rollover count. Any time an implausible number is broadcast as the leap seconds then its real meaning is the rollovers (and assume for leap seconds whatever you had before).

  54. Perhaps my last answer didn’t have enough explanation, so here it is simply. You have a prospective time and location assuming no rollovers; to get that you need the intersection of all the sat signals and which sats they came from. Since the sats are not in geosynchronous orbit, that gives you the possible positions of the sats. Now, as the sats have known orbits, if they are not where they are supposed to be at the no-rollover time, you can try again with one rollover, then two rollovers, etc. until they match, and then you know how many rollovers there have been.

  55. @The Monster:

    Damn cold-addled brain. I meant to say you have to check to see if the 10th bit matches between the estimated time from the leap-second computation and the week number. If they match, the above formula’s fine. If they differ, you add or subtract from the 11th bit accordingly.

    @Jim Hurlburt:

    If leap seconds year estimate falls in the middle half or so of a rollover block, use that block.
    If it falls near an edge between two rollover blocks,
    if gps weeks is high (top half of the counter)
    Use previous rollover block

    else (gps weeks is low)
    Use next rollover block.

    That’s correct, except you don’t need to explicitly do a check to do this. The whole fallacy here is thinking of these things as discrete “blocks.” You have a counter that goes on and on. Zero is no different than any other number.

    For any given value ‘x’ that the counter could be, there are a theoretically infinite number of weeks that ‘x’ could represent, with each possible represented week being 1024 weeks later than the last one. An approximate week derived from the leap seconds logic will either be equal to one of these values that ‘x’ represents, or will be between two of them. If it is equidistant between two of them (if there are 512 weeks one way to a valid week for ‘x’ and 512 weeks the other way to a valid week for ‘x’), you can’t calculate the correct actual value. In point of fact, if the leap-second estimate is anywhere close to being dead center between two possible year values from the 10 bit week counter, you will probably guess wrong.

    As several people have pointed out, you need to figure out which one of the possible week values that ‘x’ represents is closest to the estimate. However, you don’t need to explicitly do a check to calculate this. The example I gave does this correctly and implicitly.

  56. @Patrick Maupin

    You do have to do a check in my example, because I’m just masking off bits from the estimated week number and filling them in with the known 10 least-significant bits of the week number calculated from the time returned by the GPS device. In doing so, it’s possible that we’ll get a value that was near the “rollover” point but that 10th bit is the opposite in our two inputs. If that happens, a spurious carry has to be deducted, or a missed carry has to be performed. I have my wits about me well enough to recognize this lack in my original simplistic algorithm, but am having difficulty expressing the correction logic to make mine work right.

    Your approach is cleaner; it does the actual calculation that we’re all mentally doing when we understand whether to add or subtract to get to the nearer value that matches what the GPS is telling us.

  57. Just ask the user. A once every 19 year prompt isn’t really all that inconvenient…

    ESR says: No. Think about embedded deployments in telemetry packages. No user on the scene.

  58. “whoever heard of an electronic device that lasted 20 years without updates? screw the customer base!”

    In the late ’90s, the TV network I worked for decided to use the GPS system as our primary timesource. I’m sorry that I can’t remember the name of the manufacturer of the receivers that we bought and installed, but I do remember that in 1999 they sent us some new ROMs to take care of this problem.

    The people who do this for a living are certainly aware of all of this.

  59. @The Monster:

    Your approach is cleaner; it does the actual calculation that we’re all mentally doing when we understand whether to add or subtract to get to the nearer value that matches what the GPS is telling us.

    Thanks, but I realized upon re-reading it that it probably isn’t all that clear to some people. So I made a version that does the same logic, but in a different fashion, and I renamed the variables to try to make it a bit clearer. Finally, just in case the earth starts speeding up, I assume that the actual date will never be before the date reported by the GPS unit.

    
    def correct_week(estimated_week, date_week, counter_width=10):
        '''
            Parameters:
               estimated_week -- estimated week offset from first week
                                 of 1980, derived from leap second counter
               date_week      -- week offset from first week of 1980
                                 according to the date from the GPS unit
               counter_width  -- number of valid least significant bits
                                 in date_week
            Returns:
               Assumes that correct week must be >= date_week, and returns
               the greater of the estimated week corrected by the
               counter_width LSBs of date_week, or date_week
        '''
        ring_size = 1 << counter_width
    
        # Correction factor is simply the difference between the date_week
        # and the estimated week, modulo the ring size
        correction = (date_week - estimated_week) & (ring_size - 1)
    
        # Extend the sign bit of the correction factor
        correction -= (correction << 1) & ring_size
    
        # Choose either the given date_week or the corrected estimated_week,
        # whichever is later.
        return max(date_week, estimated_week + correction)
    
    
    def test_fine_adjust():
        rollover = 1024
        for a in range(rollover, rollover * 5):
            for b in range(rollover):
                f = correct_week(a, b)
                assert abs(f - a) <= rollover / 2
                assert f % rollover == b
        for a in range(rollover*2):
            for b in range(a, a + rollover*2):
                f = correct_week(a, b)
                assert f == b
    
    if __name__ == '__main__':
        test_fine_adjust()
    
  60. @ls:

    In the late ’90s, the TV network I worked for decided to use the GPS system as our primary timesource.

    Most cell phone basestations use GPS as the timesource. Synchronization is extremely important, especially in crowded areas where nearby stations might also be operating on the same frequencies.

  61. @patrick Maupin

    >That’s correct, except you don’t need to explicitly do a check to do this. The whole fallacy here is thinking of these things as discrete “blocks.” You have a counter that goes on and on. Zero is no different than any other number.

    >For any given value ‘x’ that the counter could be, there are a theoretically infinite number of weeks that ‘x’ could represent, with each possible represented week being 1024 weeks later than the last one. An approximate week derived from the leap seconds logic will either be equal to one of these values that ‘x’ represents, or will be between two of them. If it is equidistant between two of them (if there are 512 weeks one way to a valid week for ‘x’ and 512 weeks the other way to a valid week for ‘x’), you can’t calculate the correct actual value. In point of fact, if the leap-second estimate is anywhere close to being dead center between two possible year values from the 10 bit week counter, you will probably guess wrong.

    I disagree somewhat. It’s easier to reason about the problem if you think of blocks of 1024 weeks since 1980, your goal being to determine which block of 1024 is the current block.

    I explicitly stated that I would check to see if was in the middle half of a block of 1024 weeks. Not going to be able to estimate any closer than one year anyway.

    If so, figure that this is the correct block. Also note that I’m allowed to know that when I ship my software, leap seconds in early 2011 is +15 and current error is about 700 ms. Even if I don’t know about the 700ms, the +15 is legal info. I can, within the problem setting given by esr, state that +15 gives me the year of Jan 2009 as a start. We are working with fuzzy data and ugly algorithms here. So, my start is 2009, not 1980. Last leap second was end of dec 2008. I am allowed to know this.

    gps week blocks start with Jan 1, 1980, and recur in increments of 1024 weeks.

    If that calculation decides that you are within the last or the first quarter of a block of 1024 and gps week is > 512, assume that you are within the lower possible block.
    If gps week < 512, assume you are within the higher possible block.

    It doesn't occur to me how this algorithm would give an ambiguous answer. so long as == +- 200 weeks, should work. After some number of decades without a reset on current leapseconds this algorithm will fail.

    My guess is that this algorithm will give correct answers for more decades than is useful. If they abandon the leap second paradigm, that blows this entire scheme. If you assume they do maintenance on the equipment every decade or two, it makes it unnecessary. A 13 bit number for weeks will probably put the problem off for longer than makes any difference from an engineering standpoint, especially given that the half life of a gps satellite isn’t a large number of decades. Few real world pieces of equipment will function for more than two decades without maintenance.

    This is an example of a real world solution, that while being quite imperfect, and ugly, is most probably buy Jerry Pournelle’s standards.

    Jim Hurlburt

  62. Sorry, WordPress ate my formatting. I used right and left angle brackets as delimiters for some of my text in the previous post. They vanished. Should have known better.

    One more try —

    @patrick Maupin

    >That’s correct, except you don’t need to explicitly do a check to do this. The whole fallacy here is thinking of these things as discrete “blocks.” You have a counter that goes on and on. Zero is no different than any other number.

    >For any given value ‘x’ that the counter could be, there are a theoretically infinite number of weeks that ‘x’ could represent, with each possible represented week being 1024 weeks later than the last one. An approximate week derived from the leap seconds logic will either be equal to one of these values that ‘x’ represents, or will be between two of them. If it is equidistant between two of them (if there are 512 weeks one way to a valid week for ‘x’ and 512 weeks the other way to a valid week for ‘x’), you can’t calculate the correct actual value. In point of fact, if the leap-second estimate is anywhere close to being dead center between two possible year values from the 10 bit week counter, you will probably guess wrong.

    I disagree somewhat. It’s easier to reason about the problem if you think of blocks of 1024 weeks since 1980, your goal being to determine which block of 1024 is the current block.

    I explicitly stated that I would check to see if (leap second year) was in the middle half of a block of 1024 weeks. Not going to be able to estimate any closer than one year anyway.

    If so, figure that this is the correct block. Also note that I’m allowed to know that when I ship my software, leap seconds in early 2011 is +15 and current error is about 700 ms. Even if I don’t know about the 700ms, the +15 is legal info. I can, within the problem setting given by esr, state that +15 gives me the year of Jan 2009 as a start. We are working with fuzzy data and ugly algorithms here. So, my start is 2009, not 1980. Last leap second was end of dec 2008. I am allowed to know this.

    gps week blocks start with Jan 1, 1980, and recur in increments of 1024 weeks.

    If that calculation decides that you are within the last or the first quarter of a block of 1024 and gps week is > 512, assume that you are within the lower possible block.
    If gps week < 512, assume you are within the higher possible block.

    It doesn't occur to me how this algorithm would give an ambiguous answer. so long as (leap seconds year) == (adjusted gps weeks) +- 200 weeks, should work. After some number of decades without a reset on current leap seconds this algorithm will fail.

    My guess is that this algorithm will give correct answers for more decades than is useful. If they abandon the leap second paradigm, that blows this entire scheme. If you assume they do maintenance on the equipment every decade or two, it makes it unnecessary. A 13 bit number for weeks will probably put the problem off for longer than makes any difference from an engineering standpoint, especially given that the half life of a gps satellite isn’t a large number of decades. Few real world pieces of equipment will function for more than two decades without maintenance.

    This is an example of a real world solution, that while being quite imperfect, and ugly, is most probably [good enough] by Jerry Pournelle’s standards.

    I can, if requested, put the above into a python function. Probably most here can read python about as easily as they can english text.

    Jim Hurlburt

  63. This is an example of a real world solution

    I beg to differ; rather, it is an example of a poorly conceived solution with a bad explanation. It is easy to formulate this problem mathematically, and it is easy to tell that Patrick Maupin’s solution conforms to the obvious formulation. As demonstrated therein, the mathematical description can be trivially translated into run-time assertions. None of these things are true of your solution, which is filled with superfluous and muddled reasoning. You use the word ‘algorithm’ several times in this post, but a stream-of-consciousness story from the perspective of the gps device does not constitute such a thing.

  64. After staring at it for a while, I think something like Patrick Maupin’s approach can be made to work if we make one change in the assumptions I specified: we ignore the possibility that the period might change to 13 bits.

    (Note that you can’t convert from UTC returned by the receiver directly to week/second, because the receiver’s conversion from week/second to UTC mixes in an inaccessible assumption about the start of the rollover period the firmware was designed to operate in. It also mixes in a leap-second offset.)

    I mixed things up by sometimes talking as though the input was a possibly clobbered UTC time, which is the function I’ve actually written. Without actually having the GPS week, my objection about the false hits when your year estimate is near a rollover boundary returns in full force – because you don’t know where the rollover period boundaries are relative to the receiver’s hidden epoch assumption. However…

    On reflection, I think it’s justified to assume that if you have leap second, you also have GPS week. Subframe data, if the GPS makes it accessible, reports both. Binary receiver protocols that give you one generally give you the other. So the way I stated it in the OP leads to a solution that is more useful than my existing code, if we assume that we know the receiver’s rollover period.

    The ugly solution I already have has the UTC input. What it does is check to see if the input UTC corresponds to a date with a known leap-year offset. If so, the table is used to consistency-check it. This check will correctly return a rollover indication for an infinite number of disconnected stretches of time after the end of our leap-second table, and an “I don’t know” for all other times.
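
    In rough outline, such a check might look like the sketch below (the shape and names are illustrative, not the actual code; the table is assumed to be a sorted list of (unix_time, cumulative_offset) pairs, complete only up to ship time):

    def rollover_check(utc, offset, table, ship_time):
        # Three-valued: "rolled over", "not rolled over", or None for
        # "I don't know".
        if utc >= ship_time:
            return None            # beyond what the table can vouch for
        expected = 0
        for when, cum in table:
            if when <= utc:
                expected = cum
        if offset > expected:
            # no such offset was ever in force at this date, so the
            # date must have been warped backwards by a rollover
            return "rolled over"
        if offset == expected and expected < table[-1][1]:
            # a warped-back date would carry at least the newest tabled
            # offset, so this one cannot have rolled over
            return "not rolled over"
        return None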

    This is why I said “Embrace the suck”; before you can see to write it, you have to get past requiring that the algorithm return “rolled over” or “not rolled over” for all inputs. If you look back at the OP, you’ll see that I carefully phrased the requirements to include this possibility. I said the problem was a finger-trap for good programmers because I knew for certain that good programmers would wrap themselves around an axle trying to find a two-valued predicate function.

    The advantage of my evil three-valued test is that it doesn’t depend on knowing the GPS’s rollover period.

  65. > The advantage of my evil three-valued test is that it doesn’t depend on knowing the GPS’s rollover period.

    Given the current week counter modulo some power of 2 and the leap-second offset we can calculate the number of rollovers. Just do mod 1024 on the week counter and then use Patrick’s algorithm. Hopefully no one is going to create a week counter that is not a whole number of bits.

  66. we ignore the possibility that the period might change to 13 bits.

    Yes, because even if it does change, it will still roll over on a 10-bit boundary. Since you said the leap-second value alone is good enough to get us the date to within a couple of years, we will know all but the last 8 bits or so, allowing for carry in the 9th-11th bits.

    What it does is check to see if the input UTC corresponds to a date with a known leap-year [sic] offset. If so, the table is used to consistency-check it.

    I think you meant leap-second offset there.

    Why even bother with this table of purely historical value? Any code you distribute today will receive a leap-second value of 15 or greater. Or do you anticipate a use case where the GPS falls through a wormhole and comes out in the year 1982?

  67. @esr: but your UTC check won’t work in the case where the device is turned off across two rollovers. Then you have to fall back to Patrick’s solution, which would work anyway.

  68. @esr:

    After staring at it for a while, I think something like Patrick Maupin’s approach can be made to work if we make one change in the assumptions I specified: we ignore the possibility that the period might change to 13 bits.

    If it remains a simple overflowing binary counter, it doesn’t matter if it is any number of bits >= 10 bits. The lower 10 bits will still overflow at exactly the same place.

    (Note that you can’t convert from UTC returned by the receiver directly to week/second, because the receiver’s conversion from week/second to UTC mixes in an inaccessible assumption about the start of the rollover period the firmware was designed to operate in. It also mixes in a leap-second offset.)

    But that starting assumption doesn’t really matter. (Actually, my last code post will directly use this starting assumption as a sanity check in the negative time direction, to make sure that the output is not less than the date reported by the GPS.)

    There is a fairly common mental stumbling block with thinking that a counter value of zero — right after it rolled over — is somehow special. If you are given some week, and told that the week was generated by adding some fixed offset to an ‘n’ bit counter, then what you know is that the week you are given might be correct, or it might be off by (x * 2**n), where x is an unknown number that is presumably non-negative (because you would not expect the GPS unit to erroneously report a date in the future). You do not need to know the offset value that is added to the counter, or the current value of the counter — those are actually irrelevant mathematically, even if conceptually important in starting to understand how the math works.

    If you have another piece of data (the leap second) which you are sure will give you the correct week within +/- 2**(n-1) weeks, then, as Roger points out, you have all the data necessary to figure out x in the equation (x * 2**n), and once you have x and the week reported by the GPS, you are done.
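
    In code, the arithmetic of the last two paragraphs is essentially one line (a Python sketch; it assumes the estimate really is within half a period of the truth):

    def recover_week(reported_week, estimated_week, n_bits=10):
        # reported_week may be short by x * 2**n_bits for some unknown
        # x >= 0; estimated_week (e.g. derived from the leap-second
        # offset) is assumed good to within +/- 2**(n_bits - 1) weeks.
        period = 2 ** n_bits
        x = max(int(round((estimated_week - reported_week) / float(period))), 0)
        return reported_week + x * period

    Clamping x at zero is the sanity check in the negative time direction mentioned above.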

  69. There is a fairly common mental stumbling block with thinking that a counter value of zero — right after it rolled over — is somehow special.

    It is special if, as is often the case, a value of 0 is defined to have some special significance. Similarly, -1 or even all negative numbers may be given the meaning “error”.

    In my work, I have to deal with a common problem of older Unix systems that use 32-bit signed integers to represent offsets for fseek. That sets the largest possible file size at 2^31 – 512 bytes (in some cases it’s – 1024 instead). It’s not that the counter “rolls over” to zero; it’s that it “rolls over” to the largest possible negative signed 32-bit integer.

    It’s never a “stumbling block” to think carefully about rollover events. You just have to think it through, walking through each of the edge cases: make sure that you are using < or <= as appropriate, and consider whether an increment is being done before or after the reference and how that impacts the calculation. With that in mind, you ought to walk -2 (in this case 1022), -1 (1023), 0, and 1 at a minimum.

  70. If you’re comfortable relying on some relatively steady rate of leap second additions to estimate the current time, then as others have pointed out, you probably don’t need to code the historical table of leap seconds into the software. Instead, simply use the historical table to generate a function (possibly linear, only requiring 2 constants) that will generate a coarse week value that you can use for your time estimate.
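
    For example (a sketch; the crude two-point fit and the (unix_time, offset) table layout are my assumptions):

    GPS_EPOCH = 315964800       # Unix time of the GPS epoch (6 Jan 1980)

    def leap_trend(leap_table):
        # Two constants: an anchor point and an average insertion rate.
        (t0, o0), (t1, o1) = leap_table[0], leap_table[-1]
        return t0, o0, (o1 - o0) / float(t1 - t0)

    def coarse_week(leap_offset, t0, o0, rate):
        # Invert the trend line: observed offset -> rough Unix time,
        # then convert that to a rough GPS week number.
        t = t0 + (leap_offset - o0) / rate
        return int((t - GPS_EPOCH) // (7 * 86400))

    Whether the trend holds well enough is, of course, the crux of the objections that follow.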

    Personally, I’m a bit of a wimp when it comes to prognostication, so if GPSD were my project I would worry if I released software that relied on leap seconds. What about that big asteroid? Or maybe we don’t really understand chaotic forces deep inside the earth well enough. I know I’ve played with unstable wobbly toys that can speed up and slow down.

    But one prediction I feel relatively comfortable making, Stephen Hawking aside, is that time will move forward.

    So, I would probably ignore all the fancy leap second stuff, and simply code the current week into each GPSD release as a lower bound on time — you know that the current week has to be this week or after. Then when you get the week from the satellite, if it’s less than the week coded into the software, you can adjust it. This way, every release of GPSD is guaranteed to work for almost 20 years with current receivers.

    while satellite_week < release_week: satellite_week += 1024

    And if a GPSD release fails after 19 years, a simple text edit will make it work for the next 19 years, without having to change anything else except the release_week constant.
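
    Fleshed out into something runnable (a Python sketch; RELEASE_WEEK is a hypothetical stand-in for whatever constant the release process stamps in):

    GPS_EPOCH = 315964800       # Unix time of the GPS epoch (6 Jan 1980)
    RELEASE_WEEK = 1565         # hypothetical: a week in early 2010

    def unwarp(satellite_week, release_week=RELEASE_WEEK, period=1024):
        # Add whole rollover periods until the reported week is no
        # earlier than the week this software shipped.
        while satellite_week < release_week:
            satellite_week += period
        return satellite_week

    def gps_to_unix(satellite_week, week_seconds):
        # Combine the corrected week and the seconds-in-week count.
        return GPS_EPOCH + unwarp(satellite_week) * 7 * 86400 + week_seconds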

    You could also add an optional module that relies on local storage to update a date in a file, and then use the file date for the minimum date if available, or the release date if the file date is not available. Since you know the rollover period is 19 years, you could code it so that you only updated if the current week is, for example, six months after the week in the file. This way, you wouldn’t generate too many writes, and the base time would be auto-correcting.
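
    That optional module might look like this (a sketch; the file location and the six-month threshold are assumptions):

    FLOOR_FILE = "/var/lib/gpsd/week_floor"     # hypothetical location
    SIX_MONTHS = 26                             # in weeks

    def stored_floor(release_week):
        # Fall back to the compiled-in release week if the file is
        # missing, unreadable, or garbage.
        try:
            with open(FLOOR_FILE) as f:
                return max(int(f.read()), release_week)
        except (OSError, ValueError):
            return release_week

    def maybe_advance_floor(current_week, release_week):
        # Rewrite the floor only once we are at least six months past
        # it, so we don't wear out the storage with writes.
        if current_week >= stored_floor(release_week) + SIX_MONTHS:
            try:
                with open(FLOOR_FILE, "w") as f:
                    f.write(str(current_week))
            except OSError:
                pass                            # storage is optional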

  71. @The Monster:

    It’s never a “stumbling block” to think carefully about rollover events.

    Sure, the stumbling block is in saying “oh, I hit zero and I’ve had issues with that before and I know that’s a special problem” without, as you point out, thinking carefully about whether it is, in fact, special and whether it is a problem or not. Speaking of which, has anybody here ever implemented a cascaded integrator comb filter? You don’t need as many bits for that as you think you might :-)

  72. @esr:

    Thanks, but as you know, the coding is not the real problem. In fact, for the “make sure it works for at least 19 years after release” case, the code really is as simple as “while satellite_week < release_week: satellite_week += 1024”, once you have library functions to split into week/seconds, and then combine back again.

    No, the real problem is testing, interface specification, etc. I don’t even own a GPS unit, so while overflowing binary counters is well within my problem domain as a sometimes hardware designer, most of the rest of the software lies outside my current knowledge and well outside my current interest.

  73. Hmmm. Poking at the data on leap-seconds in Wikipedia . . .

    1) The leap-second count is not guaranteed to increase. While there have never actually been negative leap seconds, they are theoretically possible and permissible, and accordingly must be provided for. It’s unlikely to come up and bite you on the ass, but there is no actual guarantee the offset couldn’t decline.

    2) The leap second field in the GPS system is 8 bits. As it is valid in UTC for a leap second to be inserted or removed as often as once every month, you can, in fact, wrap the leap second counter all the way around and back to any given value in as little as 21 years and four months. This is unlikely, but possible.

    3) The rate of leap second accumulation is sufficiently erratic in the historical record (compare 11 accumulated 1972-1982 to 2 accumulated 1999-2009) that you couldn’t conclude from a value of +45 that there have been two 1024-week rollovers since the code was shipped; it could just as easily be three or four (maybe more), or just one. So even making the reasonably safe assumption there will be no negative seconds or positive ones frequent enough to cause a rollover, you can’t extract more date information from the leap-second field and a table of leap seconds than you can from a single stored data value marking the date of shipment as the lowest returnable date value.

    So, given the leap-second table, I’d only use it as a record of when the software shipped, in order to establish my minimum date. The rest of the data is extraneous. If the week count seemed to return a date prior to shipping, I’d add 1024 weeks to get the correct date, blindly assuming there hasn’t been a second week-number rollover. This works reliably out to 1024 weeks from when I ship, which is as long as any more complicated use of leap seconds could reliably return, and there’s further no way the algorithm can be confused by negative leap seconds or leap-second wrapping.

    . . . And looking up, I see I’ve duplicated Mr. Maupin’s solution.

  74. ESR meets his match… is he humbled?

    DateTime CalculateFromGPST(int week, int weekSeconds, int leapSeconds)
    {
        // Guess the neighborhood of the true date by assuming one leap
        // second is inserted roughly every 18 months.
        DateTime neighborhoodDate = new DateTime(1980, 1, 1) + TimeSpan.FromDays(30 * 18 * leapSeconds);
        DateTime windowStart = new DateTime(1980, 1, 1);
        TimeSpan previousCandidateOffset = TimeSpan.MaxValue;
        DateTime previousCandidateTime = new DateTime(1, 1, 1);
        while (true)
        {
            // Interpret the week/second pair within the current
            // 1024-week window.
            DateTime candidateTime = windowStart + TimeSpan.FromDays(week * 7) +
                    TimeSpan.FromSeconds(weekSeconds);
            TimeSpan candidateOffset = (candidateTime - neighborhoodDate).Duration();
            // Once the distance to the neighborhood guess starts growing,
            // the previous candidate was the closest one.
            if (candidateOffset > previousCandidateOffset)
            {
                return previousCandidateTime + TimeSpan.FromSeconds(leapSeconds);
            }
            previousCandidateTime = candidateTime;
            previousCandidateOffset = candidateOffset;
            windowStart = windowStart + TimeSpan.FromDays(1024 * 7);
        }
    }
    
  75. WBTE, it’s been a long time since I’ve actually dealt with code (and that was at the poking-around amateur level), but it looks to me as if you’re assuming a positive leap second every 18 months. That’s not remotely reliable. That’s an average rate of accumulation in the historic tables, but it’s perfectly possible to go decades without any (net) leap seconds, or to accumulate them (say) every six months, in which case your algorithm fails by returning a neighborhoodDate in the wrong 1024-week epoch.

  76. 1) The leap-second count is not guaranteed to increase. While there have never actually been negative leap seconds, they are theoretically both possible and permissible, and so accordingly must be provided for. While it’s unlikely to come up and bite you on the ass, there is no actual guarantee the offset couldn’t decline.

    Yes, it is possible, but since the rules for promulgating leap seconds require an accumulated deviation of over 0.6 second, that deviation would have to be reversed for a negative leap second to occur. Since the long-term trend is for the day to get longer, that is particularly unlikely.

    But even if one or two such negative leap seconds could occur, it wouldn’t be enough to push your estimated date outside the 1024-week window. The long-term trend is accurate enough for the foreseeable life of any hardware configuration you’d set up that lacks the ability to update the leap-second info.

    And that raises an interesting question… We presume that we have no access to the Internet (or we’d run ntpd and be done with it) to update our leap-second tables… but do we have local storage to record the leap-second changes as we see them? That would allow several checks: Since a leap second is only allowed to happen at the end of a GMT month, if we have two consecutive readings with different leap-second values, and the end of a month falls between those two readings, we have excellent confirmation. OTOH, if it does not, that suggests a problem with the calculation. We can accumulate a table of these updates, extending our historical knowledge of when leap seconds have been previously applied, and can recalculate various trendlines based on this information. And, of course, we know that in the unlikely event of a negative leap second, that we do NOT need to go back to a past date.
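
    The month-boundary cross-check is cheap to state (a Python sketch; both observations are assumed to be Unix times):

    import time

    def spans_month_end(t0, t1):
        # True if at least one end-of-UTC-month instant lies between the
        # two observation times.
        g0, g1 = time.gmtime(t0), time.gmtime(t1)
        return (g0.tm_year, g0.tm_mon) != (g1.tm_year, g1.tm_mon)

    def offset_change_plausible(t_prev, off_prev, t_now, off_now):
        # A changed leap-second offset with no month boundary in between
        # points to a problem in our reconstruction, not a real leap second.
        return off_now == off_prev or spans_month_end(t_prev, t_now)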

    There is then no reason to believe that long-term operation of this algorithm is subject to failure, as the rotation period of the earth is sufficiently stable that the estimated date will never be more than 512 weeks off of the actual date.

  77. And that raises an interesting question… We presume that we have no access to the Internet (or we’d run ntpd and be done with it) to update our leap-second tables… but do we have local storage to record the leap-second changes as we see them?

    esr originally said no local storage. And if you have local storage, you don’t need leap second info, as long as the unit is switched on at least once every 19 years…

  78. the rotation period of the earth is sufficiently stable that the estimated date will never be more than 512 weeks off of the actual date.

    That’s not the conclusion I get from looking at the historical table. If you get (for example) a reported leap-second offset of +35 as your first receipt from the sat and a week counter value of +512 from the value as of the ship date, has it been approximately fifty years of two-seconds-per-ten-years accumulation (the leap second accumulation rate seen historically in the 1999-2008 time period) since the code shipped or approximately ten years of one leap second per year (as seen historically in the 1972-1981 time period) since the code shipped? Or 30 years of a median rate? You can guess, but I don’t see how you can consider the guess reliable. The 10-bit week counter’s going to report the same (reflecting ten years of increasing weeks, thirty of increasing weeks with a single rollover, or fifty years with two counter rollovers), and you don’t know the actual rate of leap second accumulation since you shipped.

    If you do have a continuous record and local storage, you can simply use the local storage to record each week counter rollover, and you never, ever have to derive the rollover count from leap seconds at all.

  79. esr originally said no local storage

    Strangely, when I search this page for the words “local” and “storage”, I find nothing in the original article nor any comment from ESR. I see you asserting that he said this. The only thing even close to it is this:

    For purposes of this exercise, you get to assume that you have a table of leap seconds handy, in Unix time (seconds since midnight before 1 Jan 1970, UTC corrected). You do *not* get to assume that your table of leap seconds is current to date, only up to when you shipped your software.

    That does not preclude the attempt to maintain the currency of the table, which would improve accuracy even if it is incomplete.

  80. @The Monster:

    esr originally said no local storage

    Strangely, when I search this page for the word “local”, and “storage” I find nothing in the original article nor any comment from ESR. I see you asserting that he said this.

    In his initial post, esr said:

    You are presented with a GPS time (a week-counter/seconds-in-week pair), and a leap-second offset. You also have your (incomplete) table of leap seconds. The GPS week counter may invalid due to the Rollover of Doom. Specify an algorithm that detects rollover cases as often as possible, and explain which cases you cannot detect.

    Which I personally (and several others as well, obviously) took as a complete description of the inputs available. This only makes sense, really — if local storage is available, then the problem gets much easier.

    But in any case, my assertion was based on esr’s own words — when someone posted a solution that relied on local storage, he replied “I didn’t specify that you could keep history in the algorithm.”

  81. Have you read ICD-GPS-200C with IRNs 12345? Specifically paragraph 20.3.3.5.1.13? It defines a new 16-bit integer which will represent “Calendar Year”.

    The only problem is that paragraph 20.3.3.5.1.13 does not specify whether the 16-bit integer representing “Calendar Year” is signed or unsigned. Sometime in the next 14000 years they’ll have to re-issue the ICD to resolve that.

  82. Don’t you mean that sometime in the next 30000 years they will have to solve that?

    The answer should be obvious at that time — we need negative years iff they’ve invented time travel :-)

  83. And so, on 13 Feb 2016, I met the Rollover of Doom.
    15 x GeoExplorer 3 GPS’s with a 1.2v firmware date of June 14 2001 all crashed and left me with a Fatal Exception report and a bunch of error codes.

    Any suggestions or solutions will be welcomed

    1. >Any suggestions or solutions will be welcomed

      Sorry, you’re screwed. Unless you have the firmware source and the ability to modify it, those devices are now paperweights.
