The bug that didn’t bite in the night-time: an anti-disaster story

A very curious thing happened with GPSD this week. In fact it’s so odd I’m still having trouble believing it. In software engineering we often have trouble getting seemingly simple things to work reliably. How does one react when an incredibly complex, fragile piece of bit-twiddling code works – perfectly – after six years without real-world testing, during which the surrounding architecture underwent such massive changes that any rational person would have expected the feature to bit-rot into garbage?

No, really, this one is weird. Let me unfold to you the strange tale of The RTCM2 Analyzer That Shouldn’t Have Worked. Really. At All.

A principal source of errors in GPS position fixes is variable time delays in satellite signals induced by variations in ionospheric conditions. A way to compensate for this involves what’s called differential correction – receiving signals from a ground station at a fixed location and comparing their satellite pseudoranges with yours. Because the station’s location is known with high precision, your receiver can use its information to compute the local ionospheric delay (as long as you’re within a couple hundred kilometers of the station) and refine its fixes.

There are a couple of different ways you can get such corrections. The old-school method is to get a specialized radio that listens for broadcasts from differential-correction stations and reports them to you as a digital bitstream over a serial (RS232 or USB) link. The new-school way is to get that bitstream from servers that make it available over the Internet.

If all you want is to correct your GPS, all you actually need to do is shove that bitstream at the GPS’s serial or USB port as it comes in (though some GPSes want you to push the data into a special-purpose auxiliary serial port). Many GPSes simply interpret it on the fly. If yours claims to have DGPS support, it will have this capability. You don’t actually have to know what’s in the stream of corrections.
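
If you're curious what that plumbing amounts to, here is a trivial pass-through sketch in C. It's illustrative only: the device path is an assumption, and a real setup would want termios calls to set the baud rate.

    /* Minimal pass-through sketch: correction stream arrives on stdin
     * (say, from an Ntrip client), GPS sits on a serial device.  The
     * device path is hypothetical. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int gps = open("/dev/ttyUSB0", O_WRONLY | O_NOCTTY);
        if (gps < 0) {
            perror("open");
            return 1;
        }
        char buf[512];
        ssize_t n;
        while ((n = read(STDIN_FILENO, buf, sizeof(buf))) > 0)
            (void)write(gps, buf, (size_t)n);  /* opaque bytes; no parsing */
        close(gps);
        return 0;
    }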

Actually understanding the corrections as they come in – making something from them that a human can see sense in – is much more difficult. But there are reasons you might want this. If you’re doing atmospheric physics, the network of differential correction stations is, in effect, a huge distributed ionosphere observatory. If you’re concerned about really high-precision geolocation, you may need to monitor the health of the differential-correction network in order to know when your confidence intervals are nice and tight.

Or you could be like me, annoyed beyond reasonableness at the idea of data flowing through your software that is opaque to you. In mid-2005 I decided to try to actually analyze the correction data flowing through GPSD.

This was no small undertaking. The protocols used for differential corrections are nasty. The most widely used of them is called RTCM2. It actually has two layers, each deeply hideous in its own special way.

The lower layer is the downlink protocol used by GPS satellites. It’s a bitstream, not a bytestream, and that raw bitstream (segmented into 8-bit characters along boundaries that don’t mean anything) is what you get from an RTCM2 source over a serial or socket link. This lower layer doesn’t formally have a name, but because it’s specified in a famously headache-inducing document called ISGPS200, those unlucky few of us who have had to deal with it tend to think of it as the ISGPS layer.

You start with your ISGPS bitstream. Your first challenge is to turn it into data frames. This involves sliding it bit-by-bit into a 36-bit buffer and looking for two fixed header bytes checked by six bits of parity information. When you sync successfully, you get a thirty-bit data word.
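
In code, the sync hunt looks something like this. This is a toy sketch, nothing like the real isgps.c: the preamble constant is illustrative, and the parity stub stands in for the genuinely nasty ISGPS parity algorithm.

    #include <stdbool.h>
    #include <stdint.h>

    #define PREAMBLE 0x66u   /* illustrative 8-bit header pattern */

    static bool parity_ok(uint32_t word)
    {
        /* stand-in: the real check folds the data bits down to six
         * parity bits per the ISGPS200 tables */
        (void)word;
        return true;
    }

    /* Feed one bit at a time; returns true when a plausible 30-bit
     * word has lined up in the window. */
    bool isgps_shift(uint32_t *window, unsigned bit, uint32_t *word_out)
    {
        *window = ((*window << 1) | (bit & 1)) & 0x3fffffffu; /* keep 30 bits */
        if (((*window >> 22) & 0xffu) == PREAMBLE && parity_ok(*window)) {
            *word_out = *window;
            return true;
        }
        return false;
    }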

I’m making this sound simpler than it is. I started with an RTCM2 decoder in C written by two guys named Wolfgang Rupprecht and John Sager around 1999 (they’ve both since disappeared off the net), and my first job was to pry the RTCM2 layer loose from the ISGPS bit-sync code. I did it – you can look at the results in isgps.c in the GPSD codebase – but only by mechanical refactoring steps. The code is full of magic numbers, shifts, and inversions; I didn’t understand it then and don’t now. Oh, and it broke GCC’s optimizer. Twice.

And that upper layer? This is the RTCM2-specific part. Once you have your sequence of 30-bit data frames, you get to slice packed bitfields out of them according to one of a bunch of different message formats, identified by a type field in the first or header word. Then you apply scaling divisors because many of the data fields are float quantities.

This upper layer is simple in principle, but getting any bitfield length wrong reduces everything after it to garbage. Debugging this sort of thing is tedious and painful; is the fault in the lower layer, the upper layer, or yet another damned optimizer bug?

The simplest(!) way to tackle it turned out to be to define a big ugly C union full of bitfields, cast the 30-bit-buffer pointer to a pointer to that union, and read the fields. This would fail on architectures that force padding between bitfields, but since modern CPUs all have barrel shifters I was reasonably sure I could say #pragma pack(1) and have the right thing happen.

(I lied. There are actually two big ugly unions with bitfields – one for big-endian machines, one for little-endian. Look in driver_rtcm2.c in the GPSD codebase. If you’re not frightened of that code, you are perhaps blessed in your ignorance.)
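
To give you the flavor, here is a much-shrunken sketch of the little-endian variant. The field widths follow the RTCM2 header layout, but treat this as flavor, not a reference; driver_rtcm2.c is the real horror.

    #pragma pack(1)
    union rtcm2_sketch {
        unsigned int words[2];     /* 30-bit words from the ISGPS layer,
                                      right-justified in 32 bits */
        struct {
            /* first word, declared LSB-first for a little-endian machine */
            unsigned parity_1   : 6;
            unsigned station_id : 10;
            unsigned msgtype    : 6;
            unsigned preamble   : 8;
            unsigned pad_1      : 2;
            /* second (header continuation) word */
            unsigned parity_2   : 6;
            unsigned stathlth   : 3;
            unsigned frmlen     : 5;
            unsigned sqnum      : 3;
            unsigned zcnt       : 13;
            unsigned pad_2      : 2;
        } hdr;
    };
    #pragma pack()

    #define ZCOUNT_SCALE 0.6   /* zcount ticks are 0.6 seconds */
    /* usage: double zcount = u.hdr.zcnt * ZCOUNT_SCALE; */

The big-endian variant declares the same fields in the opposite order, which is exactly the sort of thing that makes this code terrifying.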

The only reason I had any confidence in the results was that Sager’s code included a short segment of RTCM2 and an ASCII dump of same. At every refactoring step I checked that my code still gave the same output from that input. But I wasn’t really confident…because the Sager sample included only one type (type 9), and I therefore had no way to test the decodes of types 1, 3, 4, 5, and 16 (and later, when they upgraded the protocol, 13, 14 and 31).

Later, I got a different check for the ISGPS layer. We were able to use it to decode a special GPS message called a ‘subframe’ and see the current leap-second value we expected at the place we expected it. This did nothing to verify the RTCM2 layer, though.

One of my continuing headaches over the next six years was that I couldn’t get my hands on a decent test pair for this thing. What I needed was a pair of files like the Sager sample and its dump, but with a larger set of message types in it. High and low did I search the Internet, but in vain.

And then, last week, a guy who’s been working with us on our support for Ntrip (a differential-correction service popular in Europe) finally handed me a test pair consisting of a raw RTCM2 bitstream segment and an ASCII dump of it made by some ugly proprietary Windows program. The dump included reports for message types 1, 3, 14, and 31, but no type 9s as in the Sager sample. (This was OK, as the type 1 and type 9 message formats are effectively identical).

I had what I’d been looking for for six years…but to say that I feared actually collating that dump with GPSD’s output would be to wallow in understatement. So thick was the murk in the depths of my decoder that I had no freaking idea if I’d even be able to reconcile any differences.

And indeed it didn’t look good, initially. The first reports appeared to match, except that a field called the zcount (a sort of timestamp) was ever so slightly off. Groan. Eventually I figured out that the time boundaries of the binary and the ASCII dump didn’t quite coincide. If I looked a few reports after the first in the dump I saw something that looked like a match.

But it still didn’t look right. There were sentences of type 14 and 31 in the binary data that my decoder didn’t interpret because they weren’t defined six years ago. Groveling through some obscure documentation, I added more complexity to my huge pile of bitfield declarations and parsing code. Better, but…

I still thought I had lossage…until I noticed that all those wrong satellite ID fields in the type 31s were wrong by a constant offset of 40, and realized that this was because the author of the other program had mapped GLONASS satellite numbers upwards so as not to collide with the GPS satellite IDs in the type 1 messages.

I looked at my JSON dump. I looked at the ASCII dump from Windows-land. Four message types I’d never tested before. All. Matched. Perfectly. Even down to the low digits on the float quantities.

My jaw dropped open. “That just isn’t possible!” I thought to myself. “Something here has to be wrong. What are the odds?”

I looked again. And it was still right. Every bit of it.

Anybody who isn’t a pretty hard-core programmer probably died of boredom about a thousand words ago. But in case you made it this far, and don’t know much about what writing low-level bit-twiddling code is like…this is freaky. Once-in-a-blue-moon freaky. You’re-full-of-shit-I-don’t-believe-you freaky. Not so much like dodging a bullet as like dancing stark-naked through aimed fire from a platoon of machine-gunners.

So blindingly unlikely is this, in fact, that my skill level doesn’t enter into the odds. OK, I think I’m pretty good, and there’s objective evidence to back that up…but if I were Ken Thompson, Alan Turing, and Thoth Trismegistus rolled up into one ball of pulsating awesome I still wouldn’t have expected right-first-time on code like this.

So, what actually happened here?

I can isolate two things. First, the ISGPS layer, fearsome black art though it is, must have gotten better test coverage from the Sager sample and the odd subframe decode than I realized during those six years. That code terrifies me, and it will terrify you if you ever read it (“You are not expected to understand any of this.”) but in retrospective fact the only things that ever broke it were GCC optimizer bugs.

Second, hanging on to the invariant that the Sager sample decoded correctly through all the changes in the rest of the codebase was a very good idea, even if it was only one sentence type. It did nothing to verify the rest of the decoder, but at least it ensured that the framework around the rest didn’t get subtly bent out of shape while I wasn’t looking.

And I got really, really, really, really lucky.

So lucky that I’m still feeling a bit dazed by it. And I don’t think I can draw any lessons from this. It’s too random and weird. But we have so many legends of software disaster that I thought a story of anti-disaster would be worth sharing anyway.

60 comments

  1. There is a lot of advice out on the internet about things a programmer should never do. Top of the list is to never rewrite from scratch.

    http://www.joelonsoftware.com/articles/fog0000000069.html

    This post is a perfect example of why you should never rewrite debugged code from scratch. It is even better than “it compiled and ran perfectly the first time”. It also bashes my motto: Not tested == not working.

  2. When things like this happen to me, if they ever do, I usually discover that I mixed up the test. Such as typing “diff expected-results expected-results” instead of “diff expected-results test-results”. But that’s me.

    1. >Such as typing “diff expected-results expected-results” instead of “diff expected-results test-results”.

      That would be a very shrewd guess normally, but couldn’t have happened in this case. The ASCII dump I was given was in a format quite unlike the JSON that GPSD generates; they could only be compared by eyeball. Samples, from the ASCII dump first:

      Thu Mar 10 17:05:22.688 2011
      MT03 Stat 688 Status 6 Sec 1650.0 Seq 5 Frame 4
      MT03 Stat 688 ZCount 1650.0 X 3842290.920 Y 663782.760 Z 5030690.320
      

      And this is the GPSD JSON:

      {"class":"RTCM2","type":3,"station_id":688,"zcount":1650.0,"seqnum":5,
      "length":4,"station_health":6,"x":3842290.92,"y":663782.76,"z":5030690.32}
      
  3. Which just begs the question, why design such a convoluted protocol?

The lower-level sync system I can maybe understand, as a way to synchronize with an incoming serial bit stream when the bit stream is transmitted constantly and the receiver may only intermittently have reception.

    But then why design such an awful upper level data layer to carry the content?

    1. >But then why design such an awful upper level data layer to carry the content?

      Because you’re an EE.

      One of the things I’ve truly had my nose rubbed in, working with GPS and related tech, is what protocols designed by people who aren’t software engineers look like. Binary with tightest possible bit-packing, of course. Irregular field widths. In one particularly horrible case (AIS message type 22) there’s a discriminator bit that distinguishes between two variant formats of a bit span – located after the span.

  4. >But then why design such an awful upper level data layer to carry the content?

    Paraphrasing from Monty Python,

    [singing]
    Every bit is sacred. Every bit is great. If a bit is wasted, God gets quite irate.
    Every bit is wanted. Every bit is good. Every bit is needed In your neighbourhood.
    [/singing]

    Seriously, that’s how EEs think about everything. And, frankly, when you’re moving data over a link that provides 100 bits per second, and your messages are constrained to no more than 30 seconds in a 60-second window, 4 times a day, that’s what you have to do. But you don’t have to like it.

  5. Which just begs the question, why design such a convoluted protocol?

    No, it raises the question. To “beg the question” is to assume the very conclusion you’re trying to prove.

  6. In one particularly horrible case (AIS message type 22) there’s a discriminator bit that distinguishes between two variant formats of a bit span – located after the span.

    Whoever designed that needs to be duct-taped to an office chair and forced to watch the output of a random-number generator, which he’s been told is actually a proprietary protocol that he’s supposed to reverse engineer.

  7. Ah, this brings back the memories. Allow me to reminisce about something similar I once had to work on…

    The company I was working for had to decode a protocol used by military communications. The format was eerily similar to the ISGPS format: packed bit fields of irregular size, with the added wrinkle that most fields were optional and were preceded by a “field presence bit”, and many fields could be repeated a variable number of times, indicated by a “repeat count” whose bit count itself varied. There were hundreds of message types and thousands of different fields we had to handle. We had gotten a supposedly working implementation from somewhere, but it was a C++ template monstrosity, which nobody understood and which wasn’t usable by us since we needed to stick to ANSI C.

    I had one thing going for me: we had a copy of the spec, and the spec was written in an extremely regular format. (If military communication does one thing properly, it’s anally regular formatting of internal documents.) So in a rather heroic weekend, I converted the spec to txt, then wrote a perl script that munged the document and spit out C structs that described every message and field. The struct wasn’t a bit-packed representation of the message itself (which wouldn’t work, given the optional and repeatable fields), but rather the metadata: size, optionality, repeat counts, and pointers to metadata structs for contained fields and following fields, paired with a struct for actually manipulating the data in code. Then I wrote a rather simple interpreter that could pack and unpack the messages from the normal structs by reading the metadata structs. My colleagues were rather impressed when I checked in some 10,000 lines of .h files over one weekend.
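
    A stripped-down sketch of the scheme, with names invented for illustration (the real thing was generated from the spec and handled far more):

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>

      struct field_meta {
          const char *name;
          unsigned    bits;        /* field width in bits */
          bool        optional;    /* preceded by a presence bit? */
          unsigned    count_bits;  /* >0: preceded by a repeat count this wide */
      };

      /* pull 'bits' bits MSB-first from buf at *pos, advancing *pos */
      static uint32_t take_bits(const uint8_t *buf, size_t *pos, unsigned bits)
      {
          uint32_t v = 0;
          while (bits-- > 0) {
              v = (v << 1) | ((buf[*pos >> 3] >> (7 - (*pos & 7))) & 1);
              (*pos)++;
          }
          return v;
      }

      /* the interpreter: walk the metadata, honoring presence bits
       * and repeat counts */
      static void unpack(const uint8_t *buf, size_t *pos,
                         const struct field_meta *fields, size_t nfields,
                         void (*emit)(const char *, uint32_t))
      {
          for (size_t i = 0; i < nfields; i++) {
              const struct field_meta *f = &fields[i];
              if (f->optional && take_bits(buf, pos, 1) == 0)
                  continue;                    /* field absent */
              unsigned reps = f->count_bits
                  ? (unsigned)take_bits(buf, pos, f->count_bits) : 1;
              while (reps-- > 0)
                  emit(f->name, take_bits(buf, pos, f->bits));
          }
      }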

    It worked beautifully on about 95% of the message types immediately, and the remainder could be easily debugged because the metadata structs had a simple, one-to-one correspondence with entries in the spec and could be understood and modified by hand – something you absolutely couldn’t say for the C++ monster we had originally gotten. I’m still rather proud of the thing, despite the fact that the project later tanked and the code is now sitting unused in a dark corner of someone’s hard drive. Such is life.

  8. Don’t trust it.

    It’s broken.

    In some subtle way that will totally hose you just as soon as you forget the details again and depend on the code you thought was working.

    The dread god Finagle is just warming up his special vengeance on you. His mad prophet Murphy is capering in the aisles, dancing with glee between slugs of Guinness.

    If you’re at all smart, you’ll run away. Fast.

    1. >The dread god Finagle is just warming up his special vengeance on you. His mad prophet Murphy is capering in the aisles, dancing with glee between slugs of Guinness.

      Hey, whaddya think my second reaction was? “This can’t be right. If it is, I’ve used up all my luck for the next seven years. Oh, no, the dread god Finagle’s gonna have it in for me now…”

  9. > isgps.c

    Believe it or not, I’ve seen code that looks just like this buried in some old MS-DOS C code for an ancient, outdated GPS-based land navigation system. Maybe they started from the same code base (Sager, et al.)?

    But, from personal experience, I know the chances of getting something like this right the first time are near-zero. I agree with Jay; the odds that it’s broken in some subtle way are very high.

    And the designer of ISGPS200 deserves to be assigned the task of direct cryptanalysis of a file symlinked to /dev/random.

  10. Can you introduce errors (parity, perhaps?) into the bitstream input to see if the code catches the error? I realize it’s murky stuff and confidence of predicting the error would be low. Still, it might be a way to scratch this particular itch.

  11. @Monster

    > Whoever designed that needs to be duct-taped to an office chair and forced to watch the output of a random-number generator, which he’s been told is actually a proprietary protocol that he’s supposed to reverse engineer.

    Using a 10-y/o proprietary Windows tool with no documentation. I mean, if we’re going to get evil, let’s get EVIL >:)

    I’d like to add that this sort of design anti-philosophy is not limited to non-software folks. By all accounts, the software that runs wafer processing equipment made by certain not-to-be-named industry leaders is brain-damaged. Sure, the tools run (usually), but the file formats for things like process recipes and equipment constants are written in the same dense-packed binary format that can only be read by VERY expensive OEM software. All attempts to decode them have left me cursing…

  12. Never trust code that compiles and runs right the first time.

    Interestingly enough, I did write a quick hack in C to properly initialize a proprietary piece of hardware connected via Ethernet to a Linux machine at one Fortune 500 manufacturer.

    I wrote it while A) connected from remote, B) ssh’d in via a very old cygwin or mingw or whatever ssh client running in a DOS command window, running in a Microsoft RDP remote desktop session, so C) due to the brain damage of the aforementioned “terminal”, I typed it in, very carefully, off the top of my head, using ‘cat >hack.c’. The program compiled and ran correctly the first time. As far as I know, this program is still in use.

    Of course, all it had to do was open a socket connection to the appropriate IP and port, and send a specific, but computed, initialization string. Fairly trivial. I would have done it in Python or Perl, except that due to resource constraints, this particular Linux box had neither of them installed.

  13. Eric, is it possible that the software on the device that originated the dump used gpsd for its testing under development? If so, then THAT would explain why it works perfectly: it was DESIGNED to work perfectly.

    Just a thought.

    1. >Eric, is it possible that the software on the device that originated the dump used gpsd for its testing under development? If so, then THAT would explain why it works perfectly: it was DESIGNED to work perfectly.

      You had me really frightened there for a moment. Then I remembered that I implemented type 14 and 31 decoding after the Windows dumper did. So, no.

  14. Bitfield-packed, high-density protocols aren’t all evil; IP and TCP have many of the same properties, even though they are mostly byte-aligned. Older protocols like ATM have similar properties as well. Taking hardware into account is important – one of the considerations for the NIST crypto competitions is complexity and performance when implemented in raw hardware. In cases where you have a very limited data transfer capacity (such as radio communications to submarines), packing data is very important.

    However.

    That doesn’t excuse a lack of sanity or documentation. ASN.1 notation may be evil and ugly, but it is well-recognized evilness and ugliness for which there are definition compilers, validators and so on. At least with a definition along those lines you can validate the input more readily and generate as many test cases as you’d like by hand.

  15. I am not that hardcore (fortunately), but I have also had “I can’t believe that actually worked without hours and hours of grief” moments, myself.

    They’re satisfying and pleasing, once you’re sure things really are working.

  16. I am not that hardcore (fortunately), but I have also had “I can’t believe that actually worked without hours and hours of grief” moments, myself.

    They’re satisfying and pleasing, once you’re sure things really are working.

    You are insufficiently experienced in the ways of computing. They should evoke nothing but raw, naked terror.

  17. @ Jay Maynard

    I think esr is older than, and has been programming longer than, you. ;-)

  18. Now, I might be crazy. (I know I’m weird, but that’s required in this industry.)

    Once upon a long time ago, I looked at some Kermit code that used something quite like ‘lex’ (or ‘flex’ to you newbies) to construct its legal-packets finite state machine (a different type of FSM)

    Hearing this description of putrid horror makes me think of a custom lexer that returns each bit as a token, along with a byacc/bison parser-from-hell that recognizes each field in turn, based on the preceding bits. Or, given look-ahead, from the *next bit in line*.

    I see several issues with this, not the least of which is code bloat, but as a ‘debugging tool’ I wonder if it wouldn’t be useful.

  19. I was taught in an environment that despised tests. The CS department was in love with functional programming languages, doing some of the major work on developing ML and Haskell. The rationale was that a test could never prove that a program was correct. It could only prove that it was incorrect, so tests were useless.

    While it is true that tests only can prove that a program is incorrect, the conclusion that they are useless is totally unwarranted. It still amazes me to this day that apparently smart academics stuck to this dogma.

    I can safely say that if I had been taught about testing practices, I would have had many more finished projects under my belt. I had to learn the virtues of testing the hard way. Tests are the only working tools we have to increase the likelihood of a program working correctly after refactoring.

  20. One time I wrote a program on my commodore 64.

    It was:

    10 print “Hello!”
    20 goto 10
    run

    It worked the first time I typed it in!

  21. It still amazes me to this day that apparently smart academics stuck to this dogma.

    It shouldn’t. The word “academic” has a notable connotation of “irrelevant for practical purposes”. It earned that sense because many in academia feel that “pure science” is superior to money-grubbing practical applications of that science. Why, if you stoop to doing practical work, you might get some dirt under your fingernails, and ruin your nice manicure!

  22. I think esr is older than, and has been programming longer than, you. ;-)

    @Catherine Raymond:

    The dirty little secret is that esr has been programming longer than most of us, and we’re just here trying to gain his wisdom by osmosis. :-P

  23. Hearing this description of putrid horror makes me think of a custom lexer that returns each bit as a token, along with a byacc/bison parser-from-hell that recognizes each field in turn, based on the preceding bits. Or, given look-ahead, from the *next bit in line*.

    In what is essentially a device service daemon? Why don’t we just build in a copy of Emacs while we’re at it? ;)

    1. >In what is essentially a device service daemon? Why don’t we just build in a copy of Emacs while we’re at it? ;)

      You laugh, but in fact there is an FSM in gpsd that behaves a lot like a compiler’s lexer. It’s the “packet sniffer” that recognizes framed and checksummed data sentences from the sensors. The trick it pulls is recognizing any of the 17 packet protocols GPSD knows about and invoking the right analyzer on the packet. This is how the daemon autoconfigures itself to handle anything from a cheap NMEA mouse through an AIS radio to an RTCM2 source stream on the fly without type switches or config files.
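
      The skeleton of the idea looks something like this toy sketch. The lead-in bytes are real enough (NMEA’s ‘$’, SiRF’s 0xa0 0xa2), but everything else is drastically simplified; the real packet.c handles seventeen protocols and tracks length fields and checksums.

        enum sniff_state {
            GROUND,       /* hunting for any protocol's lead-in */
            NMEA_BODY,    /* saw '$', collecting up to LF */
            SIRF_LEAD,    /* saw 0xa0, awaiting its mate 0xa2 */
            SIRF_BODY,    /* collecting a length-framed binary packet */
        };

        enum sniff_state sniff_step(enum sniff_state s, unsigned char c)
        {
            switch (s) {
            case GROUND:
                if (c == '$')   return NMEA_BODY;  /* NMEA sentence? */
                if (c == 0xa0)  return SIRF_LEAD;  /* SiRF binary? */
                return GROUND;
            case NMEA_BODY:
                return (c == '\n') ? GROUND : NMEA_BODY; /* dispatch on LF */
            case SIRF_LEAD:
                return (c == 0xa2) ? SIRF_BODY : GROUND; /* false alarm */
            case SIRF_BODY:
                /* real code tracks the length field and verifies the
                 * checksum before declaring victory */
                return GROUND;
            }
            return GROUND;
        }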

      I’m quite proud of this design. I think it might well be patentable if I were inclined that way – and not a junk patent, I’d still be able to respect myself in the morning. I arrived at it by thinking about the way autobauding worked on old-school modems (and indeed gpsd autobauds as an intended side-effect of the technique) but it was a bit of a conceptual leap from autobauding up to a multi-protocol FSM and as far as I’m aware nothing quite like it exists elsewhere.

      That’s actually an interesting question for my commenters. Has anyone here encountered another case of a finite-state machine used for discrimination among multiple dissimilar packet types coming over the same wire, including syncing to the bitstream frames?

  24. > Has anyone here not heard the story of IEFBR14, the one-instruction program that had a bug as originally written?

    I plead guilty, although I’ve read the Wiki page now…

  25. @Daniel Franke:

    Right. You could do it with lex/yacc but you wouldn’t want to. You’d want to write your own bitstring parser, much as Mr. O’Sullivan has done in Haskell.

  26. “Has anyone here not heard the story of IEFBR14, the one-instruction program that had a bug as originally written?”

    It is a bit of received computer science wisdom that *any* program has at least one superfluous line and at least one bug. Apply this rule recursively and you can see that *any* program can be reduced to a single line with a bug in it.

  27. Jay: As I said, once I’m sure they’re really and honestly working they’re pleasing.

    They’re deeply unsettling before then.

    (Not “terror”. Because, hell, I’m not designing reactor control systems or medical firmware, just a point-of-sale system.

    The absolute worst case for the biggest screwup I could conceivably make [even if testing never caught it, somehow] is some loss of sales income.

    I just can’t get myself into “terror” for that.

    Now, with ESR’s code in gpsd, a subtle bug in that could eventually possibly get someone killed *directly as a result of trusting his code’s output*, in a very unlikely worst case scenario.

    That’s some sobering stuff.)

    1. >Now, with ESR’s code in gpsd, a subtle bug in that could eventually possibly get someone killed *directly as a result of trusting his code’s output*, in a very unlikely worst case scenario.

      Yes. Don’t think this hasn’t bothered me on occasion.

      It used to bother me more. Then I realized that if I weren’t doing this job, the odds are overwhelming that it would be in the hands of someone less competent. And much less careful about testing and verification.

      Note: The above does not constitute a claim that the set of designer/programmers more competent than me is null. Rather, it’s a consequence of the fact that the guts of GPS nav just have never seemed to attract really capable software people. The protocols are badly designed and poorly documented, the closed-source software I’ve seen seems brittle and shabby and poorly maintained even compared to closed-source code in other technical specialties…it’s a mess out there. A hacker from our culture wouldn’t even have to be near as capable as me to be doing work way above local standards.

  28. After this, I strongly recommend just staying in the house and avoiding any kind of sharp or potentially lethal object. Oh, wait …

  29. I just realized something. If test data is so hard to come by, how do you know there’s anyone in the entire Universe who uses this protocol and needs this feature? I mean, if there’s someone who actually uses this, surely that person would be able to give you some test data and you’d have found him already?

    1. >I just realized something. If test data is so hard to come by, how do you know there’s anyone in the entire Universe who uses this protocol and needs this feature?

      Never underestimate the power of cultural and historical barriers. RTCM2 is in use, all right – there are profit-making entities that specialize in collecting and delivering it to people doing survey-grade GPS (that’s accuracy down to the centimeter). But open source simply hasn’t done much in this space. The customers are all using awful proprietary crap and don’t know any better; actual software engineers are thin on the ground.

      Here’s an index of how bad it is out there. In just under two years of sporadic effort I have written what I am now reliably informed is the highest-quality Marine AIS decoder in the world. (My source is a marine-science research group that specializes in GIS analysis. They know way more about the competition than they want to.) I wasn’t aiming to beat the world, it happened because the domain experts are all second- or third-rate software engineers and all the first-rate software engineers other than me were off doing something else.

  30. Incorrect GPS navigation data has killed at least one member of the tech community I’m aware of, though that was a mapping error, not a location error.

    If being used for water nav, incorrect output could really screw up your boat, with possibly fatal consequences, in a somewhat likely circumstance.

  31. After seeing all the incredible stuff done with what turns out to be utter crap GPS software, I’m eagerly looking forward to what will be created after Eric cleans up the system.

    I suspect the lack of quality people is due to them taking one look at the mess and saying “Screw you guys, I’m going home.”

    1. >After seeing all the incredible stuff done with what turns out to be utter crap GPS software, I’m eagerly looking forward to what will be created after Eric cleans up the system.

      I should observe that, sadly, my ability to clean up the entire GPS system is limited to nonexistent. I can’t fix the horrible protocol designs. All I can do (what I have done) is make getting data off the sensors and to the apps as painless as possible.

  32. Regarding the much earlier discussion about moderation of the comments (possibly a year or two back):

    hanzie Says: Your comment is awaiting moderation.
    March 20th, 2011 at 3:47 am
    After seeing..

  33. After seeing all the incredible stuff done with what turns out to be utter crap GPS software, I’m eagerly looking forward to what will be created after Eric cleans up the system.

    Apparently it doesn’t take much to amaze you. Even playing with the GPS and navigation features on my EVO 4G, I’ve noticed plenty of odd GPS glitches that I suspect are due to crap GPS hardware and/or software.

    I suspect the lack of quality people is due to them taking one look at the mess and saying “Screw you guys, I’m going home.”

    It’s due to the race to make GPS hardware and GPS-enabled devices cheaper and cheaper. Instead of spending money on quality software development processes, they toss the work to programmers living in 3rd-world countries making $10 a day, who attempt to compensate for their lack of competency by throwing more bodies at the work.

    1. The low margins on GPS devices are only part of the problem. It is true that nobody in the industry seems to want to pay for top talent, but I think the larger problem is cultural. The industry is run by people who don’t understand software, don’t get open source, and don’t even comprehend open standards.

  34. Say, Eric, have you found GPSTk (http://www.gpstk.org/) at all useful in your work on gpsd? GPSTk is an open source project sponsored by our laboratory (http://www.arlut.utexas.edu/, we’ve hosted you at the lab for CACTUS), and it was created by a number of GPS engineers working on the ground station network for the GPS constellation. I believe our engineers are reasonably competent, and it’d be interesting to know if you have evaluated that work.

    Best,

    Jon

  35. Ah, never mind, Eric. I did what I should have in the first place, and googled for the intersection of gpsd and gpstk, and read past discussions on the gpsd list.

  36. ‘Three things to be wary of: A new kid in his prime,
    A man that knows the answers, and code that runs first time.’
    Duane Elms – Threes 1.1

    Eric, You have my sympathy. The NMEA stuff is bad enough, but differential and RTCM are the stuff of nightmares, as you discuss.

    I think the points raised above about engineers and protocols etc are valid (and very true), but we should remember that this stuff wasn’t produced or developed as the consumer product it has become.
    There were some very hostile, noisy and unpleasant environments involved.

    The NMEA side of things was to connect in to marine type navigation systems, and the differential muck was to enable higher accuracy for survey applications, as you noted.
    This in particular was a very specialised and small market, with VERY high prices for all the equipment, and lots of incentive for arcane and very proprietary and poorly documented protocols, extensions and hardware.
    Allowing your customer to buy an add-on base station receiver from Yakamoto TooHotToTouchee Chain Drive GPS Corp definitely wasn’t in the marketing plan.

    Not saying that that’s a good thing, it just is what it is. Hopefully, with the work you’ve done, the situation will improve, but I’d have to say that, in my experience, using the data after it’s decoded is just as much of a challenge as getting it in the first place.

    For your test data, I seem to remember there are fixed base stations dotted all over the continental USA e.g. the CORS network, Coast Guard etc. Could you not record some data streams from them to file and use as necessary for testing?

    1. >For your test data, I seem to remember there are fixed base stations dotted all over the continental USA e.g. the CORS network, Coast Guard etc. Could you not record some data streams from them to file and use as necessary for testing?

      The raw data streams are useless by themselves. As I tried to explain in the post, for test purposes I needed not just the binary data but an eyeball-readable dump made by a known-good decoder.

  37. Not sure that it’s quite the question you’re asking, and I haven’t used it for several years, but Waypoint Consulting used a similar approach in their GPS post processing software.
    Not cheap to buy, and usually used for GPS survey data with single and multiple base stations for precision positioning.

    Short desc – you give it base station(s) and ‘rover’ files and it auto detects the makes and models being used and loads the appropriate decoder to convert the manufacturer/protocol specific gps data to an intermediate ‘Waypoint’ GPS format for further processing.

  38. >Short desc – you give it base station(s) and ‘rover’ files and it auto detects the makes and models being used and loads the appropriate decoder to convert the manufacturer/protocol specific gps data to an intermediate ‘Waypoint’ GPS format for further processing.

    Sounds pretty similar. I wonder if they used my code? :-)

  39. It took a while to read this post. Congrats to all who wrote the code.

    In my early career, straddling HW & SW design, I used to write the microcode for 32-bit CPUs – before they became monolithic – several multi-wire boards made 1 CPU. (~$1,000,000 a copy) You had to know register, memory and bus latencies and clock cycles to get it right. (It was much more FUN than it sounds.)

    Anyway, I was assigned to Mission Control to build telemetry stream simulators before TDRSS, which meant there were 4 different radar streams pumping data to MC. Naturally, some were octal, some were hex. I wrote a bit-stream routine to process octal data streams in a hex-native CPU – ugliness & slowness abound in getting 12-bit octal quartets built/massaged in a 16-bit native universe.

    So, one Saturday morning I built a working assembly language routine to replace the scientist’s FORTRAN, then in a buzz of excitement, microcoded it Sunday morning.

    It. worked. flawlessly. the. first. time.

    That was fun, fun, fun.

    Never touched that microcode again…

    And never wrote a piece of code that worked the first time so flawlessly.

  40. Since this came up in the RSS feed and mentions optimizer bugs, I was wondering: Since optimizers have gotten much more aggressive in recent years (for example, some compilers will optimize any loop whose end condition “can’t happen” because it can prove it would imply undefined behavior such as an integer overflow or dereferencing an invalid pointer later on, into an infinite loop), have you seen an increase in optimizer bugs and/or in behavior that looks like an optimizer bug but the code can’t be proven correct?

    1. >have you seen an increase in optimizer bugs and/or in behavior that looks like an optimizer bug but the code can’t be proven correct?

      Optimizer bugs are uncommon enough in my experience that a rate change would be difficult to notice unless it were really dramatic. I don’t think I’m seeing that.
