GPSD 3.12 has shipped – bulletproofed from below

I’ve been radio silent the last couple of weeks mainly because I’ve been concentrating furiously on getting a GPSD release out the door. This one is a little more noteworthy than usual because it may actually have fixed a well-hidden flaw or vulnerability of some significance.

Regular readers may recall from back in 2013 that I published a heads-up titled No, GPSD is not the battery-killer on your Android! addressing a power-drain bug reported from a handful of Android phones.

I believed at the time that the proximate cause of the bug was somewhere in the kernel serial device drivers, specific to particular hardware on those phones. I still believe that, because if it had been a purely GPSD problem the error would likely have been much more widespread and I’d have been flooded with complaints.

However, I’ve been concerned ever since that GPSD might not have been doing everything it could to armor itself against bugginess in the layers below it. And a couple of weeks ago I found a problem…

The problem was that the particular combination of options I was using caused tty input to be presented to the select loop a character at a time. Oops! This is perfect for minimizing fix latency but not so good for avoiding power drain – it means that in the normal course of things the main loop could spin pretty fast.

I had CPU-usage measurements indicating this hadn’t been a problem, including on low-power ARM processors like the ones they put in phones. On the other hand, there were those scattered reports of excess power usage.

So I set out to change this, going to blocking I/O in which data from the device would be accumulated in the kernel tty layer and released to GPSD’s main loop in bursts, meaning it wouldn’t spin as often.
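
For the curious, the relevant knob is in POSIX termios: in non-canonical mode the VMIN and VTIME settings control how much data must accumulate before a blocking read() returns. A minimal sketch of the idea (illustrative only – this is not the actual gpsd code) looks something like this:

    #include <termios.h>
    #include <unistd.h>

    /* Configure fd so that reads block until a burst of data has
     * accumulated in the kernel tty layer, instead of waking the
     * main loop for every single character. */
    static int set_batched_input(int fd)
    {
        struct termios tio;

        if (tcgetattr(fd, &tio) != 0)
            return -1;
        cfmakeraw(&tio);          /* non-canonical mode, no echo */
        tio.c_cc[VMIN]  = 255;    /* return after up to 255 bytes... */
        tio.c_cc[VTIME] = 1;      /* ...or 0.1s of line silence after
                                     the first byte, whichever is sooner */
        return tcsetattr(fd, TCSANOW, &tio);
    }

Since a GPS typically emits its sentences in once-a-second bursts, the kernel can then hand each burst over in one or two wakeups rather than one per character.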

That change in itself was not difficult, but it produced some problems in my regression-test framework that took a while to iron out. Nothing deep, just a lot of fussy detail work.

Result: 3.12 works, it’s testable, and I think I may have removed the vulnerability that made the battery-eater bug possible.

30 comments

  1. Well done. Not that I own a smartphone or any other GPS-enabled device, mind you; but I do appreciate your continuous work, which I’d like to reward. (Besides, I occasionally ride a cab–and they do use GPS.)

    A couple of corrections: in the second paragraph, you wrote “back in back in in 2013” and “a a handful”.

    Anyway, keep up the good work. Oh, and here’s your reward. Like the song goes, “I know it’s not much, but it’s the best I can do”. ;-)

  2. Eric, was it a question of “what is the question?”

    How does lower-layer bugginess interact with this type of software, not just in this particular case (which you’ve detailed), but in a broader scope?

    Somewhat OT – when are we going to see more Great Beast build videos? It’s been nearly two months.

    1. >How does lower-layer bugginess interact with this type of software, not just in this particular case (which you’ve detailed), but in a broader scope?

      That is too general a question to have a well-defined answer. The best you can do is notice that a lower layer you’re depending on is prone to be buggy in a particular way, then either avoid triggering the bug or work around it.

      >Somewhat OT – when are we going to see more Great Beast build videos? It’s been nearly two months.

      Good question. I’ll nudge the TekSyndicate guys.

      Note that there probably won’t be another video focused on the build itself, but rather a more general “interview with ESR” kind of thing. That is, judging by the kind of footage they wanted.

  3. @Alex: I work for a company which does some of its own hardware design and I get to interact with the hardware team on occasion. The best thing to keep in mind is that all computers are analog devices.
    If your underlying support levels fail frequently in random ways, there isn’t much that you can do. However, if the lower levels don’t fail in random ways but in predictable ways or categories of ways, you can write special handling routines for those cases. It might involve retry attempts, re-initialization of subsystems, closing and reopening sockets, etc.
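
    A hypothetical sketch of that pattern in C (all names here are made up for illustration):

        #include <fcntl.h>
        #include <unistd.h>

        /* Retry a read a few times; if the device keeps failing in its
         * known, predictable way, close and reopen it before giving up. */
        ssize_t robust_read(const char *path, int *fdp, void *buf, size_t len)
        {
            for (int attempt = 0; attempt < 3; attempt++) {
                ssize_t n = read(*fdp, buf, len);
                if (n >= 0)
                    return n;
                close(*fdp);      /* known failure mode: re-initialize */
                *fdp = open(path, O_RDONLY | O_NOCTTY);
                if (*fdp < 0)
                    break;        /* the device is really gone */
            }
            return -1;
        }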

  4. ESR, Garrett, thanks heaps for your quite patient responses to a rather ill-defined question.

    So it’s basically keeping in mind “HERE BE DRAGONS, THOU ART CRUNCHY AND TASTE GOOD WITH KETCHUP” and stepping sufficiently carefully?

    Re: Great Beast videos – rock on, keep ’em coming, whether it was “how we dunit” or “ESR-piece Theatre, with your host, Wendell Cookie”

    Sorry for the lack of updates on my stumbling along the path described in How To Become A Hacker – it’s pretty much been bubbling along, with non-trivial architectural changes now seeming quite easy, and I’m even test-infecting some other devs (one won’t write new tests, but will use the existing suite as a backstop – eh, I’ll take what I can get) and non-dev technical types (one of whom really liked how quickly and smoothly, and with how few regressions, one project I’ve been helping him with has proceeded).

    1. >So it’s basically keeping in mind “HERE BE DRAGONS, THOU ART CRUNCHY AND TASTE GOOD WITH KETCHUP” and stepping sufficiently carefully?

      Alas, yes. :-)

  5. So when do I see my Nexus 6 running this? As an almost factory build, I presume it’s currently running a previous version of GPSD?

    1. >So when do I see my Nexus 6 running this? As an almost factory build, I presume it’s currently running a previous version of GPSD?

      I presume, too. But nobody tells me when they upgrade.

  6. >So when do I see my Nexus 6 running this? As an almost factory build, I presume it’s currently running a previous version of GPSD?

    Theoretically, there’s nothing to stop you from building a custom OS/distro/whatever for a smartphone that uses the latest GPSD.

    In practice I expect there to be many reasons which will make this hard and I expect to run into a great many of them when I get fed up with smartphone/tablet interfaces.

  7. > tty input to be presented to the select loop a character at a time.

    Heh. Back in 1983, I wrote a menu-based program to drive a PROM programmer in Z-80 assembly language on CP/M. I wanted to use a higher level language, but since I’d been hired to create Z-80 firmware for them I couldn’t complain too bitterly when the first assignment was to write an assembly program, even if it wasn’t for an embedded system.

    Fast-forward to 1985 or so, and I got to port it to a Vax. Using FORTRAN. I knew that character-at-a-time operation would be sub-optimal (hey, the bulk of my work was communications programming), but even back then, I was of the “make it work, then make it fast” school, and I had lots of other program pieces to worry about porting and getting correct, like the menu and the parsing and chopping up of bytes (and sometimes even nybbles or bits) for the various PROM formats, so I used really stupid character-at-a-time I/O, figuring I’d fix it soon.

    I fixed it even sooner than soon, because it would soak up 90% of the CPU on the Vax, effectively DoSing any other users on the machine…

  8. AFAICT, the way to request features is via their bug tracker at: https://code.google.com/p/android/issues
    I’m not sure if that’s the ideal way when the software is already in Android, nor can I discern where the Android gpsd source resides (they usually have a git mirror for each package).

  9. One of the questions that interests me about software is this one: when I find a defect in my software what could I have done differently to have prevented it from entering in the first place?

    Plainly, what you describe is not a bug, but it is arguably a defect, a feature that is less than satisfactory, though let’s not argue about the semantics of the word. How could this have been made visible before it ended up in production?

    In this particular case there are a couple of things worth considering. Firstly, I think there is a great deal of value in running profilers on code even if you don’t have any particular performance concerns. It allows you to see what your software is doing in a more dynamic sense. In this case you might have seen that there was an inner loop being called far more than you would expect.

    Secondly, I make it a habit to step through all new code in a debugger. The tool I use integrates the editor and debugger, so whenever I enter new code I automatically drop a breakpoint in it. By stepping through all new code you get a much better feeling for its dynamics, and you see stuff like this. Certainly I think this would have been readily apparent if you had done so.

    Anyway, not meant to be a criticism at all, these things are tricky, but just to offer a few thoughts on practices in development that I have found helpful.

    1. >Firstly, I think there is a great deal of value in running profilers on code even if you don’t have any particular performance concerns.

      Indeed. But that didn’t raise a flag in this case because I never profiled it myself on a system with the buzz problem. Even if I had, the code is so dominated by I/O, with or without the defect, that I’m not sure I could have picked up the problem.

      >Secondly, I make it a habit to step through all new code in a debugger.

      Over here in Unix-land, where we still do a lot of work from terminal emulators rather than GUIs, the equivalent best practice is to instrument the hell out of your code with print statements conditioned on debug level.

      In gpsd, I have 10 debug levels ranging up to “show me every read and select wait”. Intermediate debug levels show me things like all traffic to/from client sessions, or all state transitions in the packet recognizer. To characterize a bug, often all I have to do is crank up the debug level until an anomaly jumps out at me.

      One advantage of our method is that if the print statements are written well enough to be useful they implicitly document the code around them.
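
      The machinery behind that is nothing fancy – conceptually it’s just a guarded vfprintf(). A simplified sketch (not gpsd’s actual logging code):

          #include <stdio.h>
          #include <stdarg.h>

          static int debuglevel;    /* set from the -D command-line option */

          static void debug_log(int level, const char *fmt, ...)
          {
              va_list ap;

              if (level > debuglevel)
                  return;           /* below the threshold: stay silent */
              va_start(ap, fmt);
              vfprintf(stderr, fmt, ap);
              va_end(ap);
          }

          /* e.g. debug_log(8, "select woke: fd %d readable\n", fd); */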

  10. esr
    >Over here in Unix-land, where we still do a lot of work from terminal emulators

    Thanks for the feedback. I would recommend that all programmers spend a lot of time stepping through their code in gdb or whatever you use, GUI or not. Living the code by stepping through it gives an insight that is hard to obtain any other way. Debuggers are useful for a lot more than debugging.

    FWIW, I also think that for new programmers, people who are learning to code, they need to spend a lot more time in debuggers stepping around the code. It offers them a deep, practical insight into how code flows that is hard to obtain any other way, irrespective of the quality of teaching or abstraction. Acting like the computer executing the program gives one a very deep understanding of, and connection to, the code.

    If you have only ever used a debugger for debugging, I suggest you take some familiar code and give it a try. It changes the way you connect with code.

  11. @ Jessica Boxer
    >I also think that for new programmers, people who are learning to code, they need to spend a lot more time in debuggers stepping around the code.
    Thanks for the advice. I’ll keep it in mind. :)
    I also liked your metaphor of “living the code”, which sounds somewhat Zen. Didn’t our host write a koan about becoming the master? Both are nice.
    Since you talk about getting insight into the code: what’s your take on tools such as the Source Insight IDE or the SrcExpl plugin for Vim (whose author credits the aforementioned IDE as its inspiration)? (While the question is geared toward Jessica Boxer, I’d appreciate anyone’s thoughts on the matter.)

  12. @Jessica Boxer:

    FWIW, I also think that for new programmers, people who are learning to code, they need to spend a lot more time in debuggers stepping around the code.

    As a pedagogical tool, the debugger is a most excellent set of training wheels. (And here, pedagogical can be about programming in general, or about how a particular program works in a particular environment.)

    However, I have met far too many programmers who never take the training wheels off.

    Which actually sort of works OK in simple single-threaded scenarios. I have to admit to sometimes developing Python in a similar loosey-goosey way — try it and see what sticks.

    But the training wheels often fail to hold you up in deeply embedded stuff with multiple threads and interrupt levels going on. At that point, you need to take them off and learn how to use the debugger that (hopefully) sits on top of your shoulders.

    Not that debuggers can’t be useful for really hard problems, but I’ve seen people spend 4 days trying to set up a debugger to catch something that was really obvious by half-assed code inspection.

  13. “but I’ve seen people spend 4 days trying to set up a debugger to catch something that was really obvious by half-assed code inspection.”

    Yeah, but don’t forget the times that you’ve looked over a piece of code with a bug that’s obvious to another pair of eyes but you keep missing…we’ve all been there and done that.

  14. @Jay Maynard:

    Yeah, but don’t forget the times that you’ve looked over a piece of code with a bug that’s obvious to another pair of eyes but you keep missing

    Usually, that doesn’t happen, because I go home and then look again with my own fresh eyes the next day :-)

    One time, at a previous company, there was a code red because a major customer had an issue with a chip. IIRC, I had been on vacation or something — by the time I got sucked into the issue, it had been pretty much all hands on deck for two weeks — literally 15 people writing and running simulations, validation tests, etc.

    Since I was late to the party, I looked at what everybody else was doing, and felt (as usual) the best thing for me to do was to start looking at code, especially because nobody else seemed to be doing that. I looked all over the chip. I found and logged 4 bugs for the memory controller, but none of them matched the symptoms, so I kept looking. I finally found the bug in the arbiter. The bug existed because both the designer and the verification engineer independently mis-read the ARM specification about split bus transactions.

    This was over about a week — pretty much all of the other engineers had been working on the problem for about 3 weeks by that time.

    I couldn’t convince the arbiter designer that my interpretation of the ARM spec was correct, but I did come to an agreement with him that if I could show that a certain sequence of events made the ARM core assert a signal to the arbiter under certain conditions, that would be a Bad Thing.

    So I worked late into the night. It required a DMA cycle at exactly the right (wrong?) time, and I couldn’t figure out how to make one happen naturally, so I finally had to artificially create one at exactly the right time. Created a simulation that showed the problem; went to bed around 5:00AM.

    The designer picked it up in the morning, agreed it was a problem, nicely documented it with waveforms and diagrams that showed that it took 39 steps for things to go wonky, and had a meeting at 1:00 where everybody agreed that it was a problem and almost certainly the problem.

    Well, everybody except one validation guy, who apparently argued vociferously that I hadn’t actually proved anything, since I used a verilog “force” statement to get the DMA to occur where I wanted it to (never mind that in real life, a DMA request could come in on any clock.)

    The designer later expressed to me that he desperately wanted to reach across the table and strangle the validation guy; I was happy that I was still sleeping off my late night coding session when the meeting occurred, because I probably would have.

    Next up was the fib (like a blue wire, but on a chip) to prove that really was the problem. I came up with the simplest possible test — one cut and add to lie to the arbiter about the memory address being requested by the ARM, so that everything wound up at an aliased address, and there were never any collisions. Proved the problem, spun the chip.

    So, just like I’m not the best guy with a software debugger, I’m not the best guy with the simulator/waveform viewer, either. But that doesn’t matter — from not knowing that much about the code we wrote, or the code it interfaced to, I ramped up and found a problem in a week that 15 other guys — probably all of whom are better with that tool than me — couldn’t find in 3 weeks.

    Am I that much smarter than them? No, some of them are smarter than me. But tools warp your thought processes — you start thinking about ways to approach problems that your tool can support.

    It would seem that if you know a fancy tool or two really well, that would be a good thing, but it actually seems to constrain people’s thinking more than having a general idea (a) of what different tools are available and what they can do for you; (b) that you can always create a new tool if those around you are insufficient; and, of course, (c) that most of the time you don’t really need a tool other than your brain.

  15. While leaving carefully managed printf tracks, I add always-printed records for new and/or unusual paths. Then during test cases, when the condition occurs and is verified, I downgrade the track to a lower level like Detail. During the test-case cycle, a source scan displays the paths not yet tested.

    I have also used these debug outputs in post-processing to generate performance and advisory reports, in real time and in support, which were undreamt of by the original designers.

    John Alvord

  16. @Patrick Maupin
    > However, I have met far too many programmers who never take the training wheels off.

    Programming is extremely hard. I’d recommend keeping the training wheels on. After all, the whole point of most of the tools we use as programmers is to layer on abstractions to help us deal with the ungodly complexity of even the simplest programs.

    > Which actually sort of works OK in simple single-threaded scenarios.

    Sure, but just because every problem isn’t a nail doesn’t mean hammers aren’t useful. And I’d propose that the reason you say this is because you haven’t had much experience with really good debuggers. Really good debuggers have tools to help with really difficult cases such as multithreading, low level debugging and remote debugging.

    And oftentimes when you are working on these really low level modules the best place to use your debugger is in isolated unit test cases where you can solve most of the key problems via isolation using dependency injection or other inversion of control techniques.
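
    To make that concrete in C terms – a hypothetical sketch with made-up names: hand the module its I/O routine as a function pointer, so a unit test can inject a fake one with canned data.

        #include <stddef.h>
        #include <string.h>

        typedef size_t (*reader_fn)(void *buf, size_t len);

        /* Module under test: its input source is injected as a parameter. */
        size_t count_lines(reader_fn rd)
        {
            char buf[256];
            size_t n, total = 0;

            while ((n = rd(buf, sizeof buf)) > 0)
                for (size_t i = 0; i < n; i++)
                    if (buf[i] == '\n')
                        total++;
            return total;
        }

        /* Unit test: a fake reader with known data, no real device needed. */
        static size_t fake_reader(void *buf, size_t len)
        {
            static int spent;
            const char *data = "one\ntwo\n";
            size_t dlen = strlen(data);

            if (spent || len < dlen)
                return 0;
            spent = 1;
            memcpy(buf, data, dlen);
            return dlen;          /* count_lines(fake_reader) == 2 */
        }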

    Of course they can’t solve everything. I once debugged a program with an oscilloscope, where it turned out my interrupt service routine was taking longer than the period of the clock interrupt. There are few easy ways to solve this problem except raw, focused hard work.

  17. @Jessica Boxer:

    Which actually sort of works OK in simple single-threaded scenarios.

    And I’d propose that the reason you say this is because you haven’t had much experience with really good debuggers.

    Not at all. It’s because certain problems are not really all that amenable to being caught with regular software debuggers. Like problems that happen once every 3 days that are only detectable after post-processing the output audio stream. In general, I find that instrumenting the code to spill its guts about what it is doing out a side-channel (aka the Unix method) is much more productive than trying to make a hardware breakpoint happen and then trying to rewind all your threads 2 million cycles to figure out what was going on when the bad thing occurred.

    And oftentimes when you are working on these really low level modules the best place to use your debugger is in isolated unit test cases

    Color me crazy, but IMO one of the best rationales for good test cases is reducing/removing the requirement for a debugger.

    I once debugged a program with an oscilloscope, …

    I use an oscilloscope probably 3 times a month (it sits on my desk), a debugger probably once every 3 years. If I were actively developing firmware, I’d use a debugger more often, but mainly as (a) a mechanism to load the code, and (b) for the pedagogical purposes discussed — learning what the code is doing.

    Now, maybe it’s because I have a reputation of being able to solve hard problems, but probably most of the problems that people have given me would be much more difficult to tackle with a debugger than via other methods — in other words, I have spent a lot of time dealing with problems that others have failed to solve with debuggers…

  18. @Patrick Maupin
    > Not at all. It’s because certain problems are not really all that amenable to being caught with regular software debuggers.

    That is true, but you are changing the subject from threaded programs, and good debuggers go a LONG way to making multi threading easier to debug (not easy, easier.) FWIW, good design makes multi-threading easy to debug. Nah, let’s go with good design makes multi-threading easier to debug too. That stuff is a bitch no matter how much you swaddle it in cotton balls.

    > Like problems that happen once every 3 days that are only detectable after post-processing the output audio stream.

    Sure, sometimes heavily instrumented code is the only way to go when you have pulled all your hair out, but these sorts of bugs are less than 1% of typical bugs. Moreover, the first rule of debugging is “make the bug reliably reproducible”. Now some people would say that is unrealistic, but if that is true there are generally two reasons:

    1. The code is poorly written, and doesn’t have the most neglected property of good quality code, namely debugability.
    2. They didn’t try hard enough.

    Really, in my opinion, you can’t say you have fixed a bug until you can reliably reproduce it (and then demonstrate that the reliable method no longer produces the bug on the corrected software.)

    Of course out toward the dragon seas there are some that need the approach you mention, but there are vastly more of those if the code is poor, and particularly if it is poor in the particular quality of debugability I mentioned above.

    As a general rule, I am not a fan of excessively manually instrumented code. It clutters up the code with junk (even if the junk is descriptive.) Good software is concise, and in easily digestible (and easily testable) chunks. Really, although this is sometimes impractical, filling more than a page on the editor for a function makes it considerably harder to grok.

    I like automatically instrumented code, where metadata is added by tools on top of the code. As in, for example, the profiler I mentioned before, which is useful not so much for timing as for call counts and call graphs. And similarly debuggers. And there are lots of others.

    > Color me crazy, but IMO one of the best rationales for good test cases is
    > reducing/removing the requirement for a debugger.

    Sure, if you only think of a debugger as a tool to find and remove bugs. They can do a lot more than that.

  19. @Jessica Boxer:

    but you are changing the subject from threaded programs,

    Not at all. I originally contrasted “simple single-threaded scenarios” with “deeply embedded stuff with multiple threads and interrupt levels going on.”

    Sure, sometimes heavily instrumented code is the only way to go when you have pulled all your hair out, but these sorts of bugs are less than 1% or typical bugs.

    And my belief is that you can substitute “a debugger” for “heavily instrumented code” in that statement and it will still be true.

    Moreover, the first rule of debugging is “make the bug reliably reproducible”.

    Agreed that that is a good rule of thumb for proving you have killed a bug. But for some bugs this is a chicken-and-egg problem, in that you can’t make it reliably reproducible until you understand exactly what it is, and for other bugs, well, see below…

    The code is poorly written, and doesn’t have the most neglected property of good quality code namely debugability.

    Annnnnd… Frankly, this completely contradicts your previous statement. E.g. last fall I was handed some really ugly code that didn’t work right and had way too many states and variables. I rewrote it to make it beautiful by my standards, and then it just worked.

    Now why the heck would I go back and try to prove exactly how the original was failing, when even looking at it gave me a headache? I have better things to do than prove that really ugly shit that has 27 obvious ways to break can also break in this 28th way that is being seen in real life but is not immediately obvious by code inspection.

    As a general rule, I am not a fan of excessively manually instrumented code. It clutters up the code with junk (even if the junk is descriptive.) Good software is concise, and in easily digestible (and easily testable) chunks.

    There are two types of manual code instrumentation — instrumentation that is designed to be kept with the code, and instrumentation that is added temporarily in order to help an immediate debugging problem.

    Instrumentation of the first kind, coded properly, doesn’t change conciseness or testability. At all. Ever. If it does, you’re doing it wrong.

    Instrumentation of the second kind can do whatever the hell you want. You’re going to throw it away anyway.

    I like automatically instrumented code, where meta data is added by tools on top of the code.

    Automated instrumentation is great in a lot of scenarios. But when the bandwidth to your debugger is limited compared to the amount of code you are executing and compared to some of your other paths (no, I’m not changing the subject — I’m still on deeply embedded stuff), it’s usually better to roll your own instrumentation, which may be manual or may be automated — let’s face it, it isn’t really all that difficult to emit location snapshots like a profiler does — or some combination of the two.
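
    Concretely, the rolled-your-own version can be as small as a ring buffer of location markers, dumped later over whatever side channel you do have. A hypothetical sketch:

        #include <stdint.h>

        #define TRACE_SLOTS 256

        static volatile uint16_t trace_buf[TRACE_SLOTS];
        static volatile unsigned trace_idx;

        /* Cheap enough to sprinkle through hot paths and interrupt
         * handlers: record a source-line marker in a ring buffer.
         * (Not atomic as written; good enough for a sketch.) */
        #define TRACE() (trace_buf[trace_idx++ % TRACE_SLOTS] = (uint16_t)__LINE__)

        /* After the bad thing happens, dump trace_buf over the side
         * channel (UART, JTAG, shared memory) and inspect the most
         * recent markers to see where the code has been. */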

    > [Debuggers] can do a lot more than [finding and removing bugs].

    Perhaps you missed where I understand that completely and still think that they are usually a seductive waste of time. If I fully learned every piece of debugging software and every piece of debugging equipment that was in my building, then… well, actually, I couldn’t. Updates happen too frequently. But, as I said previously, understanding what they are all capable of actually is important, so that I can bone up when one of them is directly applicable to the problem at hand, because root cause analysis actually is in my job description.

  20. OT: Has anybody else ever managed to render their machine unbootable and subtly but pervasively corrupt their backups in one fell swoop, or have I just set a new record for human stupidity?

  21. @Jon Brase –

    Can you boot from a USB stick or CD / DVD? If so, can you still get to (at least some of) your HDD files? If you boot from an ‘install medium’, can you use it to repair your HDD environment? Worst case, copy off the files you need and re-install. (Yeah, I agree – YUK!)

    > … or have I just set a new record for human stupidity?

    Not even close. For example, I once was zeroing out a bunch of USB drives with the *nix command
        dd if=/dev/zero of=/dev/sdb
    and I made the fatal error of typing “sda”, my HDD. Realized it a second or two later, and interrupted the command. All I had managed to smoke was my Master Boot Record and part of my /boot filesystem (i.e., the on-disk image of the running kernel). The box stayed up and behaved normally – but I dared not shut it down. Spent the next three hours or so trying to repair the damage, with no success. Finally had to just copy my home directory and a few other key bits onto a USB drive, and reinstall. That was a waste of a perfectly good weekend.

  22. @Alex: “HERE BE DRAGONS, THOU ART CRUNCHY AND TASTE GOOD WITH KETCHUP”

    Well, yes. Here are some examples of things which I’ve encountered directly or passed on from colleagues. Mitigating these issues is an exercise left to the reader:
    * Disk drives will tell you data was written to disk when it is only in the cache.
    * Said drives will return cached data if requested, even if it hasn’t made it to disk.
    * Drives writing when power loss occurs may continue to write out garbage while spinning down.
    * Drives with extra capacitors to allow for flush-on-power loss may have their cache zeroed if the power comes back online too quickly, writing out all zeros.
    * Disk drives may successfully write out data to the wrong sector.
    * ~20 years ago I had a floppy disk drive view all media as corrupt because of a bad modem (bad crystal was probably interfering with the ISA bus).
    * People will power sensitive computer equipment with electrical supplies that make you yearn for 3rd world quality.
    * Any device attached to your system can fail arbitrarily.
    * Context switches can occur at any time.
    * Systems can run out of memory at any time.
    * Most shipped bugs occur in the error-handling paths because testing is hard.
    * PCI bus resets can occur at any time.
    * Any RPC call you make can take an arbitrary amount of time to complete.
    * Any RPC call you retry will inevitably have the first attempt start running just as the retry request is queued.
    * Any bug in a distributed system will require the state of the machine from which you didn’t get a core dump.
    * Users will reconfigure their network while using your product and then demand to know why things aren’t working.
    * Users will always fail to configure your product according to the directions.
    * Users will want to do absolutely insane things with any product you make.
    * Even if you tell them upfront that this isn’t supported.
    * “It used to work” is a bug in a new version of your software, even if the old behavior was bad.
    * Or due to the broken behavior of a completely unrelated product.
    * Users will never provide you the correct answers to the questions you ask.
    1. >Well, yes. Here are some examples of things which I’ve encountered directly or passed on from colleagues.

      I have encountered many of these. Your list suggests that my users are less crazy and stupid than yours, a difference for which I am profoundly grateful.

  23. Fortunately enough, the corruption of the backup consists merely of a constant string appended to every filename on the backup volume, so it should be possible to just batch rename everything back, but the backup is not useable until that happens.

  24. @Garrett: *wibble*

    And here I thought I was nuts for trying to wring the ever-loving crap out of error-handling code, trying to make sure that what it did and what I thought it did were within cooee of each other.

  25. @Jon Brase on 2015-02-26 at 00:27:23 said:
    > OT: Has anybody else ever managed to render their machine unbootable and
    > subtly but pervasively corrupt their backups in one fell swoop,

    Or just not have backups to begin with. Or reformat the drive with the backups on it.

    Nope, not me. Never.

    > or have I just set a new record for human stupidity?

    Have you been paying attention to the US Congress and President the last 6 years?
