Never let an invariant go untested

I’ve been blog-silent the last couple of days because I’ve been chasing down the bug I mentioned in Request for help – I need a statistician.

I have since found and fixed it. Thereby hangs a tale, and a cautionary lesson.

Going in, my guess was that the problem was in the covariance-matrix algebra used to compute the DOP (dilution-of-precision) figures from the geometry of the satellite skyview.

(I was originally going to write a longer description than that sentence – but I ruefully concluded that if that sentence was meaningless noise to you, the longer explanation would be too. All you mathematical illiterates out there can feel free to go off and have a life or something.)

My suspicion particularly fell on a function that did partial matrix inversion. Because I only need the diagonal elements of the inverted matrix, the most economical way to compute them seemed to be by minor subdeterminants rather than a whole-matrix method like Gauss-Jordan elimination. My guess was that I’d fucked that up in some fiendishly subtle way.
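
The trick, in sketch form (illustrative code, not the actual GPSD routine): on the diagonal the cofactor sign is always +1, so each diagonal element of the inverse is just the corresponding minor divided by the determinant of the whole matrix.

```c
#include <stdio.h>

#define N 4   /* e.g. a 4x4 covariance matrix: x, y, z, t */

/* Determinant by cofactor expansion along the first row.  Fine for a
 * small fixed N; not how you'd do it for big matrices. */
static double det(double m[][N], int n)
{
    if (n == 1)
        return m[0][0];
    double sum = 0, sub[N][N];
    for (int c = 0; c < n; c++) {
        for (int r = 1; r < n; r++) {   /* build the minor of element (0,c) */
            int cc = 0;
            for (int k = 0; k < n; k++)
                if (k != c)
                    sub[r - 1][cc++] = m[r][k];
        }
        sum += (c % 2 ? -1.0 : 1.0) * m[0][c] * det(sub, n - 1);
    }
    return sum;
}

/* Diagonal of the inverse: inv[i][i] = minor(i,i) / det(A), because the
 * cofactor sign (-1)^(i+i) is always +1 on the diagonal. */
static void inverse_diagonal(double a[][N], double diag[N])
{
    double d = det(a, N), sub[N][N];

    for (int i = 0; i < N; i++) {
        int rr = 0;
        for (int r = 0; r < N; r++) {
            if (r == i)
                continue;
            int cc = 0;
            for (int c = 0; c < N; c++)
                if (c != i)
                    sub[rr][cc++] = a[r][c];
            rr++;
        }
        diag[i] = det(sub, N - 1) / d;
    }
}

int main(void)
{
    /* sanity check: diag(2,4,5,10) should invert to 0.5, 0.25, 0.2, 0.1 */
    double a[N][N] = {{2,0,0,0}, {0,4,0,0}, {0,0,5,0}, {0,0,0,10}};
    double diag[N];

    inverse_diagonal(a, diag);
    for (int i = 0; i < N; i++)
        printf("%g ", diag[i]);
    putchar('\n');
    return 0;
}
```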

The one clue I had was a broken symmetry. The results of the computation should be invariant under permutations of the rows of the matrix – or, less abstractly, it shouldn’t matter which order you list the satellites in. But it did.
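
That invariant is cheap to express as a test. Something like the following sketch – struct sat and compute_hdop() here are stand-ins for illustration, not the real GPSD interfaces:

```c
#include <assert.h>
#include <math.h>

/* Stand-ins for the real skyview type and DOP computation. */
struct sat { double azimuth, elevation; };
double compute_hdop(const struct sat *sats, int n);

/* The invariant: DOP must not depend on the order the satellites are
 * listed in.  Reversing the list is one cheap permutation to try. */
void check_order_invariance(struct sat *sats, int n)
{
    double before = compute_hdop(sats, n);

    for (int i = 0, j = n - 1; i < j; i++, j--) {
        struct sat tmp = sats[i];
        sats[i] = sats[j];
        sats[j] = tmp;
    }

    assert(fabs(compute_hdop(sats, n) - before) < 1e-9);
}
```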

How did I notice this? Um. I was refactoring some code – actually, refactoring the data structure the skyview was kept in. For hysterical raisins (that is, historical reasons) the azimuth/elevation and signal-strength figures for the sats had been kept in parallel integer arrays. There was a persistent bad smell about the code that managed these arrays that I thought might be cured if I morphed them into an array of structs, one struct per satellite.
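
In skeleton form the change was from the first layout below to the second (field names illustrative, not the exact declarations):

```c
#define MAXCHANNELS 32   /* illustrative capacity, not GPSD's actual value */

/* Before: parallel arrays, one slot per satellite in each. */
int azimuth[MAXCHANNELS], elevation[MAXCHANNELS], ss[MAXCHANNELS];

/* After: one struct per satellite, so the per-sat fields can never get
 * out of step with each other when entries are added, dropped or sorted. */
struct satellite {
    int azimuth;     /* degrees */
    int elevation;   /* degrees */
    int ss;          /* signal strength */
};
struct satellite skyview[MAXCHANNELS];
```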

Yeeup, sure enough. I flushed two minor bugs out of cover. Then I rebuilt the interface to the matrix-algebra routines. And the sats got fed to them in a different order than previously. And the regression tests broke loudly, oh shit.

There are already a couple of lessons here. First, have a freakin’ regression test. Had I not I might have sailed on in blissful ignorance that the code was broken.

Second, though “If it ain’t broke, don’t fix it” is generally good advice, it is overridden by this: If you don’t know that it’s broken, but it smells bad, trust your nose and refactor the living hell out of it. Odds are good that something will shake loose and fall on the floor.

This is the point at which I thought I needed a statistician. And I found one – but, I thought, to constrain the problem nicely before I dropped it on him, it would be a good idea to isolate out the suspicious matrix-inversion routine and write a unit test for it. Which I did. And it passed with flying colors.

While it was nice to know I had not actually screwed the pooch in that particular orifice, this left me without a clue where the actual bug was. So I started instrumenting, testing for the point in the computational pipeline where row-symmetry broke down.

Aaand I found it. It was a stupid little subscript error in the function that filled the covariance matrix from the satellite list – k in two places where i should have been. Easy mistake to make, impossible for any of the four static code checkers I use to see, and damnably difficult to spot with the Mark 1 eyeball even if you know that the bug has to be in those six lines somewhere. Particularly because the wrong code didn’t produce crazy numbers; they looked plausible, though the shape of the error volume was distorted.
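
To give the flavor of the bug class – this is an invented illustration, not the actual GPSD function – the fill loop builds one row of the geometry matrix per satellite, and a stray index in any one of those assignments still yields numbers of about the right size:

```c
#include <math.h>

/* Illustration only -- not the real GPSD code.  One row per satellite:
 * the unit line-of-sight vector plus a constant column for the clock term. */
void fill_geometry_matrix(double a[][4],
                          const double az[], const double el[], int nsats)
{
    for (int i = 0; i < nsats; i++) {
        a[i][0] = cos(el[i]) * sin(az[i]);
        a[i][1] = cos(el[i]) * cos(az[i]);
        a[i][2] = sin(el[i]);   /* the bug was of the form el[k] where el[i]
                                   was meant: still plausible-looking numbers,
                                   just a distorted error volume, and invisible
                                   to static checkers */
        a[i][3] = 1.0;
    }
}
```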

Now let’s review my mistakes. There were two, a little one and a big one. The little one was making a wrong guess about the nature of the bug and thinking I needed a kind of help I didn’t. But I don’t feel bad about that one; ex ante it was still the most reasonable guess. The highest-complexity code in a computation is generally the most plausible place to suspect a bug, especially when you know you don’t grok the algorithm.

The big mistake was poor test coverage. I should have written a unit test for the specialized matrix inverter when I first coded it – and I should have tested for satellite order invariance.

The general rule here is: to constrain defects as much as possible, never let an invariant go untested.

36 comments

    1. >mathematical illiterates? Innumerates.

      Oh, no. Not the same thing at all. A person can be numerate (good at arithmetic, good at estimating magnitudes, decent grasp on precision and accuracy issues) but mathematically illiterate (no grasp of anything like, in this case, matrix algebra – or other higher mathematics). Or vice-versa.

  1. Matrix algebra is taught at Higher Secondary schools (pre-university level) in India. I wonder at what level it is taught in the US. Having said that, I have forgotten a lot of the concepts involved except the basics. Probably a revision would help.

  2. Yes, yes and yes. The word “refactor” has morphed from its original meaning to mean simply “change code”. It does not mean that. Refactoring as originally defined was absolutely predicated on high-coverage regression tests. As far as I am concerned you are not refactoring if you don’t have high-coverage regression tests that you run as you refactor. If you don’t, you are just changing stuff.

    Refactor means to change the code’s structure without changing its external function, and so if you have no way to reliably test its external function you have no way to know if your refactor was successful.

    One of the other things I do a lot (and this is due to a very old-fashioned book called “Writing Solid Code”, which comes from a Microsoftie, so I doubt you have read it): I litter my code with assertions and assertion-type statements, including object state validation.

    This is a kind of built-in unit testing tool. In Microsoft development tools it is normal practice to have two types of builds: a debug build and a release build. The assertion framework and tools are built to be excluded during the release build so that your debug build is heavily auto-tested, without a corresponding cost in the release build. I know DEBUG #ifdefs are common but I don’t know if that sort of framework is used as ubiquitously.
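
    Roughly like this minimal sketch – names invented for the example, not code from any particular project:

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Compiled out of release builds (where NDEBUG is defined),
     * present and noisy in debug builds. */
    #ifdef NDEBUG
    #define ASSERT(cond) ((void)0)
    #else
    #define ASSERT(cond) \
        do { \
            if (!(cond)) { \
                fprintf(stderr, "assertion failed: %s (%s:%d)\n", \
                        #cond, __FILE__, __LINE__); \
                abort(); \
            } \
        } while (0)
    #endif

    /* Object-state validation: call it wherever the struct should be sane. */
    struct skyview { int nsats, used; };

    static void validate_skyview(const struct skyview *sv)
    {
        ASSERT(sv != NULL);
        ASSERT(sv->nsats >= 0);
        ASSERT(sv->used >= 0 && sv->used <= sv->nsats);
    }

    int main(void)
    {
        struct skyview sv = { .nsats = 12, .used = 8 };
        validate_skyview(&sv);
        return 0;
    }
    ```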

    I had a professor in college who used to say “there is no such thing as a constant, only a SLOW variable.”

  3. > I was refactoring some code – actually, refactoring the data structure the skyview was kept in. For historical reasons the azimuth/elevation and signal-strength figures for the sats had been kept in parallel integer arrays. There was a persistent bad smell about the code that managed these arrays that I thought might be cured if I morphed them into an array of structs, one struct per satellite.

    Well, it is not a code smell if you are interested in a high-performance data-parallel implementation – because of memory access patterns, struct-of-arrays is faster than array-of-structs for such code (both on CPU, due to cache locality, and on GPU, due to coalesced parallel reads). But with, AFAIK, a max of 12-16 satellites visible at a location, and 32 overall (excluding GLONASS), there is not much call for such premature optimization; readability and maintainability are more important.

    Though in C++ you can abstract the difference between AoS and SoA away.
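
    For concreteness, the two layouts side by side (names invented for the example):

    ```c
    #define NSATS 32   /* illustrative */

    /* Array of structs (AoS): convenient for per-satellite code. */
    struct sat { float az, el, ss; };
    struct sat sky_aos[NSATS];

    /* Struct of arrays (SoA): each field is contiguous in memory, which is
     * what caches, SIMD units and GPU coalesced reads like. */
    struct sky_soa {
        float az[NSATS];
        float el[NSATS];
        float ss[NSATS];
    };
    struct sky_soa sky;
    ```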

    1. >Well, it is not code smell if you are interested in high performance data parallel implementation

      That is a reasonable point, but the latency of a software chain with a GPS at one end is so dominated by the device latency that in this context it never arises.

  4. > The general rule here is: to constrain defects as much as possible, never let an invariant go untested.

    Or *assert* invariants (and for testing purposes build with assertions enabled), which doubles as documentation of code.

  5. > I had a professor in college who used to say “there is no such thing as a constant, only a SLOW variable.”

    Heh…nice :)

    It is astonishing how many supposedly experienced programmers actually believe that declaring something static/const is an invulnerable guarantee that it will never change. Making them watch a hardware breakpoint soon disabuses them of such silliness.

  6. PS. Hey ESR, do you incorporate valgrind into your dev cycle?

    I bless the day (many solar cycles ago) that I discovered its utility as part of my everyday routine.

    1. >PS. Hey ESR, do you incorporate valgrind into your dev cycle?

      Generally, yes. I don’t often use it on GPSD because of the no-malloc rule. It’s not really very good at spotting static overruns; Coverity does that better.

  7. Matrix inversion is typically part of second-year algebra in American high school, which we take in the last or next-to-last year before college. However, it’s one of many topics covered in that year, and prone to get dropped from the syllabus if the class is having trouble with earlier lessons.

    UT Austin had a linear algebra course that got much deeper; math majors tended to take this in their sophomore or junior year.

  8. >Matrix inversion is typically part of second-year algebra in American high school, which we take in the last or next-to-last year before college. However, it’s one of many topics covered in that year, and prone to get dropped from the syllabus if the class is having trouble with earlier lessons.

    My son had it in 7th and 8th grade in a public (charter) school.
    Then he had it again in 10th and 11th grades in a public high school.

  9. “…It’s not really very good at spotting static overruns…”

    I’m not sure what you’re criticizing here….I’d love to see an example code snippet that valgrind fails to catch. Whether dynamically or statically allocated, it should catch any read/write overrun. My experiences with it have been rock solid. If there are holes, I’d like to know about them :)

    1. >I’m not sure what you’re criticizing here

      Disappointing experiences with memcheck years ago. Happens I’m looking at an IRC channel where a hacker I know to have clues just said “valgrind memcheck can’t [spot static overruns]; there’s a different valgrind tool that can, but it’s experimental”. I’ll take another look if Coverity’s public server goes poof.

  10. Personally, I like to let asserts be compiled even in release builds. Of course, depending on how critical crashing is or on how much it could affect performance, it may be impossible in your business.

    Some asserts check trivial stuff and have little overhead. Some asserts are in functions that are always called sparsely, and can be run without any noticeable delay. Some invariants are more costly, but I prefer to compile them in anyway, disabled by default, so that a command-line flag can easily enable them.
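
    As a sketch of that pattern (flag and macro names invented for the example):

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Cheap asserts stay always-on; costly invariant checks are compiled in
     * but gated by a flag that a command-line option can flip. */
    static int check_expensive_invariants = 0;   /* off by default */

    #define CHECK_EXPENSIVE(cond) \
        do { \
            if (check_expensive_invariants && !(cond)) { \
                fprintf(stderr, "invariant violated: %s\n", #cond); \
                abort(); \
            } \
        } while (0)

    int main(int argc, char **argv)
    {
        for (int i = 1; i < argc; i++)
            if (strcmp(argv[i], "--paranoid") == 0)
                check_expensive_invariants = 1;

        /* ... real work here; sprinkle CHECK_EXPENSIVE(...) at the costly
         * places, e.g. CHECK_EXPENSIVE(list_is_sorted(list, n)); */
        return 0;
    }
    ```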

  11. Ah. Your hacker buddy is probably referring to sgcheck…it isn’t perfect, but I have had good results.

    I remember reading in the valgrind FAQ about the memcheck limitation regarding global/stack arrays….I wish I understood the problem better. I should dig into it when I have the luxury of some free time :)

  12. So happy you got it figured out!

    I don’t know if the all-integer matrix/inverse pair and its generation script were helpful to you, but it was a fun exercise for me, and I will be adding that pair to my matrix unit tests at work — I’m hoping with an all-integer test case I can assert full == between doubles, rather than just within tolerance.

    Also, I’m delighted that my vocabulary now includes the phrase “hysterical raisins”!

    1. >I’m hoping with an all-integer test case I can assert full == between doubles, rather than just within tolerance.

      Yeah, that’s why I wanted it, too. GPSD has had troubles with “within tolerance” before.

  13. As others have pointed out, matrix inversion is a standard part of the high school curriculum in the US, at least for people on a higher math track.

    However, neither the teachers nor the students have any idea why they are doing this, which limits its effectiveness.

    The best explanation the students tend to get is that it makes Gaussian elimination easier than doing it via manual substitution, to which the next question is “Why would I ever be solving a set of equations anyhow?”, for which the math teacher will have no good answer beyond yelling “Math!” and deploying vigorous jazz hands.

    Sigh. Lockhart’s lament etc. etc.

  14. >> “Why would I ever be solving a set of equations anyhow?”

    I’m probably well into the one-percenters on math, but I live in algebra, trig and geometry. I have occasion to do simultaneous equations a time or three per year, and don’t recall the last time I needed to go beyond three.
    Those who do have the interest and need to get into that kind of math will be few and far between in the normal population. On this blog, maybe one in ten, or maybe more of you than that.

    I sometimes wish I had taken the time to learn more statistics, but while I have taken classes in calculus and linear algebra — and did well — my life in a window manufacturing plant and former life as a machine shop operator hasn’t needed them much. I recall one time that calculus answered a professional question that I needed an answer to. In nearly 60 years as a maker.

    Jim

  15. Jeremy Bowers
    > The best explanation the students tend to get is that it makes Gaussian elimination easier than doing via manual substitutions,

    Well duh! This is just an indication that the teachers suck. Obviously the answer is “It means that when you blow stuff up in an FPS game, the chunks of flesh fly and rotate through the air realistically.”

    “Cool Miss, tell us about eigenvectors again…”

  16. @ Jeremy Bowers – “Why would I ever be solving a set of equations anyhow?”

    Allow me to introduce you to the wonderful world of engineering. Finite element analysis anyone?

    As an aside Eric, does variance in CPU clock cycles introduce unaccounted for error in passing invariant values through to memory?

    1. >As an aside Eric, does variance in CPU clock cycles introduce unaccounted for error in passing invariant values through to memory?

      I don’t understand the question. Unpack it, please?

  17. I see many people missing my point. My point is not that I can’t explain what they are good for… my point is that math teachers can not.

    If you don’t know what I meant by “Lockhart’s lament etc etc” you might want to read it… I figured it would be well-trod ground, but maybe not.

  18. @ ESR – “Unpack it, please”

    Are error checking and correction protocols robust enough to reveal an erroneous invariant value that is unknown until calculated?

    1. >Are error checking and correction protocols robust enough to reveal an erroneous invariant value that is unknown until calculated?

      I think we’re using the word ‘invariant’ in different ways. What I intend by it isn’t a kind of value but a testable predicate on the state of computation that has to be satisfied in order for it to be correct.

  19. @ ESR

    If I understand now, the invariant is a logic state and not a calculated value intended as a constant. Not sure how a statistical analysis plays into a logic state determination.

    1. >Not sure how a statistical analysis plays into a logic state determination.

      All code, including statistical calculations, has logical invariants. In this case, one of them was “the DOP outputs are independent of the order of the satellite list”. Think of an invariant as a predicate you might test with an assert, like “at this point in the image processing, a specified buffer should be empty”.

      To bring this together with another subthread, logical invariants characterize program behavior in rather the same way that eigenvectors characterize matrix transformations.
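
      Spelled out as code, that buffer example might look like this (names illustrative):

      ```c
      #include <assert.h>
      #include <stddef.h>

      struct imgbuf { size_t fill; };   /* bytes still waiting to be processed */

      void end_of_stage(const struct imgbuf *b)
      {
          /* invariant: this stage must have drained its buffer by now */
          assert(b->fill == 0);
      }
      ```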

  20. Gah!
    I had to sit through classes and classes on linear algebra (starting in my final year of high school, and continuing through several more as part of my engineering degree). It was mostly tedious, without any practical application. (Yes, solving simultaneous equations is great, though it doesn’t require matrices.) I *hate* eigenvector calculations. I think I spent a whole semester doing little else.

    It could be worse. It could be Laplace transforms, which I learned to do mechanically without actually understanding what was going on. I still don’t understand what they do, though I understand what they are used for.

  21. > I *hate* eigenvector calculations.

    Stability of calculations (e.g. solving differential equations for physics simulation) is about eigenvectors and eigenvalues. Quantum mechanics is half about eigenvalues and eigenvectors (the static part) – there is even a formulation by Heisenberg that uses infinite matrices. Principal component analysis, used in many places, for example in image analysis and recognition, is about eigensystems.

  22. > Stability of calculations (e.g. solving differential equations for physics simulation) is about eigenvectors and eigenvalues. Quantum mechanics is half about eigenvalues and eigenvectors (the static part) – there is even a formulation by Heisenberg that uses infinite matrices. Principal component analysis, used in many places, for example in image analysis and recognition, is about eigensystems.

    My fine art degree has left me woefully unable to understand that.

  23. > the answer is “It means that when you blow stuff up in an FPS game, the chunks of flesh fly and rotate through the air realistically.”

    From now on, this is how I explain the importance of matrix algebra.

    (I keep a mental list of interesting ways to explain theoretical math: bullet ballistics for classical mechanics, gray-goo for exponential growth…)

  24. @Garret:

    I actually have somewhat of the opposite problem: There are plenty of things I have an interest in, and even a good intuitive feel for the dynamics of, that I don’t have the mathematical background to analyze rigorously. For example, general relativity.

  25. > I actually have somewhat of the opposite problem: There are plenty of things I have an interest in, and even a good intuitive feel for the dynamics of, that I don’t have the mathematical background to analyze rigorously. For example, general relativity.

    @Jon Brase: you are in good company ;-) AFAIK Albert Einstein had problems with the mathematical formulation of general relativity, even after using Minkowski’s contribution to the mathematical description of space-time in special relativity. It was solved by Hilbert… who didn’t think that his contribution was important – all the physics intuition came from Einstein.

    IIRC you can read about it in the very good and quite comprehensible “Black Holes and Time Warps: Einstein’s Outrageous Legacy” by Kip Thorne, which I wholeheartedly recommend.

  26. @ Jakub Narebski
    >IIRC you can read about it in the very good and quite comprehensible “Black Holes and Time Warps: Einstein’s Outrageous Legacy” by Kip Thorne, which I wholeheartedly recommend.

    Thanks. Added to my wishlist.

  27. A classic and typical example of the type of invariant Eric is getting at is worth understanding as part of program-correctness theory: the loop invariant. This is a statement that holds at a given point in every iteration of a loop; if it is ever false there, the loop has a bug.

    For example, if you’re sorting an ordered array of people records into male and female and “don’t know” lists, and it’s indexed by integer (call it “i”), then at the end of every iteration, the sum of the sizes of the three lists should always be i+1. (Or i exactly, if you consider i to be incremented right before the end.)

    That’s an easy example, of course. Harder ones include the loop invariant when inverting a matrix, sorting a list, compressing image data, encrypting anything, and so on. Last I checked, there were reams of doctoral theses to be had from analyzing various iterative problem types and coming up with a loop-invariant computation algorithm for them.
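
    In code, the easy example above might look like this sketch (types and names invented for the illustration):

    ```c
    #include <assert.h>
    #include <stddef.h>

    enum sex { MALE, FEMALE, UNKNOWN };
    struct person { const char *name; enum sex sex; };

    /* After handling record i, the three output counts must sum to i + 1. */
    static void partition(const struct person *people, size_t n,
                          size_t *males, size_t *females, size_t *unknowns)
    {
        *males = *females = *unknowns = 0;
        for (size_t i = 0; i < n; i++) {
            switch (people[i].sex) {
            case MALE:    (*males)++;    break;
            case FEMALE:  (*females)++;  break;
            default:      (*unknowns)++; break;
            }
            /* loop invariant: every record seen so far landed in exactly one list */
            assert(*males + *females + *unknowns == i + 1);
        }
    }

    int main(void)
    {
        struct person folks[] = {
            { "Alice", FEMALE }, { "Bob", MALE }, { "Pat", UNKNOWN },
        };
        size_t m, f, u;

        partition(folks, sizeof folks / sizeof folks[0], &m, &f, &u);
        return 0;
    }
    ```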
