When you see a heisenbug in C, suspect your compiler’s optimizer

This is an attempt to throw a valuable debugging heuristic into the ether where future Google searches will see it.

Yesterday, my friend and regular A&D commenter Jay Maynard called me about a bug in Hercules, an IBM 360 emulator that he maintains. It was segfaulting on interpretation of a particular 360 assembler instruction. But building the emulator with either -g for symbolic debugging or its own internal trace facility enabled made the bug go away.

This is a classic example of a heisenbug: one that goes away when you try to observe or probe it. When he first called, I couldn’t think of anything helpful. But there was a tickle in the back of my brain, some insight trying to break into full consciousness, and a few minutes later it succeeded.

I called Jay back and said “Turn off your compiler’s optimizer”.

Compiler optimizers take the output stream from some compiler stage and transform it to use fewer instructions. They may operate at the level of serialized expression trees, or of a compiler intermediate representation at a slightly later stage, or on the stream of assembler instructions emitted very late (just before assembly and linking). They look for patterns in the output and rewrite them into more economical patterns.

Optimizer pattern rewrites aren’t supposed to change the behavior of the code in any way other than making it faster and smaller. Unfortunately, proving the correctness of an optimization is excruciatingly difficult and mistakes are easy. Mistaken optimizations that almost always work are, though rare in absolute terms, among the most common compiler bugs.

Optimization bugs have a strong tendency to be heisenbugs. Enabling debugging symbols with -g can change the output stream just enough that the optimizer no longer sees the pattern that triggers the defective rule. So can enabling the conditioned-out code for a trace facility.

When I told Jay this, he reported that Hercules normally builds with -O3, which under GCC is a very aggressive (that is to say somewhat risky) optimization level.

“OK, set your optimizer to -O0,” I told Jay, “and test. If it fails to segfault, you have an optimizer bug. Walk the optimization level upwards until the bug reproduces, then back off one.”

I knew of this technique because I’ve been in this kind of mess myself more than once – most recently in the code for interpreting IS-GPS-200, the low-level bit-serial protocol used on GPS satellite-to-ground radio links. It was compromised by an optimizer heisenbug that was later fixed in GCC 4.0.

This morning Jay left a message in my voicemail confirming that my diagnosis was correct.

I said above that optimizer bugs have a strong tendency to be heisenbugs. If you are coding with an optimizing compiler, the reverse implication is also true, especially of segfault heisenbugs. The first thing to try when you trip over one of these is to turn off your optimizer.

You won’t hit this failure case very often — I’ve seen it maybe three or four times in nearly thirty years of C programming. But when you do, knowing this heuristic can save you many, many hours of grief.

93 comments

  1. Believe it or not, this happens more often than you think. Back when I had time to sit around and wait for my entire OS to compile (before I got married), I ran Gentoo, in the very early days of the distro. On the forums there (which, at one time, were filled with highly skilled hackers instead of the trolls, complainers and whiners that seem to have filled them for the last few years), one of the common memes was that if you were experiencing random segfaults or lockups with certain packages, removing or adjusting any -O? options in your CFLAGS would fix it, especially if they were set to -O3. The common advice of the day was to never do more than -O2 optimization globally, unless you were prepared to deal with problems.

    Of course, nowadays, I don’t even bother: 64-bit Ubuntu is nice and tightly integrated these days, and everything “just works” out of the box. Kinda takes all the fun out of it. :)

  2. As it happens, there are at least two optimizer bugs tickled by the offending (?) code. Building with -O0 works. Building with -O1 breaks. Building with -O1 -fno-guess-branch-probability works. Building with -O2 -fno-guess-branch-probability breaks.

    I “solved” the problem by sticking in one trace call that made the problem go away. That kind of fix makes me vaguely nauseous, but ugly working code beats pretty broken code every time.

    I should have known this was likely to be the problem. Hercules has proven to be a stress test for gcc’s optimizer in the past, and, for the longest time, the Hercules configure script explicitly set its own gcc optimization flags for different architectures to ensure that it didn’t trip over any of them.

    Eric, a discussion of why optimizers have such a hard time being provably correct might be enlightening to your readers, as well…

  3. For the sake of completeness, one should observe that compiler optimizers can appear to break code when in fact the code makes invalid assumptions; the bug was therefore latent in the code rather than in the compiler. It would be illustrative to have Jay explain the particular problem he found in this case.

  4. I’m thinking now about writing a web resource called “Stalking the Wild Segfault” on debugging tactics for C programmers. Things to go in there, besides this story:

    * Why your intuitions are so often wrong

    * Localize, characterize, fix: The three stages of debugging

    * Bisection searches, importance of

    * On a segfault, try gdb first.

    Thoughts on other useful topics?

  5. There are also categories of bugs that show these symptoms but are bugs in the actual code. Less expert programmers are more likely to trip over these than compiler bugs. Given the level of expertise of the analysts here, I’m inclined to believe that the compiler-bug diagnosis is correct.

    One of my “favorite” cases (which kept me up all night once because it was affecting a 30,000-user IRC network) was a one-byte buffer overrun that exhibited as an infinite loop. When optimization was off, or debug information turned on, the stack ended up being arranged in a way such that the overrun didn’t affect the behavior. With -O1 and without -g, there was no padding inserted between variables, and that one-byte overrun reset the counter to zero for a loop that contained the overrun.

    Just to be clear, the “less expert programmer” responsible for the bug in that case was me. Now that I have been through that experience, and have more years of programming experience in general, I doubt I would make the same mistake again.
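
    A minimal sketch of the shape of that bug (invented names and layout assumptions, not the actual ircd code):

    #include <string.h>

    /* buf is one byte too small for the trailing NUL of an 8-character
     * nickname.  With -O1 and without -g, the compiler happened to place i
     * immediately after buf on the stack, so the stray NUL landed on the low
     * byte of i and reset the loop counter to zero.  (Whether that layout
     * occurs is entirely up to the compiler, which is the whole point.) */
    void broadcast(const char *nicks[], int count)
    {
        char buf[8];                   /* too small: an 8-char nick needs 9 bytes */
        int  i;

        for (i = 0; i < count; i++) {
            strcpy(buf, nicks[i]);     /* writes buf[8] when the nick is 8 chars long */
            /* ... send buf to every connected client ... */
        }
    }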

  6. Frank, I’m not sure what more I can say without posting the (fairly lengthy) section of code that failed. The sufficiently curious can obtain the Hercules source code from the Subversion repository at svn://svn.hercules-390.org/hercules/trunk. The failure was in the block beginning at line 4158 (at this writing) of the file esame.c.

  7. The gdb manpage is about as useful as the gcc manpage. You might want to consider a quick cheat sheet for the debugging options most useful for zeroing in on a bug.

  8. Eric, a discussion of why optimizers have such a hard time being provably correct might be enlightening to your readers, as well…

    Actually, the formal proof of such a refinement is not so horrendous…proving the mapping between the formal statement and concrete executable is somewhat more challenging.

    That’s why God gave us Z & SPARK Ada ;)

  9. Had something similar happen just a couple weeks ago. Spent hours and hours trying to figure out what was causing a crash in the field. The crash dumps were proving to be useless, so I turned off optimization of the compilation unit in question, and the bug went away.

    However, in our case, it turned out that turning on or off optimizations only exacerbated a race condition. It was a rather amusing case of thinking the problem is the fault of one thing, only to find out it’s the fault of something else.

    I’ve seen two compiler optimization bugs in the last three years, verified by looking at a disassembly of the resulting code, but I spend most of my time in C++, which throws a monkey wrench into things all on its own. (More common are template processing bugs that prevent perfectly valid code from compiling in the first place.)

  10. Of course, I am anally-bound to voice my queasy objection to using compiler optimization…ever.

    The horrors that cannot be unseen….. ;)

  11. Thoughts on other useful topics?

    Tips for finding known and unknown library and/or kernel bugs. I know that that one’s bit me in the ass a few times. For instance, I have one network performance benchmarking tool I wrote a long time ago that oddly will segfault on some older Linux kernels, especially with certain NICs, due to a kernel bug that, at the time, seemed to affect only certain network card drivers (the venerable 3COM 3c905b “Boomerang” cards being among them).

  12. It’s not always an optimizer bug even if changing the -O option changes the behaviour.

    At work we ran into a situation where our code would crash or not depending on the size of the compiled binary. If the size hit one of a set of certain multiples of 4, it would crash. The root cause was a combination of five “almost always works” bugs in the first-stage bootloader that loaded the next-stage loader over a serial port. Nothing wrong in the compiler, just bugs in our code.

    In that case, changing the -O setting would change the binary size, avoiding or triggering the problem randomly.

    Dan, completely skipping optimizations *is* completely anal. That’s because the compiler output before any optimizations is completely anal. Of the code generated for a small function, easily 50% of the instructions may be useless — less for longer functions but still way too much. I’m talking about dead simple peephole stuff; the less trivial optimizations are well known to cause problems. I’d like them to be hidden behind long and clumsy option names so they’d not be used lightly. They have less effect anyway.

  13. Dan, completely skipping optimizations *is* completely anal.

    Easy tiger ;) I do plenty of code optimization…I specifically object to *compiler* optimization.

    I realize this casts me as a luddite in the eyes of many.

  14. Frank Ch. Eigler Says:

    compiler optimizers can appear to break code when in fact the code makes invalid assumptions, and was therefore a latent bug there rather than in the compiler.

    Agreed, one of the hardest debugging lessons for beginners or non-programmers to understand is:

      Bugs are not guaranteed to be harmful.

    A bug can hide inside software for years, hurting nothing from the user’s point of view but still doing something clearly wrong to the program state. Then you come along and make a perfectly reasonable change to the software. But you were assuming the program state your changes depended on was valid, which it wasn’t. The software breaks, sometimes in expensive or embarrassing ways, and you are blamed. Frantic, you recheck your changes over and over, but find nothing wrong.

    Now everyone not only thinks you’re so incompetent that you broke the software, they also believe you’re incapable of fixing your own mistakes. Ah, the joys of software development.

  15. Frankly, I think this is bad advice. Compiler bugs are very rare in practice, as you say: you, a heavy user, have a rate of 0.1 per year. People are far better using other debugging techniques to find their bug, including assembly level debugging. Please note the very important point that TOK makes which is that turning off optimization and seeing the bug go away is not a valid solution. It is perfectly possible that the bug is just being hidden, and could readily reappear with a minute change in an apparently unrelated place.

    Good developers don’t just make bugs go away, they find out what caused the bug, and fix the cause. If the optimizer does have a bug then I would want to debug at the assembly language level to convince myself that the compiler was producing different incompatible code with and without the optimizer before I would accept the claim that “it is a compiler bug.” Preferably, I’d like to see a ten line example that exhibited the bug (and then send it to the compiler writer.)

    I might add that this is particularly bad advice for newbie programmers, who may use it to blame every unpredictable bug on the compiler.

  16. Jessica, I tried looking at the assembler output from the routine in question with and without the trace calls that made the problem go away. The program logic was sufficiently buried that I couldn’t make heads or tails of it – and there were enough differences in other ways that a direct line-by-line comparison wasn’t possible.

    I spent a solid week trying to chase down this problem. It was only reported on one platform; the identical test case run on four other platforms did not fail. Combine that with the fact that turning off the optimizer made the code work correctly on the failing platform, and the evidence is overwhelmingly a compiler bug.

  17. +1 to a detailed technical discussion on this stuff. You’ve been in the systems programming mines; meta-maps are always useful.

    (I’m a sysadmin, not a programmer. But I deal with enough software that almost builds and/or works that more tricks and tips for the toolbox are always valuable.)

  18. If the bug is in GCC, please provide a patch to the source.

    That’s what I was always told whenever I suspected the compiler as the source of my heisenbugs. I say ‘my’, because every time I tried to locate the actual bug in GCC, the bug would always be in my code.

    Have you run valgrind & friends over your code?

  19. >It’s not always an optimizer bug even if changing the -O option changes the behaviour.

    But if the behavior does change then there is an optimizer bug, even if the problem you’re trying to fix is not caused by said bug. Optimization is not supposed to change the behavior; if the behavior does change, there is a bug there.

  20. > Easy tiger ;) I do plenty of code optimization…I specifically object to *compiler* optimization.

    And that’s precisely my point. A human would never write code as thoroughly stupid as what a compiler outputs before any optimizations. I’m talking about completely useless instruction sequences.

    The thing is, the compilation phase is usually written so that it only thinks about outputting the stuff necessary to, say, begin a function, perform arithmetic on input, and return from a function. Each phase expects certain things in certain registers and leaves things in a place where the next phase will find them. Taken together, this causes tremendously stupid sequences like values being copied from register A to register B for the next phase, which begins by copying B to A. Etcetera.

    Trivially simple optimizations are *heavily* needed for compiler-produced code. They are easy to implement without bugs. Complex optimizations may get some cases wrong; they are a different story. But it’s not at all reasonable to decline *any* optimization on code directly out of the code-generation phase of a typical compiler.

    Jay’s case looks like an optimizer bug, though. Given that a compiler with optimizations on produces broken code from a certain source, most likely the optimization bug depends only on the contents of the function where the bug-triggering source is. Small unrelated changes outside that function do not make the bug appear or disappear. In our case instead, while it at first seemed that adding a simple “if (expression) { do_something(); }” caused our program to crash, we could in fact get that modified function to work perfectly well by adding or removing a few bytes elsewhere to offset the binary size away from a “bad” value. Add another 8 bytes and the bug bit again. So it clearly wasn’t the compiler breaking on the source code, but something else that depended on the total size of the binary. For a compiler bug, changes elsewhere don’t change things. (This is an oversimplification of course, and subject to complex interactions between a function and its surroundings, but sane developers tend to minimize these interactions.)

    Frank’s point must also be kept in mind. Optimization options also tend to elicit more warnings from the compiler, as the compiler notices during optimization that something is never used or something similar, but optimizations may also reveal bugs in the source by making the program crash instead of printing a warning at compile time.

  21. William B Swift Says:

    But if the behavior does change then there is an optimizer bug, even if the problem you’re trying to fix is not caused by said bug. Optimization is not supposed to change the behavior; if the behavior does change, there is a bug there.

    Not true. An optimizer’s primary purpose is to change the timing behavior of your program, right? A timing-dependent bug inside the program could easily be triggered by the differences in timings between the unoptimized and optimized versions of the program.

    The optimizer is also free to change program behaviors that are supposed to be hidden from the users, but are revealed by a bug in the program. For example, optimization could change the memory layout/initialization order of a program’s variables, turning a benign buffer overflow bug in the program into a segfault+core dump.

    Contrary to your statement, and contrary to the gist of ESR’s blog entry, turning compiler optimizations on and off can easily hide and/or reveal bugs in the program being compiled. I agree with Jessica Boxer; the compiler is generally better-tested than anyone’s code, so don’t blame the optimizer until you’ve reduced the bug to a very short test case that you know is correct. When in doubt, assume any bug is your own software’s fault.

  22. I might add that this is particularly bad advice for newbie programmers, who may use it to blame every unpredictable bug on the compiler.

    Excellent point. 99.999% of the time, it’s really not a bug in compiler, especially if you’re just learning. The popular compilers like gcc are very mature and have been well-tested on thousands of different systems and configurations. True compiler bugs are far more likely to be discovered by a well-seasoned cranky old-man programmer like ESR or Jay Maynard. :)

  23. I spent a solid week trying to chase down this problem. It was only reported on one platform; the identical test case run on four other platforms did not fail. Combine that with the fact that turning off the optimizer made the code work correctly on the failing platform, and the evidence is overwhelmingly a compiler bug.

    Another good point is in there: compiler optimization bugs usually don’t manifest on every platform or on every version of the compiler. They usually work perfectly on several platforms, with major fail on one, maybe two. Also, if it works perfectly on your platform, but fails on several others, you’re probably just not doing things in a cross-platform way; maybe you have an endian problem or you’re assuming that an API function you’re calling behaves the same way on all platforms. There are very few times in life you’re likely to truly encounter an actual compiler bug vs. it being a bug in your own code.

  24. Lots of generalizations there TOK ;) Needless to say, I don’t agree with your observations at all.

    Admittedly, my view is shaped by my background in formal methods and the fact that I have been through the wringer while writing my own compiler & VM – I have a pretty darned thorough understanding of this aspect of software technology, right down to the metal.

    My approach is likely far closer to Jessica’s than yours…..or most programmers, I imagine.

  25. Jay– the latest gcc gives some fairly alarming warnings when compiling esame.c from hercules 3.06. It’s not necessarily correct but these would be worth looking into.

  26. The -O0 people should consider that some versions of gcc (this was in the 2.x days I believe) have more reliably produced correct output at -O1 than -O0, since almost nobody uses -O0.

    When debugging embedded software, I find that a moderate degree of compiler optimization is extremely helpful because most of the really stupid instructions have gone away, leaving the core program logic. -O0 from most compilers is very cluttered and hard to read. The single optimization that makes debugging most hellish is function inlining, and it’s easy to turn that off.

  27. I had a funny compiler bug in my compiler that shocked and scared me because I caught it so late.

    I had optimized the CMP instruction with immediate values…

    if (x>5) worked
    if (5>x) did not work

    When I optimized, I negated the logic of the compare.

    “(5>x)” is not “!(x>5)” it’s “!(x>=5)”

    I didn’t catch it for years because I happen to code my compare instructions consistently.
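
    A quick check of that identity (illustration only, not the original compiler code):

    #include <assert.h>

    /* The negation of (5 > x) is !(x >= 5), not !(x > 5); the two candidate
     * negations differ exactly at x == 5. */
    int main(void)
    {
        for (int x = 0; x <= 10; x++) {
            assert((5 > x) == !(x >= 5));   /* always holds */
            /* assert((5 > x) == !(x > 5)); would fire at x == 5 */
        }
        return 0;
    }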

  28. There are very few times in life you’re likely to truly encounter an actual compiler bug vs. it being a bug in your own code.

    In general, this is the case. We tend to trust our tools implicitly. Eric’s point is that, sometimes, the tools don’t live up to that trust.

  29. It’s worse than this.

    We’ve now reached an age where we have computers with large (>= 4GB) non-ECC memories, and such a machine is essentially guaranteed to have a memory error every three days. http://lambda-diode.com/opinion/ecc-memory

    Here is an older article by Dan Bernstein which states that computers with as little as 256MB of non-ECC memory will have several memory errors a year. http://cr.yp.to/hardware/ecc.html

    And Jessica, as you gain more experience with compilers (especially the mound of shit that is gcc) you’ll find they generate errors (bad code) all the time. Very few of these will cause segfaults, though.

    I’m not aware that esr has done any significant compiler work though.

    1. >I’m not aware that esr has done any significant compiler work though.

      I’ve written a couple of retrocompilers. I cheated by having them compile to C, though, and letting the low-level code generation be someone else’s problem.

  30. Changing the flags about isn’t a bad debugging tool, but you should really check your expectations, because usually it is your own fault anyway.

  31. >But if the behavior does change then there is an optimizer bug, even if the problem you’re trying to fix is not caused by said bug.

    This isn’t true – the best counter-example is an incorrectly synchronized multi-threaded program. (Hercules appears to be multithreaded.) Compilers can aggressively reorder reads and writes in a way which produces identical behaviour for a single thread of execution, but causes race conditions to occur much more often than they did before.

    In addition, values read from main memory may also be re-used for much longer timescales on high optimisation levels. The canonical example is looping while a variable is true, with a different thread falsifying the condition variable at some point, without either thread inserting memory barriers. On an unoptimised build, the looping thread probably executes a read on every iteration, so it will break out; the optimised version reads once, then spins forever.

    This is totally legit behaviour on most processors’ memory models. It’s a specific instance of the general issue: if your code has entered the land of undefined behaviour, even if it works, the next optimisation level or update to your compiler could break it.
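
    A minimal sketch of that canonical example (deliberately broken, to show the failure mode rather than the fix):

    #include <stddef.h>
    #include <pthread.h>

    /* "running" is a plain int with no atomics, locks, or barriers, so at
     * higher optimisation levels the compiler may read it once, keep it in a
     * register, and spin forever even after stopper() clears it.  The fix is
     * an atomic flag or proper synchronisation, not turning the optimiser off. */
    static int running = 1;

    static void *worker(void *arg)
    {
        (void)arg;
        while (running) {        /* the load may be hoisted out of the loop at -O2 */
            /* ... do one unit of work ... */
        }
        return NULL;
    }

    static void *stopper(void *arg)
    {
        (void)arg;
        running = 0;             /* worker() may never observe this store */
        return NULL;
    }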

  32. Cheater cheater pumpkin eater ;)

    Actually, if you can support the constructs of your source language with another, using an established compiler this way is definitely ‘working smarter’

    Sadly I didn’t have that luxury…..and no, I didn’t provide any “-O” features.

  33. Dan,

    “I am anally-bound to voice my queasy objection…”

    Please, try to keep the discussion non X-rated.

    The word you, and 99.999+% of the population meant to use was anile, and yes, it is a “homophone.” ;-)

    I suspect that is why so many people get it wrong.

    It is a small peeve of mine; please consider this just a small effort to (make like a diode and) rectify the situation.

    -John

  34. A year and a half ago ( http://boston.conman.org/2007/10/18.1 ) I squashed a heisenbug that wasn’t due to a compiler bug, but a race condition in a non-threaded, non-multi-process program (if you handle signals, you have a multi-threaded program; end of story). It only took me a month to track that down.

    In my twenty years of programming professionally, I’ve only found one (I think) compiler bug in my time. Either I’ve been really lucky, or I haven’t written enough C code.

  35. William B Swift Says:
    > But if the behavior does change then there is an optimizer bug,

    This is incorrect. The compiler is required to preserve the defined behavior, not the undefined behavior. For example:

    int i; if(i == 0) seg_fault();

    I can think of scenarios where the optimizer would compile this differently than a standard debug build. For example, some debug builds set default values for all local variables, but that code might be omitted (validly so) in an optimized build. (Microsoft’s compilers have behavior like this — though I didn’t test the above code.)

    The bug here is in the code, not the compiler. And of course this is in your face, but bury this in fifty lines of code, and make it only occur when some rare path occurs through that code, and you’ve got yourself a very difficult bug.
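
    A slightly buried version of the same thing (hypothetical code):

    #include <stdio.h>

    /* "flag" is only initialized on one path.  A debug build that zero-fills
     * stack slots hides the bug; an optimized build that reuses a dirty
     * register for flag exposes it. */
    static void report(int verbose)
    {
        int flag;                /* BUG: never initialized on the quiet path */
        if (verbose)
            flag = 1;
        if (flag)                /* undefined when verbose == 0 */
            printf("verbose diagnostics enabled\n");
    }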

    Of course a good compiler will issue a warning after the DFA, but plenty of people turn them off, or ignore them, which although a common practice, always struck me as an insanely stupid thing to do.

  36. Hercules is very heavily multithreaded, but we do use interlocks to keep accesses to shared structures from stepping on each other. (We use several housekeeping threads, as well as one thread per emulated CPU and some number of threads from 1 to as many I/O devices as are defined, depending on some configuration parameters, to achieve overlapping of I/O and CPU activity in the same way the real iron does.)

    When we’ve had threading issues, though, the problem has manifested itself as unreliable execution and bugs that don’t manifest themselves the same way every time. This was not the case this time. The problem was reliably repeatable, and broke at the same place in the emulated program every time in the same way. This doesn’t feel like a threading bug.

  37. Strongly recommended advice for everyone coding in C:
    1) Crank warnings way up- enable all warnings!
    2) Fix all warnings.

    Almost all “bugs” in the optimizer come from the optimizer handling code constructs with undefined behavior differently than the non-optimized code does. And C has a lot of undefined behaviors. Most C compilers these days (including both GCC and Microsoft) have warning settings that will warn you when you use these undefined behaviors. However, most programmers disregard these warnings, and thus leave themselves wide open to depending (silently) on undefined behaviors that can change under different optimizer settings, and will change under different platforms.

    It is much easier to fix all the warnings- even the truly unnecessary ones- than it is to track down one of these Heisenbugs.

    Especially once you stop writing warning-provoking code in the first place.

    Oh, and if you find yourself having to typecast things all over the place, consider whether your variables are the correct type. For example, most loop variables should be of type size_t, not int.
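
    A small illustration of that last point, assuming GCC or Clang with -Wall -Wextra:

    #include <stddef.h>
    #include <stdio.h>

    /* Writing "for (int i = 0; i < count; i++)" here draws a signed/unsigned
     * comparison warning against the size_t length; declaring the index as
     * size_t removes both the warning and the temptation to silence it with
     * a cast. */
    static void print_all(const char *items[], size_t count)
    {
        for (size_t i = 0; i < count; i++)
            printf("%s\n", items[i]);
    }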

  38. Jay, the suspect section of code is rather pointer-intensive. With alias analysis, it is conceivable that the hercules code is in fact not kosher. OTOH, gcc 4.2 was a bit of a dud release, IIRC. What would help decide one way or another is the (annotated) assembly code for the function, contrasting a working version from a non-working one. If this problem still occurs on modern gcc, it would be very important & helpful to report it in some detail, so the experts can diagnose it and fix gcc (or perhaps suggest how to fix your code, depending…).

  39. >I said above that optimizer bugs have a strong tendency to be heisenbugs. If you are coding with an optimizing compiler, the reverse implication is also true, especially of segfault heisenbugs. The first thing to try when you trip over one of these is to turn off your optimizer.

    This is bad advice. Neither the standard nor common implementation practice gives any guarantee that the compiler will always produce the same behavior for the same program. There is, for example, no guarantee regarding the order of execution of expressions in C aside from across sequence points.

  40. @johnc – No, “anile” isn’t the word I was looking for, but thanks for teaching me a new word-of-the-day :)

  41. Agreed Dan, nothing to become anile about ;-)

    These usages are simply shibboleths, if you will; practised since the Latin era, at least.

    Their point was to discriminate the literati from the mere intelligentsia.

  42. I have had a compiler bug (well, a compiler behavior inconsistency; it was really my fault) only once. This was in a toy OS kernel that I wrote for fun; I had some inline asm that referred directly to a certain register while also using GCC’s give-me-a-register notation, and didn’t list my directly-used register in the clobber list. It turned out GCC used the same register for its %0, so the asm clobbered the value in the register. The tutorial code I got that particular bit from used the same syntax, but something about it was different enough to make GCC use a different register. Moral: never assume anything, and also compilers are weird.
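
    For anyone who hasn’t hit this, the shape of the mistake was roughly as follows (invented x86 example, not the code from my toy kernel):

    /* The asm body uses %ecx directly, but %ecx is not in the clobber list,
     * so GCC is free to choose %ecx for one of the "r" operands and have the
     * first mov silently destroy it.  The fix is to add "ecx" (and "cc") to a
     * clobber list or, better, let GCC pick a scratch register. */
    static int add_via_scratch(int a, int b)
    {
        int out;
        __asm__ ("movl %1, %%ecx\n\t"    /* copy a into %ecx behind GCC's back */
                 "addl %2, %%ecx\n\t"    /* add b */
                 "movl %%ecx, %0"        /* move the sum to the output operand */
                 : "=r" (out)
                 : "r" (a), "r" (b));    /* BUG: no clobber list for %ecx */
        return out;
    }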

  43. Apropos those who have seldom run into compiler bugs, I’ve run into plenty, starting with the MS Pascal compiler that passed doubles down the call stack as singles. Back in the day it was difficult to use a MS compiler more than a week or so without uncovering a bug. A more recent MSVC compiler bug involved the incorrect optimisation of a logical expression that failed when the numbers were NaNs. I also ran into a hardware bug on the 80186 where it got the wrong sign dividing two negative integers when both were in registers, but did just fine when one was in memory. So on and so forth. That said, things are much improved these days.

  44. Prime Directive for debugging: take every bug to root cause.

    Far too many engineers have no idea what this really means. Knowing root cause means that you can say, with complete specificity, exactly what went wrong and why. Memory location X had this value, register R had that value, then we executed this instruction, etc.

    If you change the optimization level and the bug goes away, that does *not* prove that there’s a bug in the optimizer. It is far more likely that you have an uninitialized variable and the optimized code just happens to use different registers. To claim that you *know* it’s an optimizer bug you must be able to disassemble the code, annotate it, and identify the particular instruction that does not compute what the source code says it should.

    The anatomy of a correct bug fix goes like this:

    (1) You determine the root cause of the bug.

    (2) You develop a test that can reproduce the bug at will. This can be hard to do, especially with multithreaded code, but if you really know what the root cause is then you should be able to figure out exactly where to insert delays (e.g. right after dropping a relevant lock) to open up the window for race conditions. You may have to build a custom kernel, libraries, etc to help in this task.

    (3) You develop a fix, based on your understanding of the root cause.

    (4) You can explain *why* your proposed fix should address the root cause.

    (5) With the fix applied — and nothing else changed — your test runs indefinitely without hitting the bug.

    That is real debugging. It’s demanding, but anything less is voodoo.

  45. The terminally curious may now download a zip file of the assembler output of compiling the version of esame.c under discussion (also included) with and without the PTT() calls that made the code work and at several different optimization levels. I found the generated assembler nearly impenetrable, but others may not. The file may be obtained at http://www.hercules-390.org/esamebug.zip .

  46. Oh, a couple more notes: the version of esame.c in the zip file corresponds to revision 5627 of the Subversion repository. (It’s unchanged as I write this, but that probably won’t last as long as this discussion will.) The function in question is in the generated assembler as z900_load_multiple_long. Each instruction in the emulator may be generated as many as three times, to correspond to the three different architecture modes hercules supports, and the name of the instruction is prefixed with the architecture mode to build the actual routine name that is used.

  47. And, Eric, this brings up another topic for your resource: a guide to reading generated assembler. You can assume that your audience knows the assembler in question, but not anything about how gcc generates assembler code or what kinds of code it tends to generate. If someone like me can strip away the boilerplate and get to the meat of the generated code, it would make finding this kind of problem simpler.

  48. Back in the day it was difficult to use a MS compiler more than a week or so without uncovering a bug.

    Which is why, back in the day, there were those of us using Borland’s compilers. All the MS-DOS programs I ever wrote (gak!) were written and compiled using Turbo Pascal. When I started coding on Unix in 1989 (SysVR3.2), I immediately realized that it was a platform created by programmers for programmers. :) When the first Linux distros hit the scene in the mid 90s, I switched platforms and never looked back.

  49. As the other thread’s discussion notes – a quality toolkit (and the knowledge/discipline to use it) is invaluable.

    When I use gcc, it is conjoined with valgrind. “Cleanliness is next to godliness” is a heuristic I put great stock in. Elimination of sloppiness (good housekeeping) really does help in significantly reducing the clutter that can impede bug hunting.

  50. BTW, this discussion reveals a deeper truth: the various changes in modern programming languages are a good idea, because they tend to eliminate these problems. I think, for example, it is very sad what has happened to C++. It was a leader in the language world, but has now become so moribund in the politics of language design, and the limited vision of Stroustrup, that it is absolutely an also-ran now. The reality of C++ is illustrated in the fact that the C++0x standard will never in fact be 0x. Not to mention that the changes proposed are anemic. I know a lot of people use it, and that it is probably one of the most widely available languages, but the fact is that its lack of features has made it significantly less productive than more modern languages like Python and C#. Just as one trivial example, it is illegal to use an uninitialized variable in C#, and all the necessary data flow analysis is built into the language itself (if you doubt this, consider the difference between a ref parameter and an out parameter). This eliminates, in one stroke, a significant class of bugs.

  51. I think I found the problem.. And – well – it appears it is NOT a compiler bug – although it exhibits all its characteristics!

    (for the enthusiast, the code in question is at labels L2488/L2494 and L2493/L2497 in Jay’s provided esame.O3 listing).

    One of the constraints of the computer architectures the hercules project implements is that under certain conditions, 8 byte aligned fetches & stores must appear atomic to other processors in the configuration.

    Unfortunately, under ia32, there is only *ONE* way to do this (besides locking out all architecture memory accesses for all threads): the “cmpxchg8b” instruction. Since there is no simple way to do this in C, when we are under the auspices of gcc, we make some very light use of the __asm__ construct.

    Another thing now comes into play. When compiling PIC (that’s Position Independent Code), the various ia32 ABIs we use (ELF, Mac OSX) mandate that the EBX register is reserved to hold a GOT (Global Offset Table) pointer – so it can’t be used for any other purpose (lest you get gcc complaining VERY loudly!).

    Ok.. So on one hand we want to use this cmpxchg8b instruction – which makes implicit use of the EBX register – but that register is reserved. So the code we implemented takes a detour: before and after the instruction it saves the EBX register into another register we’ve reserved beforehand, so that the whole unit of operation preserves EBX – and makes gcc happy.

    And that’s where we hit a snag with *this* particular version of the compiler (but it could hit us again any time): for this particular unit of work, the compiler decided it didn’t really need EBX any more after all before returning to the caller (where it restores EBX to its original value) – so the EBX register suddenly became free – and gcc decided to use it as part of the __asm__ code for *another* purpose: as a memory pointer for cmpxchg8b. BAM! We clobbered EBX as part of our attempt to protect it. So basically, EBX, which was holding a pointer to the memory to access, now held part of the value to exchange in memory – leading to.. well.. FAIL!

    How we’re going to solve this – is another matter – but we’ll do it !

  52. Just to round out the thread, here’s the code from GPSD that I had in mind in the original post:

    #define isgps_parityok(w) (isgps_parity(w) == ((w) & 0x3f))

    #if 0
    /* 
     * ESR found a doozy of a bug...
     *
     * Defining the above as a function triggers an optimizer bug in gcc 3.4.2.
     * The symptom is that parity computation is screwed up and the decoder
     * never achieves sync lock.  Something steps on the argument to 
     * isgpsparity(); the lossage appears to be related to the compiler's 
     * attempt to fold the isgps_parity() call into isgps_parityok() in some
     * tail-recursion-like manner.  This happens under -O2, but not -O1, on
     * both i386 and amd64.  Disabling all of the individual -O2 suboptions
     * does *not* fix it.
     *
     * And the fun doesn't stop there! It turns out that even with this fix, bare
     * -O2 generates bad code.  It takes "-O2 -fschedule-insns" to generate good
     * code under 3.4.[23]...which is weird because -O2 is supposed to *imply*
     * -fschedule-insns.
     *
     *  gcc 4.0 does not manifest these bugs.
     */
    static bool isgps_parityok(isgps30bits_t w)
    {
        return (isgpsparity(w) == (w & 0x3f));
    }
    #endif
    

    In this case, the final confirmation that I had run into a real optimizer bug was the fact that GCC 4.0 fixed it.

  53. “In this case, the final confirmation that I had run into a real optimizer bug was the fact that GCC 4.0 fixed it.”

    Eric, that by itself in no way confirms this. There were many changes between 3.4 and 4.0 that interjected a whole new layer of optimization (SSA/GIMPLE), which pessimized some code.

    What you need is real root cause analysis. If you are not able to interpret the assembly fully (use -fverbose-asm and internal-state-dumping options) combined with the more arcane aspects of the language rules, bring the issue to the attention of the gcc guys with the right expertise.

  54. “In this case, the final confirmation that I had run into a real optimizer bug was the fact that GCC 4.0 fixed it.”

    This proves only that GCC 4.0 sublimated the error – not that the error existed in the optimiser.

    This is basic deductive logic; your statement is far too categorical.

  55. Frank and Brett are exactly right.

    You can claim it’s a compiler bug when you can show the incorrect generated instruction sequence.

    Otherwise you’re just guessing.

  56. Just to chime in briefly…I’ve been developing compiler backends for around 16 years.

    1. 99.99% of the time (if not more often), when people turn on the optimizer and something breaks, it’s due to bugs in their code, not the optimizer. That’s my experience from the many bug reports where, once we actually dig into the generated code with the user, it turns out that an uninitialized variable in their code ends up with different random values with and without the optimizer, or something similar (writing past the end of an array, or using a compiler option that amounts to a promise that you never violate the aliasing rules of the language; see the sketch after this list).
    2. Totally agree with people who say to turn on warnings. As many as you can tolerate. It’s likely to find a lot of these things. Likewise, static analysis and dynamic analysis (like the /RTC1 option on MS VC++) are likely to find many more of these things that seem to be “optimizer bugs”.
    3. Any sane development team will have a rule that you have to do a release build with optimization on and pass tests before you check in, and will have daily test runs that run on debug builds, optimized debug builds (i.e. asserts and other checks left in), and release builds. These same teams should require actually understanding failures that show up when optimizations are enabled rather than just blaming the optimizer and turning it off for a section of code or a file.
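
    The aliasing case mentioned in point 1 deserves its own sketch, since it bites people who have never read that part of the standard (hypothetical example):

    #include <string.h>

    /* Reading a float through an unsigned* violates the effective-type rules,
     * so under -O2 with strict aliasing the compiler may assume the two
     * pointers never refer to the same object and use a stale value.  memcpy
     * (or a union) is the defined way to reinterpret the bytes. */
    static unsigned bits_bad(float *f)
    {
        *f = 1.0f;
        return *(unsigned *)f;      /* undefined behaviour: type-punned read */
    }

    static unsigned bits_ok(float f)
    {
        unsigned u;
        memcpy(&u, &f, sizeof u);   /* well-defined reinterpretation */
        return u;
    }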

    The quickest “dumb” way to track down optimizer bugs (or supposed bugs) in a statically compiled language is to binary search the object files by generating a set of object files that are not optimized, and a set of object files that are optimized. After swapping objects in until you can determine the one object that has an issue, you can use whatever vendor-specific pragma enables/disables optimizations for a given function and binary search the functions in the object file. This doesn’t work so well when you have inlining enabled, so the first thing to try is disabling inlining with your compiler to confirm that the issue still exists with inlining off. If it doesn’t, then it will be harder to track down. Once you determine the problematic function, you can focus on the code that is generated to try to determine if it’s really an optimization bug or a bug in your code.
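
    On GCC the per-function knob looks roughly like this (hypothetical function; MSVC spells the same idea #pragma optimize):

    /* Once the binary search has fingered one routine, it can be rebuilt at
     * -O0 while the rest of the file stays optimized. */
    static int __attribute__((optimize("O0")))
    suspect_routine(int a, int b)
    {
        return a * b + 1;   /* placeholder body */
    }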

    One other thing, I haven’t used gcc in years, but if it’s true that enabling generation of debug information actually changes the code that is generated, then it’s horribly broken. It shouldn’t. That’s just stupid design.

  57. I’ve found a lot of compiler bugs in my time, so those of you who haven’t should count yourselves lucky. However, most of them have been in the frontend rather than the optimizer — code that should have been accepted wasn’t, or code that should have been rejected was accepted.

    Though optimizers do have bugs, they also tend to expose bugs in the code being compiled — the code makes an assumption that is not guaranteed to hold, but happens to be true for low optimization levels, but not for high optimization levels. This often corresponds to things such as use of uninitialized variables. I’ve also seen cases where optimization changes the ABI (e.g. an enum which fits in 8 bits is an 8-bit variable at high optimization and a 32-bit variable at low optimization). This can break code where different compiler options are used for different source files.

    Warnings can help, as can static analysers such as lint, and things like valgrind, but if you’ve followed all their recommendations and you still have a bug then you really need to pin down what the generated code is doing, and how it differs from what you expect. It might still turn out that your expectations are wrong.

  58. >> I’m not aware that esr has done any significant compiler work though.
    >>
    > I’ve written a couple of retrocompilers. I cheated by having them compile to C, though, and letting the low-level code generation be someone else’s problem.

    So:

    a) you didn’t state the reference (http://catb.org/retro/charter.html)

    b) my point stands.

  59. Just to wrap up: Ivan was correct. He supplied a patch that used a different register for the assembler assist than had been used, and the problem disappeared.

    Personally, I believe that the optimizer’s job is “first do no harm”. If code breaks due to the optimizer level, then it’s violated that rule. Now, with that said, Hercules is pushing the bounds of the C standard if not outright breaking them by using the assembler assist. It’s utterly necessary, since the purpose of the assist is to guarantee correct behavior of the emulated machine by correctly guaranteeing the integrity of an emulated multiprocessor system and emulated I/O on a multiprocessor host system, so that emulated memory locations do not change in the middle of an emulated instruction as defined by the mainframe’s Principles of Operation manual. This means using the host processor’s interlocked storage instructions (in this case, “lock cmpxchg8b”), and since there’s no way to do that in C, we had to do it ourselves.

    The problem came when the C compiler decided to use one of the registers that the assembler assist thought was available for its own use for something else. We wound up changing the register we used, and the problem went away.

    I don’t know if there’s a better way to have found this problem. If it weren’t for this discussion, we’d have left it as it was and depended on the extra trace call to keep the compiler from screwing things up. That would have worked. Would it have broken later? Got me. Probably.

  60. There really needs to be a post about why “magic button” features (like optimization) in compilers are bad.

    It’s not that the optimizers per se are bad – lots of clever people have written some great code to implement these optimizers – so don’t confuse my antipathy for optimizers with derision directed at their work.

    From a formal perspective, optimizers kill any hope of making affirmative assertions about the integrity of your code. It is literally changing the recipe, in unknown ways, as the plate is headed out the door to the customer. I may have formally documented my system from the highest abstraction down to the source code, then the rug is pulled from under me by the optimizer.

    To me, it is unacceptable to hand my code over to some ‘black magic’ in the hope that it will make it run faster and/or occupy less memory. I designed a system upon a set of interactions – in what way can I now guarantee the integrity of those interactions? If I understand my code so well, and the optimization mechanisms also, why am I not encoding such optimizations into my system directly – where I can formally analyze their impact?

    To use ‘magical’ compiler optimization is an admission of failure. A cop-out. Laziness. An implicit declaration of the limits of one’s understanding of one’s own code – “I don’t know how to fix it, but if I push this magic button, it’ll run faster”.

    So when I hear of problems like this, I think “well, that’s what happens when you don’t understand your own code”. It’s also tangentially symptomatic of the tragic state of software ‘engineering’ in general.

  41. >To use ‘magical’ compiler optimization is an admission of failure. A cop-out. Laziness. An implicit declaration of the limits of one’s understanding of one’s own code – “I don’t know how to fix it, but if I push this magic button, it’ll run faster”.

    One wonders what you’re going on about here. Optimizers are not ‘magical’; as a general rule, correctness of optimizations has a very definite formal definition. Furthermore, optimizing out instruction-level inefficiencies has nothing to do with system design. I suspect that you don’t know a thing about ‘formal’ analysis of programs.

  62. Roger, I suspect you don’t know what a ‘formal approach’ to software dev actually is, or you wouldn’t say such silly things.

    Hence, I’m sure you do indeed “wonder what I’m going on about”

  63. For the record, I don’t think that Dan, esr or Jay know what they’re talking about in this thread.

  64. >Roger, I suspect you don’t know what a ‘formal approach’ to software dev actually is, or you wouldn’t say such silly things.

    Wrong answer. I work with formal methods every day.

  65. If the issue is as Jay describes, then it’s neither an optimizer bug nor a coding bug per se — rather, it’s the fact that in-line assembly generally does not have a well-defined interaction with generated code. It’s one thing to write an entire routine in assembly — as long as you follow the ABI you’ll be fine. But if you try to stick a bit of assembly in the middle of a C function, it’s rarely maintainable because you have no control over which local variables are in which registers. Personally I think it’s a bug that compilers let you do this at all. If you really need to use assembly, write a .s file.

  66. @Dan — I’m not sure what you’re getting at. Unless you want to write everything in assembly, you’re going to be using a compiler. The compiler’s job is to produce a mathematically correct mapping of your source code onto the target instruction set. This isn’t magic, it’s entirely deterministic.

  67. The compiler’s job is to produce a mathematically correct mapping of your source code onto the target instruction set.

    This would be wonderful if it were true. Sadly, most compilers have been informally ‘worried’ into shape over the years. Please point me at the formal proof of a popular compiler…I’d love to make use of it.

    This isn’t magic, it’s entirely deterministic.

    Of course it is. My use of the term “magic” was to describe the manner in which I have often witnessed such features being used – “just hit the magical optimizer button and my code will perform better”

  68. @Dan: The two phenomena you point out – that popular compilers lack formal mathematical proof, and that the optimizer button gets used as “magic” – are simply artifacts of the realities of software development in the real world. In academia, computer science is very concerned with correct, formal mathematical proofs; in the real world, we have no time for such things. There are deadlines to meet, code to write, bugs to be fixed.

    This lack created a discipline known as “software engineering” whereby engineering principles are applied to software development. From this we get the Software Development Lifecycle (SDLC), waterfall diagrams, formal requirements documents, formal specifications documents, and metrics that compare the progress of development with the specifications. The problem was the same: we have no time for such things: there are deadlines to meet, code to write, bugs to be fixed.

    So then we come to the popular development model du jour — extreme programming. Extreme Programming, a type of “agile process,” adopts some of the best ideas from the bazaar open source model, but then attempts to twist it into contortions to make it palatable to managers, who understand nothing about software development.

    At the end of the day, no matter how it’s done, I often see programmers within business organizations kind of “defaulting” to the less formal, more chaotic methods of the bazaar model anytime the best “managed” software development models seem to fail — which happens in almost all software projects.

    It is my contention that the bazaar open source model of development is actually the most natural software development methodology, and that programmers will naturally tend towards it if left to their own devices.

  69. I assure you, Morgan, that I am well aware of the realities of our profession :) I have witnessed many fashions come and go over the past couple decades. I don’t dispute any of this, nor indeed do I particularly object…you do what ya gotta do to get the job done.

    My whole emphasis in this thread was my specific personal objection to the use of “magical” optimization…sure, in certain circumstances, where the effect is understood (and can be constrained) even I might conceivably opt to use them – but then the “magical” aspect will have evaporated due to the level of understanding.

    Generally, no, I don’t use them. I don’t need to. My approach to software engineering is heavily influenced by my background in formal methods. I have cultivated a pragmatic balance between hacking and formality that accommodates the real-world time & money constraints that face business. You are correct in asserting that the “Ivory Tower” academic formal methods approach is unmarketable. I strove to understand _why_ that was so, and what could be usefully/productively transferred across to the private sector.

    I believe I have succeeded. I write clean, consistent, efficient code using a measure of formalism at higher levels of abstraction, and quality tools to support my emphasis on construction correctness. I take no longer than might be considered ‘reasonable’ to do it, I don’t lean on “magical” features, and the longer term payoff is realized in drastically reduced maintenance.

    With respect to the various “managed” development methodologies that have won & lost favor over the years, one of my favorite essays is this. Amusing.

  70. Not really, Tom…sorry….I didn’t learn it from anybody. At least, at the time I was figuring this stuff out for myself, there was only heavyweight formal theory being taught. I am actually working on an eBook that I would like to self-publish online for free (with a PayPal ‘hat’ left out for voluntary contributions!), but it seems that this dude has beaten me to the punch. I have yet to read his book, but it seems like it will be familiar to me.

    My formal notation of choice is Z. I make use of Z/EVES & PowerProof. I’m a huge fan of Praxis and their SPARK toolkit for constructing Ada code. But I’m pretty language-agnostic really…C/C++, Ada, Java, Flex…whatever fits best.

  71. Would that engineering principles were applied to software development! Then software engineers would give mathematical assurance that their program functions as designed, under pain of going to prison and/or being fined a lot of money, the way civil engineers do for their roads and bridges. We would have drastically lessened the chances of a three-peat of the Therac-25 horror…

  72. Jeff,

    Texas has (or had) professional software engineers by law. Don’t know if it has helped, since people are still allowed to write software without being professional software engineers. They just can’t call themselves engineers.

    Yours,
    Tom

  73. That’s interesting, Tom.

    But aside from applying a standard before being legally allowed to adopt a label, are there any liability consequences for being/not being an “engineer”?

    Coz that’s where the rubber really meets the road…

  74. I ran into a bug once that was caused by a compiler generating code that was *logically* correct as optimized, and would have worked correctly in most cases, but was incorrect in the context of low-level programming of the processor. I was accessing processor peripheral registers that had to be written with 16-bit writes, but the compiler *always* optimized to 8-bit writes if it determined that only half the register was being modified.

    The vendor changed subsequent releases of the compiler so that anything declared as a volatile 16-bit value was only accessed with 16-bit operations, but I had to use assembly in my code in order to make my release date.
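
    In outline, the workaround pattern looked like this (register address and bit invented):

    #include <stdint.h>

    /* The peripheral register tolerates only 16-bit accesses.  When just the
     * low byte changed, the old compiler shrank the store to an 8-bit write;
     * later releases honour the volatile uint16_t declaration with full-width
     * accesses.  At the time, the store had to be done in assembly to
     * guarantee the width. */
    #define CTRL_REG (*(volatile uint16_t *)0x40001000u)

    static void enable_feature(void)
    {
        uint16_t v = CTRL_REG;   /* 16-bit read */
        v |= 0x0008u;            /* only the low byte changes ... */
        CTRL_REG = v;            /* ... but this must remain one 16-bit write */
    }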

  75. Jeff:

    Ontario (and several other Canadian provinces) recently changed their engineering licensing as well as education certification to include Software Engineering as a licensed engineering discipline. I was part of the 2nd cohort of students to graduate from such a program. The Therac disaster was one of the leading reasons for making that change.

    That having been said, you can’t assume that just because the profession exists, all of your problems will go away. Even in construction that’s not true. In most states you don’t need to have an engineer sign off on work to install a residential concrete driveway, for example. (Yes, you may need to meet building codes and get a permit, but that doesn’t mean you need to have an engineer sign off on the design.) A 30-story high-rise is going to need engineer certification.

    The basic mantra of engineering is that “people could die”. This means that a lot of software engineering methodology and analysis is needed for things like flight control software or medical devices. Large amounts of paperwork, possible formal methods and so on. Your word processor isn’t going to kill someone if it fails. Your television control software won’t (unless you have to worry about software-controlled over-voltages, or something, but that’s becoming less of a worry as time goes on, not more). In short, most of the software you interact with on a day-to-day basis doesn’t really matter. If it doesn’t work your life becomes inconvenient, but no actual harm will come. Under such conditions, faster, cheaper development is more important than formal proofs.

    One of these days I’d love to see someone do a formal proof for Windows Notepad or grep on Linux. It would be pointless, but at least somewhat entertaining.

    – Garrett

  76. …One of these days I’d love to see someone do a formal proof for Windows Notepad…

    Now that would be a form of torture that would make even this dour hermit blanch.

    I just threw up a little in my mouth…

  77. From ancient days when I worked on compilers for a living, I will never forget the complaint we received from someone when we came out with our new compiler that actually did serious optimization. The customer had code that, I am not making this up, pretty well matched *the* standard example of undefined behavior in K&R, i.e.

    a[i++] = i;

    The customer did not feel that he should have to find all the places his code did things with undefined results–and it could be argued that our compiler should have pointed it out. (I do not know what the ultimate response to the customer was.)

    I am glad that others have pointed out that optimizers can trip up code whose behavior is officially undefined–and also glad that the true cause of the Hercules issue has been found.

  78. I agree with everything in the article except the statement that optimization bugs are on the whole rare. During the years I wrote C full time, I actually found them to be quite common. Personally, I prefer to turn off optimization. I find it reduces heisenbugs to zero.

  79. Optimization bugs are _not_ rare. After finding too many problems due to structure mis-alignments in code compiled with MS Visual Studio, we turned off optimization permanently in our production code. It’s just not worth the trouble.

    1. >Optimization bugs are _not_ rare.

      They probably aren’t, in Microsoft compilers. Open-source compilers have a better record.

  80. Dan, thanks for using the hyphen properly on the word “anally-bound.” That is the correct way to spell that.
