Off with their header files!

I released a new software tool today. The surprise about this one is that it turns out to be consistently more useful than I expected. And thereby hangs a tale.

The tool is deheader. What it does is find, and optionally remove, #includes that a C source file does not actually require to compile. It does this by the brutally simple method of repeatedly attempting to recompile each source file with more #includes removed than on previous passes. There are some bits of cleverness about the order in which #includes are test-deleted, and it avoids some cases likely to lead to false positives – notably, includes within the scope of #if/#endif. But, basically, it just brute-forces its way through.
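
In outline, the core loop looks something like this – a minimal Python sketch of the idea, not deheader’s actual code (the function name and the single-file scope are my simplifications here, and the real tool is more careful, e.g. about includes under #if/#endif):

    import re
    import subprocess

    def strip_includes(sourcefile, build_cmd="make"):
        """Test-delete #include lines one at a time; keep each deletion
        only if the build still succeeds. Returns the removed lines."""
        with open(sourcefile) as f:
            lines = f.readlines()
        removed = []
        # Walk backwards so earlier indices stay valid after deletions.
        for i in reversed(range(len(lines))):
            if not re.match(r'\s*#\s*include\b', lines[i]):
                continue
            trial = lines[:i] + lines[i + 1:]
            with open(sourcefile, "w") as f:
                f.writelines(trial)
            if subprocess.call(build_cmd, shell=True) == 0:
                removed.append(lines[i].strip())
                lines = trial            # build still works: keep it deleted
            else:
                with open(sourcefile, "w") as f:
                    f.writelines(lines)  # this include was needed: restore it
        return removed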

The benefit of removing these unused includes is threefold. First, it reduces build time – sometimes by a lot, especially on C++ projects. Second, it may cut the runtime size of your binary modules. Third, by removing noise from #include lists it makes clearer what each module’s dependencies actually are.

The first surprise is that even on quite large projects this check doesn’t take forever. On modern hardware, single-module C compiles are very fast. I’ve learned from watching deheader runs on a couple of largish C projects that the compiler consumes a smaller fraction of total build time – and the linker a considerably larger one – than I would have thought beforehand.

C++ is another story; due to templates, the average compile time is greater and the variance much larger – but the gains from removing unused headers are proportionately larger, too. Notwithstanding the templates in the mix, complete deheader runs finish in (surprisingly) tolerable time even on projects the size of Battle For Wesnoth.

The second surprise is that there are a lot more unused includes out there than I expected. When I ran deheader on the gpsd sources this afternoon, I expected maybe a couple dozen hits. I got over 200. Running the first prototype on Battle For Wesnoth back in late 2008 turned up a preposterous pile of them – over a thousand, I think.

Evidently programmers not only drop a lot of #includes in place by reflex, but are very poor at noticing when code refactoring has made them unnecessary. It’s a quiet error, relatively harmless, and we have other things to pay attention to.

I use a guillotine as the deheader project’s logo (due props to hacker/artist sirea, who released it under CC-by). Off with their header files!

If you maintain a C or C++ project, I recommend running this tool on it. I predict the amount of #include cruft you clear out will surprise you.

169 comments

  1. The fact that you can get away with not including a system-wide header on one system doesn’t mean that the resulting code is conformant.

  2. >How well does it tolerate build systems with lots of compile-time flags?

    It ignores compile-time flags by default. This is less bad than it sounds, since the only ones it has to be concerned about are the ones that control #if and #ifdef inclusions. Those you can wedge into the build command it uses by employing the -m option.

    Any errors it makes will become very obvious when you do a normal build, so (providing you’re using a VCS) experimentation is safe.

  3. >The fact that you can get away with not including a system-wide header on one system doesn’t mean that the resulting code is conformant.

    In principle, no it doesn’t. But in practice, you’re on pretty safe ground if your deheader build command uses -Werror -Wfatal-errors, as I recommend on the manual page.

  4. I’ve wanted something like this for a long time, so much so that I hacked up something in perl that accomplished just this, once a long time ago. Never bothered to put in the work to make it public, though. Fortunately, we have ESR for that.

  5. Back in the dark ages of Windows programming, especially if one was stuck using MFC, you’d often include a bunch of headers you didn’t need, because with precompiled headers it was actually a net win on compilation time – and a pretty significant one at that. I haven’t been stuck doing that for the last decade or so; I suspect that machines are fast enough now that that’s no longer the case, but I certainly haven’t benched it.

  6. I know you open source guys hate to hear this, but Microsoft solved this problem fifteen years ago with pre-compiled headers.

    1. >I know you open source guys hate to hear this, but Microsoft solved this problem fifteen years ago with pre-compiled headers.

      OK, that’d help the compilation time…but unless Microsoft was cleverer about it than I’m expecting, it would increase module bloat.

  7. I was under the impression that in Java, importing more than was actually necessary (using java.util.* rather than java.util.ArrayList, etc.) would actually add to compiled file size, but that unnecessary C includes did not. Under what circumstances would they?

  8. You’d be surprised (ok, probably you wouldn’t) at the number of HTML pages being pushed around the web that include large numbers of javascript functions they never call.

    1. >You’d be surprised (ok, probably you wouldn’t) at the number of HTML pages being pushed around the web that include large numbers of javascript functions they never call.

      And writing an optimizer to remove that crap automatically is exactly the sort of thing I do. That was a hint, wasn’t it?

  9. It’s worse than that: with pre-compiled headers, more often than not a clean rebuild was required to notice changes, losing any gains from having them pre-compiled. I haven’t tried using them recently, so the latest Visual Studio may deal with that a bit better – or not.

    1. >It’s worse than that: with pre-compiled headers, more often than not a clean rebuild was required to notice changes, losing any gains from having them pre-compiled.

      Jessica: Thus, this is not a solution, just another source of problems. I fail to be in the least surprised.

  10. # esr Says:
    > OK, that’d help the compilation time…but unless Microsoft was
    > cleverer about it than I’m expecting, it would increase module bloat.

    Indeed, skinny was never a major goal of Microsoft. However, if you are interested, they have some remarkable features in their linker – including, for example, a profiler tool that can monitor the usage of a program, then feed the data back into the linker to rearrange the physical layout of the executable file to minimize page swapping.

  11. i think (speaking as a guy who’s probably written less than 10 KLOCs of C in his life….) that one of the big problems is incremental build-up of #include’s as you work on a file. ok, i start with stdio; hmm, this function requires sys/types, I’ll add that; this other function requires unistd, I’ll add that–who remembers to check whether unistd includes sys/types or not?

    who wants to be first to try this on a kernel… ;-)

  12. I’m sorry, but as someone with experience in porting software between different UNIX platforms, I can say with confidence that this is the wrong approach to remove headers that are unused.

    In reality, if your source file includes headers “A” and “B”, you may get away with removing header “A” from your source, because as it may happen, header “B” may include header “A” in some circumstances (e.g. in your particular C library).

    However, this may only happen on *your* specific platform and/or with the exact flags you used to compile the software and/or the compiler that you are using, possibly even the compiler *version* that you are using.

    For example, look at the manpage of printf(3) (first function that came to mind, I didn’t even have to think much). If you look at the manpage, you’ll see that the vprintf() function conforms to C89 and C99 and that it requires including “stdarg.h”.

    But, as it happens, if you are compiling your code in Linux and include “stdio.h”, but *don’t* include “stdarg.h”, then your program that uses vprintf() will compile just fine.

    However, if you remove the include of “stdarg.h”, now it’s very likely that your code will not compile on many other C89/C99-conformant platforms.

    This is not just a standalone example: I can assure you that given two different platforms (such as Linux and Solaris), this will happen for many combinations of functions and headers (i.e. cases where you can get away with removing one header on one platform, but you can’t on another).

    1. >I’m sorry, but as someone with experience in porting software between different UNIX platforms,

      I wrote the first book on this topic. In 1985, published ’87. You are earnestly explaining to grandma how to suck eggs.

      >I can say with confidence that this is the wrong approach to remove headers that are unused.

      Er. What’s the right approach, then? Fumbling around by hand?

      Of course a tool like this can hurt you if you apply it stupidly. Power tools are like that. Not stupid plus power tool is better than not stupid without.

  13. Er. What’s the right approach, then? Fumbling around by hand?

    If you’re not worried about precision you could mine the manpages of multiple systems for the header requirements of different calls, and ensure that the smallest set of headers covering all systems is present. That way the tool could double as a rough checker for header conformance.
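
    A rough Python sketch of the mining step (hypothetical – required_headers is an invented name, and SYNOPSIS formats vary between systems):

        import re
        import subprocess

        def required_headers(func, section="3"):
            """Scrape the #include lines from a call's manpage SYNOPSIS."""
            page = subprocess.run(["man", section, func],
                                  capture_output=True, text=True).stdout
            return sorted(set(re.findall(r'#include\s*<([^>]+)>', page)))

        # Run this on each target system and take the union; on Linux,
        # required_headers("vprintf") should report stdarg.h and stdio.h.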

  14. Another example of why “Premature optimization is the root of all evil”.

    There are many very intelligent methods to perform this function efficiently, e.g. using the most modern AI. And they would all fail. Yours is “provably” correct.

    I am impressed.

    Of course, now we want it to be more intelligent to speed things up. ;-)

    Maybe you could be more clear in explaining why pre-compiled headers are evil? I can think of many reasons, most centering around software updates and a fast development cycle. All involve the above maxim about “premature optimization”.

    1. >Maybe you could be more clear in explaining why pre-compiled headers are evil?

      Basically because they’d be hard to reference-check – just as hard as non-precompiled headers (same problem, really). Unless you solve that problem, what you’re going to end up doing is adding the precompiled blob to the module whether or not it’s ever used. Now, that happens with non-precompiled headers during the preprocessor phase; they generate C which becomes input to the compiler front end. But there’s a crucial difference.

      The difference is that with non-precompiled headers, the header contents become part of the parse tree, which gets a whole bunch of well-understood and effective pruning and optimization techniques applied to it. The parts that aren’t used go away and have no runtime space cost. With precompiled headers the picture is murkier. The most obvious ways to implement them would require the linker to do the same sort of pruning – assuming it’s done at all, which it probably isn’t.

      Then there’s the issue of the compiled headers not being updated properly when the sources change. That sort of failure would produce subtle bugs that would be stone bitches to debug.

  15. > I was under the impression that in Java, importing more than was actually necessary (using java.util.* rather than java.util.ArrayList, etc.) would actually add to compiled file size

    It might trivially slow down the compiler, but “import” is just syntactic sugar, and the final class file fully qualifies every type used.

  16. Is it -m CFLAGS='-Werror -Wfatal-errors', or -m “make CFLAGS='-Werror -Wfatal-errors'”? According to:


    -m

         Set the build command used for test compiles. Defaults to ‘make’.

    It says “command” here, not “options”.

    ESR says: Right. Your second form is correct; you build a command that includes the options. The man page is slightly wrong, I will fix.

  17. … because http://trac.webkit.org/browser/branches/old/safari-3-2-branch/WebKitTools/Scripts/find-extra-includes was too difficult to use? Hmm, Eric?

    OK, it only finds redundant includes, not un-needed includes.

    There are (of course) a couple issues with a simple non-parsing approach to this problem for C++, neither of which are solved by (ab)using the compiler in this way. In particular:

    Overload Sets:

    It’s possible that an overloaded function has declarations that come from different files. It might be that removing one header file results in a different overload being chosen rather than a compile error. Eric’s solution appears to have this problem, too. The result will be a silent change in semantics that may be very difficult to track down afterwards.

    Template specializations:

    Similar to the overload example, if you have partial or explicit specializations for a template you want them all to be visible when the template is used. It might be that specializations for the primary template are in different header files. Removing the header with the specialization will not cause a compile error, but may result in undefined behaviour if that specialization would have been selected. Eric’s solution appears to have this issue, as well.

    For example:

    Suppose I have fileA.h which declares a class classA with a template function SomeFunc(). This function is implemented directly in the header file (as is usual for template functions). Now I add a specialized implementation of SomeFunc() (a specialization for some particular type) in fileA.C (i.e. not in the header file).

    If I now call SomeFunc() from some other code (maybe also from another library), will it call the generic version, or the specialization?

    Survey says: Who knows!?!? It’s an error to have a specialization for a template which is not visible at the point of call. Unfortunately, compilers are not required to diagnose this error, and can then do what they like with your code (in standardese it’s “ill formed, no diagnostic required”).

    It should be noted here that PC Lint can solve this problem (and does) via a more sophisticated approach… but that tool isn’t FOSS.

    Eric’s statements in the post above (about pre-compiled headers) are laughable. (Dude, did you ever take a compiler class?) Here is a good reference on how to performance-tune compile times using pre-compiled headers.

    This is a decent book that has a 1/2-chapter on the problem (and potential solutions for avoiding the problem): http://www.amazon.com/dp/0201633620

    > I wrote the first book on this topic. In 1985, published ’87. You are earnestly explaining to grandma how to suck eggs.

    Assuming you’re referring to this one: http://www.amazon.com/Portable-System-Programming-Prentice-Hall-Processing/dp/0136864945

    technically, you co-authored it. (At least 3 authors that I know of.) And it’s not a very good book, it turns out.

    1. >OK, it only finds redundant includes, not un-needed includes.

      You disappoint me. Why so determined to find fault that you attack me for not knowing of and using a tool that doesn’t actually do the job?

      >Overload sets [and] Template Specializations

      Thank you. I will document these as potential failure cases that an intelligent user of the tool should know about.

      I am far from surprised that such cases exist. C++ with templates is notoriously difficult for human or machine reasoning; it’s full of spiked pit traps like these. “Stupid” tools can’t avoid them; “smart” ones can’t either, because the interactions between “smart” and C++ semantics create so many different edge cases where things can be subtly broken. I frankly won’t believe without very strong evidence your claim that PC-Lint “solves” this problem, because I judge that it’s beyond solution by anything short of what Charles Stross calls a weakly godlike AI.

      In fact, the prototype of deheader I wrote in 2008 tried to be a lot cleverer – for example, by reasoning about includes in #if/#endif contexts. After some experiments, I concluded that this was a losing game and wrote a stupid, brute-force checker instead. The trouble with “smart” is that it proliferates complexity and error cases. Rising complexity of core algorithms entails a tradeoff between the gain from handling more edge cases and the loss from making operation in the simple 90% of cases potentially buggier. This is why Ken Thompson famously said “When in doubt, use brute force.”

      Sometimes, one must embrace the suck and use complex, fragile algorithms anyway, because you can’t get enough input case coverage from simple ones. This was the situation when I wrote doclifter, which despite not exactly being a compiler is the most compiler-technology-intensive program I’ve ever written. In other cases, writing something simple will cover enough of the problem space to be useful even if it’s not theoretically perfect, and have an acceptably low defect rate where the “smart” approach would risk an unacceptably high one. deheader is such a case.

      >(Dude, did you ever take a compiler class?)

      No, I’ve only written four or five compilers. Clearly, with all that practical knowledge and experience under my belt, I cannot possibly know what I am talking about compared to someone who has (gasp!) taken a compiler class.

      >Here is a good reference on how to performance-tune compile times using pre-compiled headers.

      I’m sure such techniques exist, and if I read that book I’m betting I’d find I reinvented several of them while I was thinking about the problem last night. But you are not thinking like a systems architect – your reasoning about the tradeoffs is not extending past the code into the economics and organizational politics of its context. What are the odds that Microsoft, given its features-at-all-costs agenda and its record of poor architectural discipline, would in fact implement those techniques correctly?

      Too low to be worth considering. And if I needed confirmation of this, that talk of writing code to time-optimize the linker layout would have been sufficient. That, Lizzard, has the exact smell of a half-smart half-assed attempt to micro-optimize away the problems created by the stupidest possible glue-the-blob-to-the-runtime approach. And I’m not using “stupid” in an approving way this time; this is a stupid-really-is-stupid case.

      >technically, you co-authored it. (At least 3 authors that I know of.)

      You are excused for believing this on the evidence available to you. In fact, I wrote pretty much the whole thing – had notional co-authorship forced on me by the company I was working for, who thought it positioned them better or some bullshit like that. I was young and they were paying the bills. Took me years to get over being angry at being screwed like that.

      >And it’s not a very good book, it turns out.

      Your evidence for this is? At the time, Brian Kernighan thought it was pretty good and told me so. You’ll pardon me, I think, for trusting Brian’s evaluation of a Unix book more than yours. Did you write a better one?

  18. @winter>
    There are many very intelligent methods to perform this function efficiently, e.g. using the most modern AI. And they would all fail. Yours is “provably” correct.

    I think I’ve shown that it is not.


    I am impressed.

    You shouldn’t be, but like fanboys in other areas (iPhone, Android, …) you can’t see the flaws for all your genuflecting hero-worship.

  19. @esr> Of course a tool like this can hurt you if you apply it stupidly. Power tools are like that. Not stupid plus power tool is better than not stupid without.

    Not stupid would not write a stupid tool, and deheader is a stupid tool.

  20. @TJL
    “You shouldn’t be, but like fanboys in other areas (iPhone, Android, …) you can’t see the flaws for all your genuflecting hero-worship.”

    Thanks for the compliment.

    @TJL
    “I think I’ve shown that it is not.”

    You have very carefully shown that if you use a static type-checking compiler and ensure it cannot do static type-checking at compile time, then you cannot ensure the types will be correct at run time. Is that the gist?

    So eric’s method is only provably correct to the extent that static type checking works at compile time? Can I live with users who prevent the use of static type-checking being bitten by the fact that their types (classes) will be incorrect at run time? I think I can.

    How to solve this conundrum? By enforcing classes can be checked at compile time. So go and use Java.

  21. > What are the odds that Microsoft, given its features-at-all-costs agenda and its record of poor architectural discipline, would in fact implement those techniques correctly?

    Your continued fear of Microsoft is amusing.

    > Your evidence for this is?

    Have you read it recently with unjaundiced eye?

    Look, I’m not saying it sucks, just that it’s not very good. It’s mediocre. Thankfully, it seems to adhere to “primum non nocere”.

    > In fact, I wrote pretty much the whole thing

    Obviously it will not be difficult to blame your co-authors for any shortcoming. Perhaps your editor was a hack.

    > Brian Kernighan thought it was pretty good and told me so

    … and you were placated.

    Brian seems a very kind, disciplined individual, he presents a very gentle manner, even with things he obviously loathes (see “Why Pascal is not my favorite programming language.”)

    Here is what BK has to say about open source:

    I think that the open-source movement is in general a good thing. I am not sure that it will ever replace tailored, professional, rock-solid commercial products sold for profit. But what it might do in a lot of cases, and I think that genuinely it does do in some things like C compilers, is to provide a reference implementation and a standard that’s pretty good and that other implementations have to roughly match or why would anybody pay for them? I think that in that sense it’s a useful thing.

    1. >Your continued fear of Microsoft is amusing.

      If by “fear of” you mean “contempt for”, yes.

      >Have you read it recently with unjaundiced eye?

      No. It was a response to a time now gone; POSIX and ANSI C standardization had made it largely obsolete by 1995, and I’ve barely thought about it since then. It filled a genuine need ten years earlier, though – Prentice-Hall wouldn’t have published it in their series for the weightiest core Unix books otherwise.

      >Obviously it will not be difficult to blame your co-authors for any short-coming.

      Alas, I can’t do that, as their contributions pretty much ended with the Foreword and the Introduction.

  22. @TJL
    BK said
    “I am not sure that it will ever replace tailored, professional, rock-solid commercial products sold for profit.”

    And he was referring to Microsoft software at that point? Or Solaris maybe?

    This is what followed in that interview (2000):
    “As for Plan 9, I think that’s too late, unfortunately. I think Plan 9 was a great idea and it should’ve been released under an open-source license when it was first done, eight years ago, but our legal guardians would not permit it. I think that they made a grievous mistake. The current open-source license is definitely worth having but it’s not clear whether Plan 9, at least as a general-purpose operating system, will have much effect except in a relatively small niche. It has many things going for it which make it valuable in different areas, particularly where you need a small and highly portable operating system, but is it going to take over from Linux? Probably not. ”
    http://www.cs.cmu.edu/~mihaib/kernighan-interview/

    Now, if you would like to tell us Plan 9 is superior to Linux, be my guest. I won’t argue. But do you see how BK thought Plan 9 should have been released under “an open-source license”?

    But for an OS that supplies >90% of the Top500 supercomputers and is the best-selling OS for “smartphones” (whatever that may be), I think we can say it is at least mediocre.

  23. > This was the situation when I wrote doclifter, which despite not exactly being a compiler is the most compiler-technology-intensive program I’ve ever written.

    Because nobody else has ever had to do this? This guy wrote a whole book in troff, then translated it to DocBook for his editors at O’Reilly. http://www.oreillynet.com/xml/blog/2007/05/docbook_elements_in_the_wild_a.html#comment-599872

    quoting:

    I’m intrigued to see MySQL Cookbook included in the list, but I’m not sure it *quite* qualifies as a book produced “in DocBook.” The O’Reilly production department did indeed work with the DocBook version of the manuscript, but in fact I wrote it using troff + vi. I ran the chapters through a troff -> DocBook converter before sending them to O’Reilly. :-)

    Paul DuBois

    I’m pretty sure he used Pandoc

    1. >I’m pretty sure he used Pandoc

      If he did, he only caught the semantics that markdown can express – pandoc uses markdown as an intermediate format. This implies some significant limitations on, for example, translation of TBL markup. Nor will it handle upconverting raw troff cliches to the specialized subset of DocBook tags used in marking up command synopses. pic conversion? eqn-to-MathML conversion? Not going to happen. groff man as input, yes, but ms? mm? me? Berkeley mdoc? The odd troff superset the Tcl docs use? doclifter handles all that.

      pandoc cannot be as good at the more specialized job as doclifter is. That having been said, pandoc is a very clever idea and useful for the simpler cases. Thanks for the pointer… but as it happens, I know Paul DuBois and I’d be astonished if he did not in fact use doclifter. He was in the audience once when I did a talk on it.

  24. # esr Says:
    > Jessica: Thus, this is not a solution, just another source of problems. I fail to be in the least surprised.

    Sorry, this does not correspond with my experience. My experience is that pre-compiled headers work VERY well, and are very reliable. So I don’t agree at all. I’m not saying there are never any problems – but then, I’m not saying there’s no such thing as a compiler bug either.

    Further, I am confused on this whole code bloat thing. Exactly where is the extra bloat coming from? I suppose it depends on the compiler, but my experience is that the only potential code bloat source is the instantiation of templates that are not used, but AFAIK, compilers don’t do that, certainly MS compilers don’t. So perhaps I am missing something but I don’t know where this alleged code bloat comes from. But feel free to help me out of my ignorance.

  25. > Now, if you would like to tell us Plan 9 is superior to Linux, be my guest.

    I think we all understand that linux is the major example of “worse is better”.

    > I think we can say it is at least mediocre.

    too true! It could be a lot better, but alas, the MIT way lost out to NJ, and when Unix was commodified, we got linux.

  26. > And he was referring to Microsoft software at that point? Or Solaris maybe?

    Maybe Oracle, or Apple. :-)

  27. > Winter Says:
    > Maybe you could be more clear in explaining why pre-compiled headers are evil?

    In my experience, precompiled headers are unstable. On Windows I now always turn them off, but if I remember correctly the reason I do this is because they break the C preprocessor. For example when you change a macro used with an #if or used elsewhere, the compiler won’t notice that the precompiled header has a dependency on the changed macro. Result: out-of-date precompiled header gets linked into new program, massive destruction ensues. The problem wasted many hours worth of debugging time for me, before I realized: “on Visual Studio, when your code is acting really strange, step 1 is a full rebuild”. Frighteningly often, the full rebuild fixed the strange behavior.

  28. # Mike Earl Says:
    > You’d be surprised (ok, probably you wouldn’t) at the number of
    > HTML pages being pushed around the web that include large numbers
    > of javascript functions they never call.

    You know there is a real danger of getting the wrong idea here. If you just go blindly in and mark and sweep to find unused functions you could end up being worse off than you started. Many of the unused functions are in large libraries like jQuery, MooTools, jQueryUI etc. But you don’t want to cut out unused functions from these. These libraries usually come from CDNs (in well designed web sites anyway.) They are designed to sit in your browser cache, your proxy cache or somewhere else in your enterprise caching strategy. So if anyone in your enterprise accesses a jQuery web site, nobody else in the enterprise should have to ever reload this from the CDN again. (Not strictly 100% true, but you get the idea.)

    If you generate, for example, a customized jQuery library, cutting out the functions you don’t use, you will entirely defeat this caching strategy, and end up forcing your users to unnecessarily load jQuery again and again for your site. This is not good.

    So I wouldn’t go blundering into that one blind. My bet is that there probably aren’t too many functions in the JavaScript being copied about that aren’t used, or don’t fall into this caching arena, specialized corporate frameworks notwithstanding. But, I haven’t really tested, so I could be wrong.

  29. If you generate, for example, a customized jQuery library, cutting out the functions you don’t use, you will entirely defeat this caching strategy, and end up forcing your users to unnecessarily load jQuery again and again for your site. This is not good.

    Fair point. And in fact, the one time I ran into this, the solution ended up being “roll the large pile of #includes on each page up into a single one” rather than eliminating them entirely – bandwidth was irrelevant, strangely enough it was SSL connection setup that was killing us (and it wasn’t reusing connections due to some kind of reverse proxy issue and blah blah blah…).

  30. Re precompiled headers:

    Many years ago I wrote an assembler for a TI DSP in Python. I had 12 MB of DSP assembler source code, with lots of nested include files.

    I did the equivalent of pre-compiled headers, but only per build — the Python script would reuse the parse information it had available if it ever encountered the same file in the same state again. By same state, I mean that any defines or other parse information that was used inside that include file to control assembly had to be in exactly the same state, or it would re-assemble the include file.

    This was a huge win on build time, without the potential problems of using stale precompiled headers. Of course it only worked because the integrated make system was also written in Python and all the files were assembled from inside the same process.
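
    In outline, the trick looked something like this (a from-memory sketch, not the actual assembler code – parse_file and the one-entry-per-file cache are simplifications):

        # Reuse a file's parse results only if every define it consulted
        # last time is in exactly the same state again.
        parse_cache = {}

        def assemble_include(path, defines, parse_file):
            """parse_file(path, defines) returns (result, names_consulted)."""
            cached = parse_cache.get(path)
            if cached is not None:
                result, deps = cached
                if all(defines.get(n) == v for n, v in deps.items()):
                    return result  # same state as before: reuse the parse
            result, consulted = parse_file(path, defines)
            parse_cache[path] = (result, {n: defines.get(n) for n in consulted})
            return result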

    That assembler is actually my canonical example of why Python is not necessarily slower than C. The assembler was faster than its C predecessor (from TI) by almost an order of magnitude. The assembler actually started out much slower, but some of the optimizations (like the precompiled headers) that were almost trivial to get logically correct in Python would have been a nightmare to try to implement in C.

    In a large competitive market, it is often worthwhile to spend the time and money to polish C. But when you really have to ration the programmer time, Python can give you huge leverage.

    Re stripping headers:

    Once about 20 years ago, when I was thrown on a new project that was reusing a lot of old code, one of the other programmers came by and asked me what I was doing. When I replied that I was cleaning up the code, removing unused includes, fixing comments, etc. his response was “you shouldn’t be doing that — we have junior programmers that can do that.” But of course, the very act of modification can be a huge aid to understanding if not done blindly, and when the project lead was promoted, I was made project lead, because I knew a lot more about how it was all put together than anybody else.

  31. There are a couple of workable solutions to the “this could break other platforms” problem.

    1) Run the header remover code on lots of different systems, and only remove headers that were redundant on every system tested.

    2) Comment out the includes, note that they were removed by a program and that it’s possible they caused problems. Commit. Be willing to break it and see if anyone screams. The cost of a couple of build breaks in return for thousands or tens of thousands of somewhat faster compiles is still a net win in programmer time.

  32. # techtech Says:
    > when you change a macro used with an #if or used elsewhere,
    > the compiler won’t notice that the precompiled header has a
    > dependency on the changed macro.

    I’m sorry, but that is just not correct. I have never seen this problem, and I just ran a few test cases here and it didn’t manifest.

    FWIW, MS tools also have an incremental linker, and I have occasionally seen problems with stale code there (though rarely), and its biggest problem is that it bloats with incremental links. But a rebuild solves this problem, and for large projects it massively decreases the linker run time.

  33. @TJL
    “I think we all understand that linux is the major example of “worse is better”.”

    What do you expect from an OS that is developed without plan or structure, but in evolutionary fashion by natural selection of the fittest code?

    So instead of a well designed Cadillac, we get a camel. Take your pick.

  34. re: pre-compiled headers

    First off, this notion that some people *cough*Lizzard*cough* have that open source tools don’t support pre-compiled headers is ridiculous. GCC 3.4 and later supports pre-compiled headers, though many projects choose not to use them. There are lots of good and not-so-good historical and technical reasons why this is the case, and going into them is probably off-topic.

    Secondly, the use of pre-compiled headers might reduce compile time, but even projects that use pre-compiled headers could still benefit from this tool. Remember that this tool solves not just build time issues; esr stated that there are three things the tool aims to do:

    The benefit of removing these unused includes is threefold. First, it reduces build time – sometimes by a lot, especially on C++ projects. Second, it may cut the runtime size of your binary modules. Third, by removing noise from #include lists it makes clearer what each module’s dependencies actually are.

    Even if you are using pre-compiled headers, removing unnecessary headers is likely to further reduce build-time. Furthermore, doing so may (or may not) cut the runtime size of the resulting binaries, depending on the individual case. I wouldn’t argue this point with someone who has written “4 or 5 compilers.” And of course, the 3rd item is extremely valuable in and of itself: the more clarity and readability you can bring to any source code, the easier that source code is going to be to maintain. Anybody arguing this point clearly has job security issues.

  35. And of course, the 3rd item is extremely valuable in and of itself: the more clarity and readability you can bring to any source code, the easier that source code is going to be to maintain.

    I do not think this is a good tool for determining the “real” dependencies, since as Eric has acknowledged it will happily delete headers required for conformance based on flukes of implementation. As TJL pointed out, deleting headers can change program meaning without generating any warnings from the compiler. Perhaps if you understand the dependencies thoroughly you can run the tool without harm, but that’s not the use case it’s being advertised for.

    No, I’ve only written four or five compilers. Clearly, with all that practical knowledge and experience under my belt, I cannot possibly know what I am talking about compared to someone who has (gasp!) taken a compiler class.

    I think the comment that you don’t know what you’re talking about is fair enough when you embarrass yourself with nonsense like this:

    With precompiled headers the picture is murkier. The most obvious ways to implement them would require the linker to do the same sort of pruning, assuming it’s done at all — which it probably isn’t.

    This is flat out incorrect, and furthermore it is clear that you haven’t read anything about implementing PCH (or at least, nothing you understood).

  36. # Morgan Greywolf Says:
    > Even if you are using pre-compiled headers, removing
    > unnecessary headers is likely to further reduce build-time.

    That is true, though you will grant me that absent parsing and semantic analysis the impact will be much smaller.

    > Furthermore, doing so may (or may not) cut the runtime
    >size of the resulting binaries, depending on the individual
    > case. I wouldn’t argue this point with someone who has
    > written “4 or 5 compilers.”

    Appeal to authority is not an argument; please offer examples or an explanation of why you believe this is the case. It isn’t obvious to me.

    > And of course, the 3rd item is extremely valuable in and of itself: the more clarity and readability you can bring to any source code, the easier that source code is going to be to maintain.

    That is true; nonetheless, it is also not terribly significant. Include files are not a great metric of measurement here. This is especially true since you really need a list not just of the include files in your .c file, but the transitive closure of these.

    Of course the right way to do that is to have a decent IDE that lets you readily examine dependency chains and other internal data. Needless to say, Visual Studio has extremely good facilities for that.

    Of course the real truth here is that the problem is an ugly legacy thing that Thompson and Ritchie introduced because of the limitations of the system they had, and that has hung around to poison tractable program analysis for forty years – which is to say, the pre-processor.

    FWIW, I actually think this program esr wrote is interesting, because the way he did it is cool. Nonetheless, it is kind of like duct taping the old leaky plumbing system in a historic preservation house. It kind of sort of works, even though you can’t fix it properly.

  37. >And writing an optimizer to remove that crap automatically is exactly the sort of
    >thing I do. That was a hint, wasn’t it?

    It would be most useful as an Apache module. Or, if you wanted to assist the military industrial complex you would…Uh…Crap, my current contract won’t let me make money on this, but now I gotta check if I can talk about it.

    Shit.

  38. Appeal to authority is not an argument; please offer examples or an explanation of why you believe this is the case. It isn’t obvious to me.

    Well, it’s not an appeal to authority, it’s a statement of my own ineptitude. ;) I have, to date, written a total of zero compilers, so I actually only have very vague ideas about which cases would and would not result in smaller binaries. I think in some cases it would probably make no difference, since some include files simply specify macros or symbolic #defines; removing an unnecessary #include of such a header shouldn’t change the size of the resulting binary. In other cases, when you include a library header for example, the linker will add a symbol table for that library, so I think in that case it would result in a smaller executable.

    Of course, someone who has written compilers before is welcome to jump in and prove me wrong. :)

    Of course the right way to do that is to have a decent IDE that lets you readily examine dependency chains and other internal data. Needless to say, Visual Studio has extremely good facilities for that.

    Eclipse/CDT will do this as well, but of course there are plenty of other tools for doing this without an IDE. Development tools, I find, are very much a matter of personal and/or project preference. Furthermore, using “Visual Studio” and “extremely good” in the same sentence has got to be some sort of violation of the rules of English. :) (I could write volumes about what is wrong with Visual Studio, but I won’t do it here.)

  39. Morgan Greywolf Says:
    > Furthermore, using “Visual Studio” and “extremely good” in the
    > same sentence has got to be some sort of violation of the rules of English. :)

    I don’t get that at all. I have used lots of development systems and Visual Studio is certainly the best I have used, especially in combination with the C# language, which is certainly the best programming language I have ever used (for the kind of work I do). Here is a data point: just recently I had to write something pretty similar to the C preprocessor without the conditionals, but it did require #include and parameterized macros. In VS 2010 and C# I wrote it in two hours. It had two minor bugs. I think that is pretty quick, and I couldn’t do it that quick in any other system.

  40. @JessicaBoxer:

    I couldn’t do it that quick in any other system.

    I’m not trying to be judgemental here, but either you’ve lived in a Microsoft bubble the last 20 years, or you’re just very young and inexperienced.

    Something like the C preprocessor without the conditionals is extremely simple to write in just about any of the dynamic languages: Python, Ruby, or Perl. In fact, Python comes with pre-canned parser classes that would make such a project very easy indeed.

  41. # Morgan Greywolf Says:
    > Something like the C preprocessor without the conditionals is extremely simple to write in just about any of the dynamic languages: Python, Ruby, or Perl. In fact, Python comes with pre-canned parser classes that would make such a project very easy indeed.

    Obviously if someone else wrote it for you already, it is pretty easy. Batteries included is the one good thing about Python. But frankly, I have used Python; I really wanted it to be good. I really wanted it to be productive. I committed a significant amount of time to it. My opinion is that it is horrendous. In throwing away static checking, they have thrown away much of what makes programming simpler, and have gained very little in return. I can’t tell you the number of hours I have banged my head against a wall trying to find a bug in a Python program, to eventually find it, and realize that it would either have been impossible, or would have immediately been automatically detected with a static type system. We already had a dust-up about it on this blog during the time I was trying to get into it. It wasn’t pretty.

    I am unfamiliar with Ruby, but Perl? See the thing is that I actually need to be able to maintain the thing. And horror of horrors, other people need to be able to read my code after I am done.

    Oh and FWIW, using a “parser” class for this project is a very bad idea indeed. Unless you are doing the conditional thing anyway.

  42. Jessica,

    > I am unfamiliar with Ruby, but Perl? See the thing is that I actually need to be able to maintain the thing. And horror of horrors, other people need to be able to read my code after I am done.

    I don’t have any problem reading other people’s Perl. I know it’s a weird language, so I guess I’m also weird.

    Yours,
    Tom

  43. In throwing away static checking, they have thrown away much of what makes programming simpler, and have gained very little in return. I can’t tell you the number of hours I have banged my head against a wall trying to find a bug in a Python program, to eventually find it, and realize that it would either have been impossible, or would have immediately been automatically detected with a static type system.

    You have probably not absorbed the zen of incremental unit-testing that is so easy in Python. I have the opposite experience; what few bugs I have like this are caught at least as quickly as they would be in a compile/edit cycle in another language.

    I am unfamiliar with Ruby, but Perl? See the thing is that I actually need to be able to maintain the thing. And horror of horrors, other people need to be able to read my code after I am done.

    I often tell people that perl is fine for a 30 line program that you know you will never need to reuse, if you are a perl master. Oddly enough, there is a huge universe of such 30 line throwaway perl programs. Unfortunately, nobody ever got to perl-master status by just writing 30 line programs.

    Oh and FWIW, using a “parser” class for this project is a very bad idea indeed. Unless you are doing the conditional thing anyway.

    You may be right about this. But regexes are extremely powerful for this sort of preprocessing, and can be used in a very ad hoc fashion quite easily.
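
    Concretely, a toy regex-driven pass might look like this (purely illustrative, not Jessica’s program – it handles #include and parameterless #define substitution only):

        import re

        def preprocess(text, macros=None):
            """Expand "file" includes and simple parameterless macros."""
            macros = {} if macros is None else macros
            out = []
            for line in text.splitlines():
                m = re.match(r'\s*#\s*include\s+"([^"]+)"', line)
                if m:
                    with open(m.group(1)) as f:
                        out.append(preprocess(f.read(), macros))
                    continue
                m = re.match(r'\s*#\s*define\s+(\w+)\s+(.*)', line)
                if m:
                    macros[m.group(1)] = m.group(2)
                    continue
                for name, body in macros.items():
                    line = re.sub(r'\b%s\b' % re.escape(name), body, line)
                out.append(line)
            return "\n".join(out)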

  44. I don’t have any problem reading other people’s Perl. I know it’s a weird language, so I guess I’m also weird.

    I usually don’t find perl all that hard to read, but I do find it hard to write. I’ve rewritten several perl packages in Python to add functionality.

  45. Patrick Maupin Says:
    > You have probably not absorbed the zen of incremental unit-testing that is so easy in Python.

    I assure you, Python doesn’t own agile.

  46. How did we get onto the language wars? C# is hardly a product of the “Microsoft bubble”.

  47. > I usually don’t find perl all that hard to read, but I do find it hard to write.

    I find it easy to write as well, and I write larger (but not huge) programs in it. Python and Erlang seem cool, but I haven’t had a reason to use either at work. Ksh, C and Java are great. I’m not a fan of C++, but I suspect I’d like C#.

    > You have probably not absorbed the zen of incremental unit-testing that is so easy in Python.

    I’ve just started learning Perl unit testing. I suppose I should also learn JUnit.

    > I assure you, Python doesn’t own agile.

    Is it called C#Unit in C#?

    Yours,
    Tom

  48. I assure you, Python doesn’t own agile.

    And I assure you that if you had all that heartbreak and angst because of a few things that static typing could have caught, your unit tests really aren’t all that good.

    If your unit tests are good enough to allow rapid refactoring of the code (without having to do major surgery on the unit tests themselves), then you are in a much more “agile” position. When it comes to making major changes without disruption, nothing else I have ever worked with facilitates this as well as Python.
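
    The distinction is easiest to see in code. A contract-level test like this (my own toy example, not from any real project) keeps passing no matter how the function’s internals are refactored:

        import unittest

        def word_counts(text):
            """Count word occurrences; the implementation is free to change."""
            counts = {}
            for word in text.lower().split():
                counts[word] = counts.get(word, 0) + 1
            return counts

        class WordCountTest(unittest.TestCase):
            # These pin down observable behavior, not implementation details,
            # so a rewrite (say, to collections.Counter) needs no test surgery.
            def test_counts(self):
                self.assertEqual(word_counts("a b A"), {"a": 2, "b": 1})
            def test_empty(self):
                self.assertEqual(word_counts(""), {})

        if __name__ == "__main__":
            unittest.main()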

  49. @JessicaBoxer:
    I agree with Patrick: if you’re having issues because Python lacks static typing, you’re doing it wrong. While perhaps not the world’s finest example of Python unit testing, take a look at the unit tests I’ve put together for my pure Python KML library. Furthermore, look at the classes in the kml package and notice the type checking code and notice how some simple type-checking allows me to handle very different types of data, yet using only one method to do it.
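
    The pattern is roughly this (a hypothetical sketch in the same spirit, not code copied from the kml library):

        def to_point_list(coords):
            """Accept a "x,y x,y" string, a single (x, y) tuple, or a list
            of tuples, and normalize all three to a list of float pairs."""
            if isinstance(coords, str):
                coords = [tuple(float(v) for v in pair.split(","))
                          for pair in coords.split()]
            elif isinstance(coords, tuple):
                coords = [coords]
            if not all(isinstance(p, tuple) and len(p) == 2 for p in coords):
                raise TypeError("expected a string, a pair, or a list of pairs")
            return [(float(x), float(y)) for x, y in coords]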

  50. Regarding the static typing issue, I’ve come to appreciate both the flexibility of “duck typing” and the verification (and autocompletion) of static typing. Has anyone here thought through the implications of a language where method/function signatures were required to use Java-style interfaces only and not concrete classes (or only a heavily restricted subset)? Obviously, this would require brand-new classes to declare an interface to represent their functionality, but what architectural issues, unsolvable by syntactic sugar, would arise?

  51. And I assure you that if you had all that heartbreak and angst because of a few things that static typing could have caught, your unit tests really aren’t all that good.

    Rewriting the type checker many times over seems like a waste of time.

  52. > You disappoint me. Why so determined to find fault that you attack me for not knowing of and using a tool that doesn’t actually do the job?.

    I’d claim that redundant includes are a large part of the problem.

    > Thank you. I will document these as potential failure cases that an intelligent user of the tool should know about.

    Maybe you should put it in the paper. (Did that ever get finished?)

    1. >I’d claim that redundant includes are a large part of the problem.

      No, they’re not, not on the codebases I’ve used it on.

      >Maybe you should put it in the paper. (Did that ever get finished?)

      Hanging fire until I can get more face time with my collaborator.

  53. A version of this that worked for perl use module lines (and equivalents in other interpreted languages) could be even more useful as importing unused libraries can significantly add to the run time of all sorts of programs.

    I think it should be pretty trivial in perl (perl -c does almost all the heavy lifting) but my google-fu is failing to show one that exists, so I suppose I’ll have to write it.

  54. > No, they’re not, not on the codebases I’ve used it on.

    Does it use #pragma once or “include guards”?

  55. Morgan Greywolf Says:
    > I agree with Patrick: if you’re having issues because Python lacks static typing, you’re doing it wrong.

    Ah well that explains it.

    FWIW, I also detest JavaScript as a language for most of the same reasons, but, unfortunately, I have to use it. The way I get it to work is that I have built a whole infrastructure on top of jQuery to perform all the stuff that the language should have done in the first place, including type checking. So Roger’s comment of “rebuilding the type system” is right on the money. When I added all that stuff, my JavaScript velocity increased at least 400%, and I have far fewer of those “takes a whole day to fix” sort of bugs.

    (FWIW, I hate JavaScript more than I hate Python.)

  56. (FWIW, I hate JavaScript more than I hate Python.)

    If it makes you feel any better, I also hate JavaScript, but probably for different reasons than you. :)

    Furthermore, to borrow a phrase from Michael Elkins, all programming languages suck; (I just think that) Python sucks less.

  57. @Jessica:

    If you really want Python with static typing, you should look at the traits package from enthought. Obviously, smart people disagree with me and Morgan about the whole thing, because they have written a fairly comprehensive system that allows static typing. And I can see that Python with static typing is still an excellent language, just not as good in my opinion as Python without static typing.

    @Roger:

    > Rewriting the type checker many times over seems like a waste of time.

    I wouldn’t know. I never bother to write a type checker. My comment was to the effect that anything that a type checker would catch will also be caught by pure functional testing, if the functional tests are reasonably comprehensive.

  58. My comment was to the effect that anything that a type checker would catch will also be caught by pure functional testing, if the functional tests are reasonably comprehensive.

    Roger’s remark was to the effect that many of the problems that would need extensive functional testing to catch in Python would be caught early in the development process had the language a strong static type system that was checked at compile time.

    Imho it’s an engineering tradeoff. With dynamic types, a smart, careful programmer can put together a complex system quickly, and modify it quickly under changing requirements or greater understanding of the problem domain. This comes at the cost of speed, and of the assurance from a static type-checker that the program is type-correct. Hence why we see static type-checkers for Lisp and Python: they let the programmer hack in one modality and then switch to the other for speed and correctness.

    (I used to hate JavaScript too. Then I grew up and realized that as Lisp’s currency is closures and lists, JavaScript’s currency is closures and associative arrays. The real problem with JavaScript is differences in the APIs to different browsers.)

  59. I wouldn’t know. I never bother to write a type checker. My comment was to the effect that anything that a type checker would catch will also be caught by pure functional testing, if the functional tests are reasonably comprehensive.

    It’s a measure of the dynamic typing community that this demonstrably false claim gets thrown around so much. What people usually mean when they refer to unit testing is finite substitution; there are many statements that cannot be proven this way. A type checker does not suffer from this limitation.

    With a good static type system I can propagate invariants statically throughout the entire program, with testing required only at the single point at which the invariant is established. This is not possible in a dynamically typed language. Don’t get me wrong; I don’t mind Python all that much, but I see a lot of unfounded criticism/dismissal of static typing from Python programmers who do not know much about static typing.

  60. Don’t get me wrong; I don’t mind Python all that much, but I see a lot of unfounded criticism/dismissal of static typing from Python programmers who do not know much about static typing.

    Oh, I know static typing alright. I cut my programming teeth on Turbo Pascal. You don’t get more static than Pascal. :) I never liked Python’s dynamic typing until I learned to love some of the neat things one can do with it.

  61. @Jeff:

    Roger’s remark was to the effect that many of the problems that would need extensive functional testing to catch in Python would be caught early in the development process had the language a strong static type system that was checked at compile time.

    Yeah, I understood that, but I disagree.

    Imho it’s an engineering tradeoff. With dynamic types, a smart, careful programmer can put together a complex system quickly, and modify it quickly under changing requirements or greater understanding of the problem domain. This comes at the cost of speed, and of the assurance from a static type-checker that the program is type-correct. Hence why we see static type-checkers for Lisp and Python: they let the programmer hack in one modality and then switch to the other for speed and correctness.

    I’ve never used a static type checker in Python. What I tend to do is to sacrifice even more speed, and write very defensive code that checks invariants it cares about. Then, at some point, if things are too slow, I can go in and remove some checks. This is just an extension of the “premature optimization is the root of all evil” mindset — assuming that two modules implicitly agree on their mutual interface and that you don’t have to check for that at runtime is often a premature optimization.
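
    For instance (an illustrative sketch, not code from a real project), the defensive style looks like this – validate the shape of the data at the module boundary, and delete the check later if profiling says it hurts:

        def check_records(records):
            """Boundary check: each record is a dict with a non-negative count."""
            assert isinstance(records, list)
            for rec in records:
                assert isinstance(rec, dict), rec
                assert {"name", "count"} <= set(rec), "missing required keys"
                assert isinstance(rec["count"], int) and rec["count"] >= 0
            return records

        def merge_counts(records):
            check_records(records)  # cheap insurance; trivially removable later
            totals = {}
            for rec in records:
                totals[rec["name"]] = totals.get(rec["name"], 0) + rec["count"]
            return totals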

    @Roger:

    It’s a measure of the dynamic typing community that this demonstrably false claim gets thrown around so much.

    Well, you haven’t demonstrated it false.

    What people usually mean when they refer to unit testing is finite substitution; there are many statements that cannot be proven this way.

    That’s a pretty broad mischaracterization of what most people “usually mean.” Here, let me give you a better one that is more accurate in my experience:

    “What most people usually mean when they refer to static type checking is a very simplistic check that, for example, assures that the sum of two integers is always stored in an integer variable, but does not do any checking on the actual range of the output variable.”

    With a good static type system I can propagate invariants statically throughout the entire program, with testing required only at the single point at which the invariant is established.

    True, but with most statically typed languages, the propagated invariants are pretty simplistic. If you want to propagate more complicated invariants, guess what? The language will actually be doing work at runtime under the hood in most cases.

    Yes, there are formal methods that allow fancier type-checking, and yes those are actually becoming useful these days, but guess what? A lot of those don’t even require formal declarations. So this tempest in a teapot will be completely irrelevant in a few years.

    And, in Python, right now it is possible if you care about invariants to actually tag things such that your variables maintain information about where they came from, automatically check themselves, etc. Yes, it’s a lot of run time overhead, but for a lot of applications it just doesn’t matter. And what I typically do in the first pass of coding is check for invariants where I care about them. In Python, it is usually ridiculously easy to check even really complicated invariants about dicts or lists in just a couple of lines of code. That way, your code blows up at the check code, and not a lot later, and it’s easy to fix. After all your interfaces are happy, if you have a performance issue, then you can just remove a few checks.

    Don’t get me wrong; I don’t mind Python all that much, but I see a lot of unfounded criticism/dismissal of static typing from Python programmers who do not know much about static typing.

    I don’t know why you seem to assume that people making this “unfounded” criticism don’t know much about static typing. Like Morgan, I used to code a lot of Pascal. Then I coded a lot of Modula-2 and a tiny bit of Ada. I wouldn’t even touch C. When C got ANSI-fied, it got enough better that it was worthwhile (and it had the mind-share momentum). So I started writing C. Whenever I code in C, I use a lot of types, and I ensure that all warnings disappear at the most strict level, and I find that incredibly useful. But when I code in Python, I do things in a qualitatively different way, and don’t miss the static typing at all.

  62. > I see a lot of unfounded criticism/dismissal of static typing from Python programmers who do not know much about static typing.

    This might be a valid criticism of actual Python programmers who do not know much about static typing. I’m pretty sure Guido van Rossum, who created Python, knows a good bit about static typing. I also cut my teeth on Pascal. Patrick’s comments about the relative utility of static typing sound wise.

    Yours,
    Tom

  63. BTW, it could be that the reason I find Perl easy is that I got used to shell languages where everything is text and you have to use different comparison operators to compare things numerically instead of lexicographically.

    Yours,
    Tom

  64. That’s a pretty broad mischaracterization of what most people “usually mean.” Here, let me give you a better one that is more accurate in my experience:

    Give me an example of a unit test that doesn’t operate on the principle of finite substitution. Fuzzing is still finite.

    “What most people usually mean when they refer to static type checking is a very simplistic check that, for example, assures that the sum of two integers is always stored in an integer variable, but does not do any checking on the actual range of the output variable.”

    What does “simplistic check” mean? Anyone who is using C#, C++, OCaml, or Java, for example, is using a type system where you can have the type system do all the things I have described. It behooves you to actually make an argument, and not simply try to deflect points back at me on a superficial basis.

    True, but with most statically typed languages, the propagated invariants are pretty simplistic. If you want to propagate more complicated invariants, guess what? The language will actually be doing work at runtime under the hood in most cases.

    You misread me. The invariants are enforced at runtime and their correct propagation is ensured statically. I can make a type in Haskell that witnesses an arbitrary contract. The type system itself is unaware of it, but where I see that type, that contract is manifest. As I said, test once. This, unlike most testing methods, is exhaustive.
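
    Sketched in Python for this audience (Python checks none of it statically, which is exactly the point in dispute, but the pattern may clarify what I mean by a witness): validate once in the constructor, and every function that receives the wrapper inherits the guarantee.

        class Percentage(object):
            """Witnesses the contract 0 <= value <= 100, tested here once."""
            def __init__(self, value):
                if not 0 <= value <= 100:
                    raise ValueError("not a percentage: %r" % (value,))
                self.value = value

        def render_bar(p):
            # Wherever a Percentage arrives, the contract is already
            # established; a static checker would guarantee that no
            # unwrapped value can ever reach this point.
            return "#" * (int(p.value) // 10)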

    Yes, there are formal methods that allow fancier type-checking, and yes those are actually becoming useful these days, but guess what? A lot of those don’t even require formal declarations. So this tempest in a teapot will be completely irrelevant in a few years.

    I was not talking about Agda, Coq, or whatever it is you think I was talking about.

    And, in Python, right now it is possible if you care about invariants to actually tag things such that your variables maintain information about where they came from, automatically check themselves, etc. Yes, it’s a lot of run time overhead, but for a lot of applications it just doesn’t matter.

    Like I said, I don’t want to rewrite the type checker. For which applications do invariants not matter? Unfortunately, your assumption that simply running test cases through a complex program will uncover all errors is false. Some of these errors can be detected by a static type checker. Therefore your assertion is false. A type checker is a miniature proof searcher, and there are many universally quantified claims that it can prove. Unit testing is usually just finite substitution, which is insufficient to prove non-trivial universal claims.

    In Python, it is usually ridiculously easy to check even really complicated invariants about dicts or lists in just a couple of lines of code. That way, your code blows up at the check code, and not a lot later, and it’s easy to fix. After all your interfaces are happy, if you have a performance issue, then you can just remove a few checks.

    I already know you can do many things in Python if you’re willing to do the legwork. And I still do not want to write code to check things I can do more easily in the type system.

    I don’t know why you seem to assume that people making this “unfounded” criticism don’t know much about static typing. Like Morgan, I used to code a lot of Pascal. Then I coded a lot of Modula-2 and a tiny bit of Ada.

    So what you’re saying is you’re only three decades out of date? It’s comical that in another thread you’re criticising others for making absolute statements which fail in some cases, when your own statement here meets that criterion. A bigger man would simply admit error. You said,

    “My comment was to the effect that anything that a type checker would catch will also be caught by pure functional testing, if the functional tests are reasonably comprehensive.”

    I’ll make it easy for you: if you have a procedure that performs a type-incorrect operation on some strange edge case (the result of a bug, perhaps) then we can easily conceive of tests that are comprehensive functionally that will miss the case. A static type checker would alert you upon compilation. You are wrong.

  65. Tom: This might be a valid criticism of actual Python programmers who do not know much about static typing. I’m pretty sure Guido van Rossum, who created Python, knows a good bit about static typing.

    I did not mention Guido van Rossum. If you can find him arguing that functional testing will catch everything a static type checker will, do let me know.

    Morgan: Oh, I know static typing alright. I cut my programming teeth on Turbo Pascal. You don’t get more static than Pascal. :) I never liked Python’s dynamic typing until I learned to love some of the neat things one can do with it.

    So you know one language from 30 years ago? So what? We don’t live in the 1980’s anymore.

  66. # Patrick Maupin Says:
    > In Python, it is usually ridiculously easy to check even really
    > complicated invariants about dicts or lists in just a couple of
    > lines of code.

    I presume you understand that there is a world of difference between “a couple of lines of code” and zero lines of code?

    1. >I presume you understand that there is a world of difference between “a couple of lines of code” and zero lines of code?

      I’m sure he does. The advantage of those couple lines of code is that they check what you actually want to check, rather than a type-coherence property that may have some incidental relationship to what you want to check. Or may not.

  67. Roger,

    > I did not mention Guido van Rossum.

    I know you didn’t. You made a general point. My point was a gentle reminder that your general point may not actually apply to as broad a community as you seem to think.

    Patrick,

    > My comment was to the effect that anything that a type checker would catch will also be caught by pure functional testing, if the functional tests are reasonably comprehensive.

    You are either too proud of your testing code, or you are making another inadvisable absolute statement.

    Yours,
    Tom

  68. So you know one language from 30 years ago? So what? We don’t live in the 1980’s anymore.

    Now I know you’re just trolling, Roger. If you think I have no skill in other programming languages you haven’t been paying attention.

    I presume you understand that there is a world of difference between “a couple of lines of code” and zero lines of code?

    Sure, but you don’t always need or even want type-checking in Python. Say you have a class that reads and parses a text file. Maybe you don’t actually care if the file object implements any methods of a file object other than open, read, and close: therefore all you have to do is check to see if the file object has those methods. In that case, it doesn’t matter if the file object represents a real text file, a stream or a network socket. That means that your class is now network transparent, and you didn’t have to do anything to make it that way. Obviously, this assumes that the file-like object implements or at least interfaces to the proper protocols, but that’s probably what you want anyway.
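
    A sketch of what that check looks like (only the methods actually called matter):

        def slurp_lines(source):
            # Duck-type check: a real file, a StringIO, or a socket
            # makefile() all qualify; what it "really is" never matters.
            for name in ("read", "close"):
                if not hasattr(source, name):
                    raise TypeError("need a file-like object with %s()" % name)
            try:
                return source.read().splitlines()
            finally:
                source.close()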

  69. Here is the heart of the matter:

    > The advantage of those couple lines of code is that they check what you actually want to check, rather than a type-coherence property that may have some incidental relationship to what you want to check. Or may not.

    If one puts those lines of code in the object, running after every state change, it effectively becomes a type-coherence property (and a tested one at that!) – which Patrick said above that he likes to do.

    Yours,
    Tom

  70. # esr Says:
    > I’m sure he does. The advantage of those couple lines of code
    > is that they check what you actually want to check,

    Personally, I like to have my cake and eat it, especially if the first piece is free.

    Morgan Greywolf Says:
    > Maybe you don’t actually care if the file object implements any methods of
    > a file object other than open, read, and close: therefore all you have to do
    > is check to see if the file object has those methods.

    This is an interesting example. Let’s say that the purpose of the code was to write a log file when the network failed. This is the sort of code fragment that is very hard to unit test. Since the check is done at run time, and I supply a bad object, my code fails. And my code fails at the absolute worst time possible (when the customer is on the phone complaining – because obviously it was your software that killed their network, not the other way round.) This is the very heart of what troubles me about dynamic checking — it violates one of the fundamental principles of good software development — bugs found early are way cheaper than bugs found late.

    Needless to say, modern statically checked programming languages provide an abundance of facilities to do exactly what you just described in a compile time checked manner. Interfaces with constraints being the obvious solution here. Unfortunately, crusty old languages like Turbo Pascal, C and (to a limited extent) C++ don’t have them.

  71. Jessica,

    > Needless to say, modern statically checked programming languages provide an abundance of facilities to do exactly what you just described in a compile time checked manner. Interfaces with constraints being the obvious solution here.

    Is Java modern in this way? If so, how do you do this in Java? Is Erlang? If so, do you know how to do it in Erlang? I really want example languages, because this is cool. I think Perl 6 might be a modern statically checked programming language.

    Yours,
    Tom

  72. Tom DeGisi Says:
    > Is Java modern in this way? If so, how do you do this in Java? Is Erlang?

    Sorry, I have only a passing familiarity with Java, and read about Erlang once in a book.

    In C# you would define (or more likely use an existing) interface which is essentially a description of the functions you require, and define the type of the object as that interface. It is an extremely common coding paradigm in C#.
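
    For the Python speakers here, my understanding is that the nearest analogue is an abstract base class (enforced when the object is created rather than when the program is compiled, which is precisely the difference under discussion); a rough sketch, using modern Python:

        import abc

        class LineSource(abc.ABC):
            """The 'interface': a description of the functions I require."""
            @abc.abstractmethod
            def read(self):
                raise NotImplementedError

        class FileSource(LineSource):
            def __init__(self, path):
                self.path = path
            # Forgetting to implement read() makes FileSource() raise
            # TypeError at instantiation -- later than C#'s compile-time
            # check, but earlier than the first call.
            def read(self):
                with open(self.path) as f:
                    return f.read()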

  73. @Tom:

    My comment was to the effect that anything that a type checker would catch will also be caught by pure functional testing, if the functional tests are reasonably comprehensive.

    You are either too proud of your testing code, or you are making another inadvisable absolute statement.

    Not my code, per se. I don’t devote nearly as much code to unit tests as a lot of Python programmers, for the simple reason that most of my code is not in mission critical long running processes that need multiple nines.

    But let me break it down a bit more:

    1) Although, as Roger Phillips points out, “finite substitutions” cannot prove invariants that type checking can prove (unless, of course, you have little enough state that you can exhaustively test all possibilities), it is also true that for every required invariant that type checking can enforce, there is at least one finite substitution test that can be written that will provide a counter-example if the invariant is violated.

    2) There are other invariants and functionality tests that type-checking cannot possibly help with. These other things that need checking, in general, are at least as important as simple type checking. Moreover, there are more of these possible issues than there are simple type checking, often by an order of magnitude.

    3) Code that tests functionality will usually also find type problems if there are any. So, for example, if you have 90% coverage of functionality, and possible type errors are only 5% of possible errors that could affect functionality, then possible type errors are still only 5% of the 10% of possible errors that you don’t have coverage for, or 0.5% overall. In other words, a very small number.

    4) As discussed earlier on this thread, I tend to do a lot of testing in-line, for what is essentially contract compliance. As you point out, this testing also has the side-effect of testing type coherence, but usually without explicitly testing for types. For example, summing a list of percentages to see if they total 100% will automagically check to see if they are all numbers.
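
    Concretely, a toy version of that percentage example:

        def check_allocation(percentages):
            # The functional check: the parts must cover the whole.  If a
            # string has crept into the list, sum() raises TypeError, so
            # the type check comes along for free.
            if abs(sum(percentages) - 100.0) > 1e-9:
                raise ValueError("does not total 100%%: %r" % (percentages,))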

    @Jessica

    Personally, I like to have my cake and eat it, especially if the first piece is free.

    For one thing, in most statically typed languages, the first piece is not free. You are so used to it you don’t see it, but keeping variable types coherent has a cost associated with it in programming time.

    In general, the way I code, the first piece of cake is free, but I don’t get to eat it until I’ve written the code for the other pieces as well. But this is usually quite incremental — whenever I write a piece of code, I ask what that assumes, and write code that checks for compliance. As I explained in (4) above, this often does an excellent job of implicit type checking.

    Let’s say that the purpose of the code was to write a log file when the network failed. This is the sort of code fragment that is very hard to unit test. Since the check is done at run time, and I supply a bad object, my code fails. And my code fails at the absolute worst time possible (when the customer is on the phone complaining – because obviously it was your software that killed their network, not the other way round.) This is the very heart of what troubles me about dynamic checking — it violates one of the fundamental principles of good software development — bugs found early are way cheaper than bugs found late.

    There is no question that you have hit on one of the most difficult aspects of testing in Python. Testing exception paths is very difficult. This might make Python coding more problematic for long running processes, which it appears is one of the kinds of programming you do.

    Although I do use CherryPy for an internal webserver, most of what I use Python for is batch tasks. (I do use it for long-running simulations of hardware, but once the data producers and consumers are up and running, exceptions in this environment are few and far between.)

    When building batch point-solutions, I can actually find most bugs much more quickly with Python than if I were using an edit/compile cycle, but you are absolutely right that unless explicit (and sometimes almost impossible to get right) tests are written for exception cases like disk full or network down, the sorts of simple errors that human programmers are prone to making will slip through the cracks much more easily in a dynamic language.

    I would say that easily over 80% of the bugs in my released code are in error reporting code, where I’m trying to report that the user did something stupid (because, in fact, the user did something stupid), but even though I check for the erroneous input and my program refuses to operate on the garbage, I didn’t actually test with that exact garbage and I’ve done something wrong like not having enough operands for a format string for the error message. The good news is that in my environment (building chips, where it is very costly if we get the mask set wrong), people are actually happy that my programs refuse to run when supplied bad input, and helping users to troubleshoot their data by supplying a useful error message rather than a traceback exception is actually icing on the cake, rather than required and expected behavior.

  74. 1) Although, as Roger Phillips points out, “finite substitutions” cannot prove invariants that type checking can prove

    It’s good to see you finally admitted that you were wrong.

  75. I was trying to be polite and discuss within the parameters that others defined, but obviously that tactic doesn’t work all that well with assholes.

    In case you hadn’t noticed, I also discussed other tests that test the functionality at interfaces, that aren’t static type checkers, that implicitly check types. I never said that the only tests I use are “finite substitution” tests, although I did discuss how even those can reduce the number of uncaught type errors substantially.

  76. @Roger:

    I missed one of your earlier posts. Here are some replies:

    Like I said, I don’t want to rewrite the type checker.

    You don’t have to. If you want static typechecking by itself in Python, use something like traits. But if you want implicit typechecking in the context of other tests, that’s easy too.
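
    For flavor, here is a tiny sketch of the sort of self-checking attribute that packages like traits automate (a homemade descriptor, not the traits API):

        class Typed(object):
            """Poor man's traits: an attribute that checks its own type
            on every assignment."""
            def __init__(self, name, kind):
                self.name, self.kind = name, kind
            def __set__(self, obj, value):
                if not isinstance(value, self.kind):
                    raise TypeError("%s must be %s, got %r"
                                    % (self.name, self.kind.__name__, value))
                obj.__dict__[self.name] = value

        class Order(object):
            quantity = Typed("quantity", int)
            customer = Typed("customer", str)

    Assigning Order().quantity = “ten” then blows up at the assignment, not three calls later.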

    For which applications do invariants not matter?

    I never asserted that invariants don’t matter. I asserted that for a lot of applications, extra runtime overhead to check invariants doesn’t matter.

    Unfortunately, your assumption that simply running test cases through a complex program will uncover all errors is false.

    I never assumed or said that. I did say that all errors could be caught by pure functional testing. But testing functionality can include testing interface contracts, and type errors can be caught implicitly when testing for invariants at a higher level than the type system operates at.

    Some of these errors can be detected by a static type checker.

    Yes, and any error that could be caught by a static type checker can also be caught by other means.

    Therefore your assertion is false.

    No, your mischaracterization of my assertion is false.

    A type checker is a miniature proof searcher, and there are many universally quantified claims that it can prove. Unit testing is usually just finite substitution, which is insufficient to prove non-trivial universal claims.

    I never said that what you call unit testing, by itself, will always catch all errors that static typing could catch. Looking back through the thread, I can see how you could mistakenly ascribe this opinion to me. I did make the somewhat weaker claim that with good unit tests, a programmer won’t usually suffer from errors that a static type checker could catch. I stand by my claim that functional testing can be done at runtime in Python that will catch everything that static typing could catch in another language. I think you implicitly believe that as well; otherwise you wouldn’t be so adamant about not wanting to rewrite the type checker.

    I already know you can do many things in Python if you’re willing to do the legwork. And I still do not want to write code to check things I can do more easily in the type system.

    But, the whole point of the disagreement is that, in my opinion, a modern type system only catches the tip of the iceberg, and it’s possible to write code to catch any portion of the iceberg you want, including the whole thing if that’s required.

    I don’t know why you seem to assume that people making this “unfounded” criticism don’t know much about static typing. Like Morgan, I used to code a lot of Pascal. Then I coded a lot of Modula-2 and a tiny bit of Ada.

    So what you’re saying is you’re only three decades out of date?

    No, I use Python. BTW, I found it interesting that you included C++ in the list of languages with “modern” type-checkers. The C++ type-checking system is held together with baling wire and chewing gum.

    I’ll make it easy for you: if you have a procedure that performs a type-incorrect operation on some strange edge case (the result of a bug, perhaps) then we can easily conceive of tests that are comprehensive functionally that will miss the case. A static type checker would alert you upon compilation. You are wrong.

    If the tests are functionally comprehensive (at the lowest block level where code might be reused), and show no bug, then there is no bug. I declare my programs correct when they work, not when they compile.

  77. This is an interesting example. Lets say that the purpose of the code was to write a log file when the network failed. This is the sort of code fragment that is very hard to unit test.

    No, it’s not. VirtualBox. You think like a corporate code monkey, not a systems architect. With virtual machine technology and a few tricks you can simulate any conditions you want. But you’d probably not use such things because you haven’t received your necessary Training Certificate(tm) and such techniques aren’t ISO-something certified and you’d have to clear it with your Six Sigma Black Belt, who can’t look at it until after he’s done with his pre-pre-planning committee meetings for this year’s Team Building Event.

    1. >You think like a corporate code monkey, not a systems architect.

      OK, Morgan, for this I have to call unjustified rudeness. Jessica may not quite be one of us, but she has shown she is far too bright to be dismissed with this kind of slur.

  78. @Morgan:

    Good point. I don’t typically think inside that particular (virtual) box, because, as I described, my programming problems don’t typically involve making sure that long-running processes do the right thing.

  79. # Roger Phillips Says:
    > It’s good to see you finally admitted that you were wrong.

    FWIW, I hate this sort of comment. If this were my blog this would be close enough to getting you banned. Crowing about something like this simply discourages people from thinking openly about ideas, and demands that they defend their ideas. It is the worst thing you could possibly do to foster open serious discussion.

    Ironically, I largely agree with Roger’s point of view; however, I hate being in the same camp as him when he says offensive things like this. FWIW, on the other side, Morgan, who always struck me as a nice person, has been remarkably unpleasant toward me, saying I live in a bubble, or I am grossly inexperienced, or lately, that I am a corporate code monkey. Regardless of the fact that all these things are objectively untrue, it hardly makes for enjoyable recreation, which is what this blog is supposed to be, no?

    Here is a crazy thought: why not just make your case, and leave the ad hominem for the politicians and other fools.

  80. Morgan Greywolf Says:
    > With virtual machine technology and a few tricks you can simulate any conditions you want.

    Yes indeed, that is why I indicated that it was difficult; I did not say it was impossible. However, we corporate code monkeys often have to weigh costs against benefits, and one of the conclusions we reach in that regard is that the compiler automatically checking these things for you, at no marginal cost, is often a better choice than the extreme effort involved in the procedure you talked about. And that is the subject under discussion.

  81. @Jessica and @esr: I’m sorry for my earlier comment. I was in a very flippant mood last night for reasons not related to this blog and should have checked the chip on my shoulder at the door before I replied.

    And, yes, Jessica, I understand cost/benefit analysis. I do have a business degree, believe it or not. I think that’s a big difference between the corporate environment and those of us coding because we enjoy it: we think nothing of creating entire testing systems and derive actual enjoyment out of it. In a corporate environment, you have to justify your time better than that.

  82. Patrick Maupin Says:
    > For one thing, in most statically typed languages, the first piece is not free. You are so used to it you don’t see it, but keeping variable types coherent has a cost associated with it in programming time.

    I don’t know if this is true for “most statically typed” languages, since I have only used a few. However, I think it varies. In the language I use primarily, I don’t think it is true. A well designed program has the type system corresponding to the problem domain model, and maintaining the type system is almost the same as maintaining the domain model, and thinking about coherence of variable usage is almost the same as thinking about coherence of the artifacts in the domain model.

    For example, if I were a corporate code monkey working for Walmart managing their distribution center, I am going to have to maintain types called packages, customers, trucks and so forth. But maintaining these is very closely analogous to maintaining the actual domain model itself. I should probably say also that the domain model necessarily includes both the problem domain and the domain of representing that on the computer (such as underlying database tables, file systems etc.)

    I’m not saying there isn’t some divergence between the two, however, the degree of divergence is determined by the quality of the programming model, which is to say the representational power of the type system. My experience is that C# from version 3.0 on is excellent at this.

    A while ago Eric talked about an important principle: the single point of truth. One advantage of a static type system is that there is a single point of truth about the object invariants and capabilities. If you unit test, you spread that logic all through your unit tests. I think that is bad.

    One interesting side advantage of that is that creating programs in the first place is much easier. Because your program describes what an artifact can do, the IDE can provide a great deal of feedback as you go. Recently, I was laughingly discussing with a friend the bad old days when you had to look up function parameters. With the IntelliSense feature of Visual Studio, you simply don’t have to do that, since they are provided, with full documentation, automatically as you type. True for system objects and user defined objects. I am sure dynamically typed languages have some feedback like that: but my experience is that they are simply not as powerful as tools like IntelliSense or the various object browsers and program analyzers built into Visual Studio. But perhaps that reflects my inexperience in your world more than being a real difference.

  83. # Morgan Greywolf Says:
    > I think that’s a big difference between the corporate environment and those of us coding because we enjoy it: we think nothing of creating entire testing systems and derive actual enjoyment out of it. In a corporate environment, you have to justify your time better than that.

    I think that is a very narrow view of “cost” and “benefit”. If you didn’t have to do all that screwing around to get your automated network fail system set up, you could instead spend your time doing other things, creating other code for your own enjoyment. The point is that setting up all that stuff is only necessary because you have to unit test stuff that is statically tested automatically in a different system. If you prefer it to writing other code then go nuts. However, with a statically typed language you don’t have to. I’m just giving you choices Morgan.

    I know this is probably hard to believe, but people like me write code for enjoyment too. There is a thriving community of people who do so and share for free with others. If you are unaware, I suggest you look at the web site http://www.codeproject.com, or the complete systems on CodePlex.

  84. For one thing, in most statically typed languages, the first piece is not free. You are so used to it you don’t see it, but keeping variable types coherent has a cost associated with it in programming time.

    I don’t know if this is true for “most statically typed” languages, since I have only used a few. However, I think it varies. In the language I use primarily, I don’t think it is true. A well designed program has the type system corresponding to the problem domain model, and maintaining the type system is almost the same as maintaining the domain model, and thinking about coherence of variable usage is almost the same as thinking about coherence of the artifacts in the domain model.

    It is true that more and more statically typed languages now allow implicit typing, which definitely reduces the overhead associated with static type checking. I think this improvement is directly attributable to competition with dynamically typed languages like Python. It is hard to balance static typing with the sort of dynamic execution that Python allows, but in truth, most Python code could be statically checked fairly easily. As John Aycock says “Giving people a dynamically-typed language does not mean that they write dynamically-typed programs.”

    In an ideal world, programmers could use dynamic features like introspection and duck typing quite easily, and whatever could be checked statically in a reasonable amount of time would be checked with no effort on the part of the programmer. I think we are lurching toward that world, but we are not there yet. In today’s world, static vs. dynamic type-checking is yet another tradeoff to think about when choosing a language. But in my opinion, the other benefits of Python (including ease of testing things that a static type checker couldn’t possibly find) far outweigh the lack of a standard static type-checker, so much so that I don’t even bother with any of the add-on typecheckers that are available (some of which are quite good).

    My experience is that C# from version 3.0 on is excellent at this.

    All indications are that C# is helping to move the ball in the right direction. But I don’t use Microsoft products. Google’s Go looks interesting, but I’m too old to devote too much time to a language that is still in that state of flux. My strategy is to have a very small toolbox that I am very familiar with for everyday use, and only break out other tools when the job is specialized enough that there is a clear win. Right now, Python is the primary programming tool in my toolbox.

    A while ago Eric talked about an important principle: the single point of truth. One advantage of a static type system is that there is a single point of truth about the object invariants and capabilities. If you unit test, you spread that logic all through your unit tests. I think that is bad.

    As I mentioned earlier, you can write tests that test invariants where you care about them. This places the test code next to the functional code. Surely that’s not a bad thing? In any case, I’m not sure that some of the capabilities you (or Roger for that matter, for some of what he said) are referring to are related to static type systems. If you declare that something is an integer in the range of 1-100, the compiler can check portions of that statically, but might have to insert exception code to catch some violations at runtime. This isn’t static any more; it’s a dynamic check. It’s extremely easy in Python to do such dynamic checks on object accesses, and type systems like traits will do that for you. As I mentioned to Jessica, the only time I have had issues because of Python’s lack of static checking was because of very infrequently executed code. If this were important enough (if it was a mission critical long running process), I would use code coverage tools to weed out these surprises.
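
    To make the 1–100 example concrete, the dynamic residue of such a type looks like this in Python (a sketch, not any particular library):

        class Widget(object):
            @property
            def load(self):
                return self._load
            @load.setter
            def load(self, value):
                # the part of "int in 1..100" that no compiler can fully
                # discharge statically for arbitrary runtime inputs
                if not (isinstance(value, int) and 1 <= value <= 100):
                    raise ValueError("load must be an int in 1..100, got %r"
                                     % (value,))
                self._load = value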

    One interesting side advantage of that is that creating programs in the first place is much easier. Because your program describes what an artifact can do, the IDE can provide a great deal of feedback as you go. Recently, I was laughingly discussing with a friend the bad old days when you had to look up function parameters. With the IntelliSense feature of Visual Studio, you simply don’t have to do that, since they are provided, with full documentation, automatically as you type. True for system objects and user defined objects. I am sure dynamically typed languages have some feedback like that: but my experience is that they are simply not as powerful as tools like IntelliSense or the various object browsers and program analyzers built into Visual Studio. But perhaps that reflects my inexperience in your world more than being a real difference.

    I don’t like IDEs. I don’t even like editors that think they are smarter than I am. I can type quickly, I have a reasonably good memory, but I make lots of little micro-edits as I go along, and the only thing I want out of an editing system is consistency in operation, not lots of little bubbles showing me a tree of possibilities, along with random indentation which depends on the partial code fragment I left hanging below while I went back to type some more on a previous thought.

  85. Patrick Maupin Says:
    > I don’t like IDEs.

    The difference between cavemen and astronauts is in the tools they use and the education they have. I’d advocate not throwing one of those away.

  86. The difference between cavemen and astronauts is in the tools they use and the education they have.

    You probably think that astronauts are somehow “better” but it really depends on the environment they’re in, doesn’t it? I don’t have a real urge to spend a lot of contiguous time in a tiny tin can with stale air, so I use Python. Although there are Python IDEs available, and Python type checkers available, I choose not to use either, because in my particular environment, the cost/benefit analysis doesn’t support the use of such tools.

    I’d advocate not throwing one of those away.

    That’s fine. I gave you my reasons. Just as I expect that my language of choice (either Python or something else in the future) will eventually both be dynamic and have good static typechecking, I also believe that at some point, something that you admit meets your classification of IDE will be my coding environment. But that’s not happening today. BTW, even if a Microsoft IDE (which I suspect you use based on some of your statements) were the best thing since sliced bread, it wouldn’t make up for the fact of having to run it on Windows…

  87. @Patrick:

    but I make lots of little micro-edits as I go along

    You sound like you code like me: start typing one thing, something you’re typing gives you an idea so you jump somewhere else, make a few quick changes to a routine you just wrote out, go back, type some more, realize some code you just wrote could be made into a function rather than repeating the same thing 3 times, put it into a function, that gives you an idea, so you jump somewhere else, etc. Sometime later, you realize that the routine you started to write still isn’t done, so now you’ve got to go back and complete it…

    Or that could just be me projecting my own scatter-brained habits onto you. ;)

  88. Patrick Maupin Says:
    > You probably think that astronauts are somehow “better”

    Better at producing technology artifacts for sure. Nonetheless, tools are leverage, even if you have to change your work style to accommodate the needs of a tool. Sitting on my butt for hours on end is not my first choice either, but that is the work style a car requires, and it is certainly a worthwhile compromise to make so that I can travel at 70mph rather than 3mph on my feet.

    There is another thing, beyond just speed of execution. Tools allow you to do the higher level work rather than the lower level boring work. For example, if you are laying out an HTML page it is often easier to use a visual tool rather than messing around with mark up. Though obviously you need both. And the integrated part is really important here, because it allows you to seamlessly move from pretty graphics through to event handlers, without lots of fussing around.

    Or, alternatively, it is a lot easier to have a tool create a web services definition file automatically from a class definition than do it yourself. The tools I use allow me to do these things with zero effort on my part, so I can think at a higher level rather than a lower level. In fact this applies to the type system too. For sure I can check the schema of an object at the point I am going to use it, but I’d rather do it once and let a tool take care of it for me in future.

    Perhaps this is the essence of our difference on this. I’d rather use tools to do the boring stuff, so I can do the interesting stuff.

  89. Better at producing technology artifacts for sure.

    But that gets back to the question of whether artifacts are needed or not.

    Nonetheless, tools are leverage, even if you have to change your work style to accommodate the needs of a tool.

    Tools have their uses, but sometimes using one tool obviates the need for another type of tool. Also, tools are often overused, hence the “when all you have is a hammer, every problem looks like a nail” meme. At work, I have fancy logic analyzers, spectrum analyzers, frequency generators, etc. available. I usually use an oscilloscope or a voltmeter. When you use an elephant gun to kill a gnat, you’ll actually probably miss.

    Tools allow you to do the higher level work rather than the lower level boring work.

    Sure, in some cases. I use tools all the time for this. In fact, I even create tools all the time for this. But we’ve been specifically talking (recently) about purported “productivity” tools. In general, these tools allow you to do different work. I’m not terribly excited by most “productivity” tools, and I can do the high-level non-boring work on a napkin.

    Frankly, where I work, the people who get really excited by tools usually get sucked into validation, because they want excuses to play with shiny tools, and they tend to view every problem through the lens of the available tools. Sometimes, when they complain that something is not possible to do, I wind up building a new tool for them. Sometimes they are envious that I get to build stuff and they don’t, but that’s really because of their mindset of “I can’t do it because the tools don’t support it” rather than “What do I have to have in order to do this?”

    Or, alternatively, it is a lot easier to have a tool create a web services definition file automatically from a class definition than do it yourself. The tools I use allow me to do these things with zero effort on my part, so I can think at a higher level rather than a lower level.

    Sure, and tools like swig and Cython automate boilerplate for the same sort of impedance mismatch, and I happily use them. Why do you apparently have some mental image of me with my club, touch-typing with a stick because my fingers are too big for the keyboard?

    In fact this applies to the type system too. For sure I can check the schema of an object at the point I am going to use it, but I’d rather do it once and let a tool take care of it for me in future.

    Checking the schema of an object does not actually enforce a contract between two pieces of code. A piece of code checking that it is happy with all its inputs can enforce a contract with a meaningful error message, rather than some exception or core dump many CPU cycles later. It just so happens that such contract checks actually do a pretty good job of finding things that static typing would as well, even if that is not an explicit goal.

    Perhaps this is the essence of our difference on this. I’d rather use tools to do the boring stuff, so I can do the interesting stuff.

    No, I think the essence of our difference is that you think you have it all figured out, perhaps because you’re young.

  90. @morgan:

    Or that could just be me projecting my own scatter-brained habits onto you. ;)

    No, I think you’ve captured it pretty accurately.

  91. @Corporate Code Monkey:

    For example, if you are laying out an HTML page it is often easier to use a visual tool rather than messing around with mark up. Though obviously you need both. And the integrated part is really important here, because it allows you to seamlessly move from pretty graphics through to event handlers, without lots of fussing around.

    Here’s, perhaps, another difference between us. I don’t bother to expend the time and energy to produce pretty graphics. To me, that’s boring.

    There are only really two reasons to invest the time to learn a tool — either because the benefit exceeds the cost in a purely economic fashion, or because you are keenly interested in what you can do with the tool. The unix mindset means that most unix command line tools (like Eric’s deheader tool that started this whole discussion) have a very high benefit/cost ratio. As Eric says, such tools are easily “discoverable.” Some other tools, not so much, unless you spend a lot of time using the tool.

    Specialization is one of the keys to modern society. For example, I mentioned that a lot of avid tool users where I work do validation. Another locus where you find these people is schematic capture and board layout. I do these things very occasionally. Usually somebody else does them for me. If I were really interested, then I could spend a lot of time doing schematics and board layout, and I would learn to use the tools a lot better, but from a purely economic viewpoint, it’s better to have one or two people learn the tools that well, so one corollary to this is that if I learned one of these tools well, then I would probably be tasked with doing a lot more schematics or board layout in lieu of some of the other things that I get to do now. (One of the tools I do know fairly well dumps some extra work on my plate occasionally — I am the local expert in using Xilinx FPGAs, so I get to do a lot of FPGA work. But I find that more interesting than schematics, so that’s fine.)

    Now, one of the things I do that I find interesting is programmatic schematic checking for my boards. When I get a new schematic from the guy who is inputting it, rather than doing an “optical compare” to see what changed, a Python program does this for me. A Python program also ensures that the netlist is connected appropriately. A Python program also creates the pin constraint file that connects verilog signals inside the FPGA to the correct traces on the board. I have a very good track record of producing working, useful boards, but that’s just one part of what I get to do.
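
    A toy version of the comparison step (the netlist format here, a dict mapping net names to lists of “REFDES.PIN” strings, is invented):

        def diff_netlists(old, new):
            # report, per net, which pins were added or removed between
            # schematic revisions, instead of eyeballing two printouts
            for net in sorted(set(old) | set(new)):
                a = set(old.get(net, ()))
                b = set(new.get(net, ()))
                if a != b:
                    print("%s: added %s, removed %s"
                          % (net, sorted(b - a), sorted(a - b)))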

    You write as if learning tools is unmitigated goodness, but in reality, you can’t learn every tool really well, and in any case, not all tools are created equal. I am very careful about what I put in my main toolbox, and for the last 14 years Python has held a prominent place there. Now if I were dashing off GUIs or webpages all day long (a prospect that doesn’t excite me), then perhaps I would be more interested in autocompletion for all the APIs that I couldn’t remember. But, I’m more interested in algorithms than in gluing lots of little API pieces together to make something pretty, and my work habits and the niche I have gravitated into both reflect this.

  92. Jessica,

    > Sorry, I have only a passing familiarity with Java, and read about Erlang once in a book.

    > In C# you would define (or more likely use an existing) interface which is essentially a description of the functions you require, and define the type of the object as that interface. It is an extremely common coding paradigm in C#.

    OK. Java has interfaces. Same term, same usage. I thought you were saying that you could code arbitrary constraints which do even more checking at compile time. Not as cool as I thought.

    Yours,
    Tom

  93. I am not a hacker and would rather call myself a hobbyist geek, but what I think really attracts the likes of me to Python is: instant gratification! The heady feeling of seeing an idea turn into reality within minutes is what makes Python really attractive to a certain audience.

    The step between algorithm and working solution is narrower in Python than in any other language. In my recent projects I found out how quickly you can produce working and sane code in Python compared to, say, C (it’s worse in C++ due to its inherent complexity). For another, Python provides so many goodies in its standard library (reasonably well documented at that) that it makes it almost like a drug addiction. Step away from Python for a minute and you realize how much you miss.

    Again, in my limited experience with C++, the language forces you to plan, plan and then plan again before producing working code, forcing you to make a seemingly endless variety of choices too early in development, especially in object-oriented development. When you make mistakes in object design, C++ can become a nightmare. Fundamentally there is a certain complexity in language syntax that slows down production of working code. Then in C++, the lack of coherent, single-source documentation for the standard library is a big drawback: especially when you need to incorporate both C library functions and C++ standard library classes. Certain things like object serialization that are so much of a boon in Python (using the standard pickle or cPickle module) require you to rely on third party libraries like boost in C++.

    Finally, not having to compile sources is really a benefit with Python. It might be a minor point, but it makes the development cycle slightly shorter, especially when you rely on a heavyweight library in C++.

    My own experience is that if you have a working algorithm in mind you can probably make working code faster in Python than any other language today (note: I have no experience of other so-called “scripting” languages like Perl or Ruby). And yes, GUI programming is so heavyweight and even Python’s elegance cannot take away from the fact that GUI development bogs down your basic task of what you intend to achieve (unless the GUI is an end in itself).

    The fact that I, with only theoretical knowledge of how to build a search engine and implement it, could write a simple search engine database generator for my website in Python and also write a Python CGI script to query it in a matter of days is something I consider Python’s real benefit. I doubt my path would have been this easy in C or C++.

  94. Tools have their uses, but sometimes using one tool obviates the need for another type of tool. Also, tools are often overused, hence the “when all you have is a hammer, every problem looks like a nail” meme. At work, I have fancy logic analyzers, spectrum analyzers, frequency generators, etc. available. I usually use an oscilloscope or a voltmeter. When you use an elephant gun to kill a gnat, you’ll actually probably miss.

    You seem to be inverting the meme there. That meme is (afaik) talking about using a tool for something it’s not really good at because of lack of availability or familiarity. You seem to be saying that with all these fancy tools (that one would assume would be better at some kind of task than a more general tool), you still just use an oscilloscope and a voltmeter.

    Kind of like saying “If everything can be banged into the wood like a nail then you don’t need anything more than a hammer”.

  95. @JonB:

    That’s a very good point. During the process of editing my post, I conflated a couple of orthogonal thoughts, so it doesn’t read very well.

    But my own interpretation of the “when all you have is a hammer, every problem looks like a nail” bumper-sticker has two parts:

    “when all you have” — In other words, if you are not careful, you will define your capabilities by the tools that are available to you.

    “looks like a nail” — you will attempt to solve problems by banging on things, like Emmett on the Andy Griffith show. This may sometimes work, but will quite often fail.

    Now if you have somebody with that mindset, and you give him a screwdriver in addition to his hammer, he now has a whole new set of capabilities. He will also probably change how he handles a few problems. For example, he can now remove a screw without leaving claw marks all over whatever it was screwed into.

    But that basic mindset can prevail no matter the number of tools, and it can lead to people trying to do ludicrous things with their tools to solve problems. The man who only had a hammer is amazed and excited by his new capabilities when you give him a new tool, and he now has to use that new tool to its full extent, just like he was using his hammer.

    The opposite of that mindset is the carpenter who carries around a few basic tools on his person at all times, but who implicitly knows that he has (among other things) a nail gun and a couple of different types of power saw on the truck, and a whole room full of woodworking equipment back at the shop.

    This carpenter might use a hammer for a few nails, but get the nailgun if he was doing a lot of nails. Or he might do some cuts with the circular saw on his truck, but decide to go back to the shop and use the table saw if he had to build a cabinet. If it’s a really fancy table saw and he doesn’t use it very often, he might get somebody else who is more familiar with it to use it for him.

    So, what I was trying to say, is that (to me) “when all you have is a hammer, every problem looks like a nail” implies a focus on the capabilities of the tool. The problem is secondary. If you smash a TV in the process of trying to fix it with a hammer, well, it’s just time to get a new TV, because that one was obviously beyond repair.

    I try to focus on the problems, and think about which tools are best for the problem at hand. Like the carpenter I described, I might use my hammer and screwdriver (volt meter and oscilloscope) at my desk, or I might decide to go back to the lab to use some more heavy-duty equipment. I might even temporarily drag the heavy equipment out to the jobsite. The point is, I don’t just have a hammer, so even though that’s often the tool I reach for first, the decision to use it on any particular problem is informed by the fact that other tools are available or can be built, if that will lead to a better solution.

  96. Patrick,

    FWIW, I think I better understand the deal. You and I write completely different types of software. You seem to write “algorithmic” software, by which I mean the bulk of the work is getting a complex algorithm right, I write “user interaction” software, where the bulk of the work is getting the complex user interaction right.

    I have written some “algorithmic” software, such as replenishment prediction algorithms for retail stores (which is a very complex problem BTW), but most of my work is of the latter kind.

    I would propose that programming language/environment matters much less for “algorithmic” software than for “user interaction” software. The fact is that if I wrote a connection net checker for a board layout in Java, the same code would compile in C++, C#, Pascal, Modula and so forth with only minor syntactic changes. The same code would also run in Python with slightly more complex syntactic changes.

    That is certainly not true of “user interaction” software, for a host of reasons, but which all come down to one core thing: the programming environment itself is a much more dominant thing.

    A second factor is that I tend to write really large pieces of software; you seem to write smaller, more cohesive software. The software I write is used by tens of thousands of people; it sounds like the software you write is used by a handful of close people. Again, the combinatorial explosion of large software is far more dependent on tools than small cohesive programs are.

    BTW, I am not suggesting for a moment that “algorithmic” software is easier than “user interaction” software. Far from it. I am just saying that programming languages and programming systems matter far less in your world than in mine.

  97. I would propose that programming language/environment matters much less for “algorithmic” software than for “user interaction” software. The fact is that if I wrote a connection net checker for a board layout in Java, the same code would compile in C++, C#, Pascal, Modula and so forth with only minor syntactic changes. The same code would also run in Python with slightly more complex syntactic changes.

    I agree on the “environment” part. This is partly because with user interaction software, you are typically interfacing with a large, ill-documented toolkit with an incoherent API. This is the environment autocompletion is helpful in (and, as discussed, it’s not an environment I’m particularly interested in).

    I disagree on the programming language part. Python is a huge win over all the other languages you mentioned for rapidly testing out different algorithms on an intractable problem, although there are other languages you did not mention that pose some competition for it.

    A second factor is that I tend to write really large pieces of software; you seem to write smaller, more cohesive software.

    In the past, I have contributed to, indeed been the primary maintainer of, some reasonably large packages. Just not with GUIs.

    The software I write is used by tens of thousands of people; it sounds like the software you write is used by a handful of close people.

    Some software I have written has been used by millions of people. For one example, if you have a TiVo that downloads programming over a landline, you may be using some software I helped write. Some hardware I helped design is also quite heavily used — if you’re making a call with a standard POTS phone, whether it’s connected to a central office, or plugged into a DSL or cable modem, you may be communicating through a chip I worked on.

    I am just saying that programming languages and programming systems matter far less in your world than in mine.

    That’s absolutely true. What you may or may not have understood, though, is that this is mostly because I deliberately arranged my world this way. The whole point of being a sentient tool user is that you can shape your environment to suit you better.

  98. >That is certainly not true of “user interaction” software, for a host of reasons, but which all come down to one core thing: the programming environment itself is a much more dominant thing.

    Blame third party libraries for this. GUI code is not built into the majority of programming languages.

    1. >Does this also try removing #includes from header files?

      No, though I’m thinking of trying to add that. I have an internal representation called an InclusionMap that represents the dependencies; what I’d have to do is modify the test logic so that when a .h file is tested all objects that depend on it are test-recompiled.
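
      Roughly this shape (InclusionMap is real; the helper names are invented for the sketch):

          def test_deletion(header, include, inclusion_map, compiles):
              # Headers aren't compiled directly, so deleting an #include
              # from one means test-recompiling every object that depends
              # on it, not just the header's own translation unit.
              comment_out(header, include)                      # invented helper
              dependents = inclusion_map.dependents_of(header)  # invented API
              if all(compiles(source) for source in dependents):
                  return True     # the include really was removable
              restore(header, include)                          # invented helper
              return False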

  99. I have an internal representation called an InclusionMap that represents the dependencies; what I’d have to do is modify the test logic so that when a .h file is tested all objects that depend on it are test-recompiled.

    If the goal is to reduce unnecessary header inclusions, and the situation is “A includes B, B includes C”, then just because removing C from B causes A to stop compiling properly does not mean that B doesn’t have an unnecessary inclusion. Assuming B compiles on its own and is used (or may in the future be used) in other modules, the best solution may be to move C’s inclusion from indirectly-via-B to directly-in-A. In one of the projects I work on we have a little script to compile headers on their own, which helps in finding unnecessary inclusions to remove and missing inclusions to add… but I’m not sure how hard it would be to pull off the same functionality when you’re forced to work through arbitrary third-party makefiles.
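
    For the curious, a minimal cut at such a script (include paths and compiler flags omitted; the real one has to honor the project’s build settings):

        import glob, os, subprocess, tempfile

        for header in glob.glob("include/*.h"):
            # wrap the header in a throwaway .c file so the compiler
            # sees it with no help from any including module
            with tempfile.NamedTemporaryFile(mode="w", suffix=".c",
                                             delete=False) as stub:
                stub.write('#include "%s"\n' % os.path.abspath(header))
            status = subprocess.call(["gcc", "-fsyntax-only", stub.name])
            print("%-40s %s" % (header, "ok" if status == 0 else "BROKEN"))
            os.unlink(stub.name)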

  100. Okay, I’m only partway through reading all the comments, but I want to respond to a lot of what people are saying to support IDEs.

    There seems to be some confusion of what, for example, Patrick said when he said he didn’t like IDEs. I doubt very much that he’s contesting the usefulness of all the features you’d get with an IDE. Code correctness checking tools, a good make system, a nice debugger, profilers, documentation viewers and so on are all lifesavers and life without them is just painful. No programmer is contesting that.

    What we (or at least I) object to is that it all needs to get lumped into one program. It’s better to have a nice big toolbox, and select the right implement for the job. For example, in my C programming workflow, I use:
    * Gedit (with tons of plugins) – text editor
    * GNU Make/Autotools/SCons – build system
    * GCC – compiler
    * GDB/DDD – debugger
    * Valgrind Memcheck – code correctness analysis, memory leak detection
    * Valgrind Callgrind/KCachegrind – performance profiling and visualization
    * Bash scripts – compile-time code generation, other odd jobs
    ESR’s new tool is intended to be added to that toolbox, yet another tool to use in the right circumstances. Will it magically fix all your problems and cure your uncle’s brain cancer? No, of course not, but it’s not intended to. Those who are coming up with pathological circumstances under which it won’t work are missing the point. It’s not supposed to replace Lint, or whatever other correctness check you use. It’s supposed to be used alongside them.

    On the other hand, IDEs are intended, from the get-go, to do everything evar. The whole point of an IDE is that you can spend your entire coding career using it and never use anything else (or at most install some plugins.) But at the same time, I doubt very much that they actually *can* fix all your problems and cure your uncle’s brain cancer. All they end up doing is becoming huge and ungainly.

    Basically, I’m proposing that, in general, two applications will ALWAYS have more room for functionality than one. If you glue them together in any non-trivial way, you will have problems which will either result in more development work, more bugs, or less functionality. Better to leave them separate.

    So Deheader isn’t pointless because you can make MSVC do something similar. Far from it. Having tried both, I can say that MSVC is worse than Deheader at what Deheader does (fancier analysis algorithms notwithstanding) precisely because MSVC has so much other baggage as well. And $EDITOR is also not pointless because MSVC can do something similar. And GCC is not pointless for the same reason.

    I’ll have more comments as I read through more of this thread.

  101. There seems to be some confusion of what, for example, Patrick said when he said he didn’t like IDEs. I doubt very much that he’s contesting the usefulness of all the features you’d get with an IDE. Code correctness checking tools, a good make system, a nice debugger, profilers, documentation viewers and so on are all lifesavers and life without them is just painful. No programmer is contesting that.

    Absolutely correct. Except… When I’m writing in Python (which, these days, is most of the time when I’m not writing Verilog), I have never used code correctness checking tools or a make system or a debugger. I occasionally get up close and personal with a profiler, and for documentation, I either type help() at the Python command prompt or look it up on the web.

    That’s not to say that other people don’t run different kinds of tools on some of my Python. For example, Roberto Alsina is interested in things like cyclomatic complexity.

    But that gets straight back to your point. What kind of IDE is going to have the exact tools you’re interested in?

    Having said this, I must confess to often using Xilinx’s IDE when I am dashing off code for a small test FPGA or CPLD. But I really just use it as a glorified make system when I’m too lazy to build a proper makefile. I don’t even use its editor — just click the build button. The equivalent steps for compile and link are just too painful. In the past I wrote my own build driver for this (at a different company), and I need to finish up version 2 of that, primarily because the tools spill an egregious amount of inescapable warning verbiage.

  102. A decent Python editor is SPE (Stani’s Python Editor). Its code completion is not comprehensive, but it does allow you to browse the classes/modules.

    By the way, I do agree that a good code completion tool with function tooltips does increase productivity. I forget function parameters and their order easily, so not having to dive into the documentation every so often is a great help.

    That’s about the only reason I don’t use the basic text editors like vim or gedit for coding.

  103. > By the way I do agree that a good code completion tool with function tooltips does increase productivity.
    > I forget function parameters and their order easily so not having to dive into the documentation ever so
    > often is a great mechanism.
    A quick search shows that a Gedit plugin already exists to do this for PHP. If it didn’t exist, writing such a plugin for Python would be almost trivial – Gedit supports Python plugins, and Python has a built-in help() function. For example, try running this code:
    help("string.split")
    I don’t object to putting in features like that, which you directly use when editing. That makes perfect sense, if it’s implemented as an optional plugin. It’s when you start adding other stuff that it starts to annoy me. Just because two programs are often used at the same time (i.e. editor and compiler,) doesn’t mean they share the same problem domain.
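
    To make that concrete, here is a rough sketch of the lookup such a plugin could do (hypothetical code, using modern Python’s inspect and pydoc from the standard library, not any actual Gedit plugin API):

    import inspect, pydoc

    def tooltip(dotted_name):
        # Resolve a dotted name (e.g. "os.path.join") to an object, then
        # build the kind of one-line hint a completion tooltip would show.
        obj = pydoc.locate(dotted_name)
        if obj is None:
            return None
        try:
            sig = str(inspect.signature(obj))
        except (TypeError, ValueError):  # some builtins expose no signature
            sig = "(...)"
        summary = (inspect.getdoc(obj) or "").split("\n")[0]
        return "%s%s -- %s" % (dotted_name, sig, summary)

    print(tooltip("os.path.join"))
    # e.g. os.path.join(a, *p) -- Join two or more pathname components, ...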

  104. I’m agnostic on most religious programming issues. I like Eclipse and I like vi. I like Java and I like Perl. I don’t like the GPL, but I can live with it. I guess I’m not a hacker (and there are other reasons to think that).

    Yours,
    Tom

  105. I love IDEs. There’s one I use daily that I couldn’t live without. Perhaps you’ve heard of it: it’s called Unix.

    It’s an old joke but it’s also ha-ha-only-serious: it was as an IDE that Unix made its way outside the research world in the 1970s. IIRC, PWB/Unix (Programmer’s Workbench) provided compilers and job-control tools for other, more difficult-to-control platforms like OS/360.

    IDEs in the more narrow sense are useful to a certain kind of programmer, but Emacs and a constellation of scriptable Unix tools get me 90% of where I need to be, and there are better ways to cover the last 10% than the bloat and cruftiness of a tool like VS or Eclipse.

    But that’s just me. I’m not every programmer, and I represent a minority of programmers that dwindles every day.

  106. Note that I’ve trimmed off some parts of your response, because there’s only so many times I can repeat myself.

    Patrick:

    You don’t have to. If you want static typechecking by itself in Python, use something like traits. But if you want implicit typechecking in the context of other tests, that’s easy too.

    So your defense of dynamic typing is that you can do static typing?

    I never asserted that invariants don’t matter. I asserted that for a lot of applications, extra runtime overhead to check invariants doesn’t matter.

    My error.

    I never assumed or said that. I did say that all errors could be caught by pure functional testing.

    Wrong. You said that they “would” be caught, not that they “could” be. Quoting you directly:

    “My comment was to the effect that anything that a type checker would catch will also be caught by pure functional testing, if the functional tests are reasonably comprehensive.”

    I can have “reasonably comprehensive” functional tests that do not show up errors. This statement is in error. Of course, there always exists a test that will uncover a particular error. So what? The space of inputs may be enormous. There is no guarantee you will pick the correct values.

    But testing functionality can include testing interface contracts, and type errors can be caught implicitly when testing for invariants at a higher level than the type system operates at.

    They can. Or they can also not be caught. I do not understand why you are pursuing this thoroughly flawed argument.

    No, your mischaracterization of my assertion is false.

    See above.

    Looking back through the thread, I can see how you could mistakenly ascribe this opinion to me. I did make the somewhat weaker claim that with good unit tests, a programmer won’t usually suffer from errors that a static type checker could catch.

    “anything that a type checker would catch will also be caught by pure functional testing, if the functional tests are reasonably comprehensive”

    I stand by my claim that functional testing can be done at runtime in Python that will catch everything that static typing could catch in another language.

    Nope, not what you said. Stop trying to weasel out of this linguistically (“can”). As you say here, I can come up with a finite substitution that will catch every error catchable by static typing. I can, in fact, write a finite substitution that will catch every error possible. However, in practice the errors are not known ahead of time, which is the whole reason we have things like proof. Again, you said:

    “anything that a type checker would catch will also be caught by pure functional testing, if the functional tests are reasonably comprehensive”

    They will not necessarily be caught, because you will not necessarily know the exact substitutions to make.

    But, the whole point of the disagreement is that, in my opinion, a modern type system only catches the tip of the iceberg, and it’s possible to write code to catch any portion of the iceberg you want, including the whole thing if that’s required.

    Again, this is wrong unless you either know all the errors in advance (it is a given that you don’t, since you’re attempting to discover them) or you write a type checker.

    No, I use Python. BTW, I found it interesting that you included C++ in the list of languages with “modern” type-checkers. The C++ type-checking system is held together with baling wire and chewing gum.

    Because that’s how far back these concepts go. Would you rather talk about System F or dependent types?

    If the tests are functionally comprehensive (at the lowest block level where code might be reused), and show no bug, then there is no bug.

    Look, you can make assertions like this, but the mathematics is simply that unless you know all your errors by some other means you cannot find all the errors by finite substitution, even for the class of errors caught by static type checkers.
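
    To make the point concrete, here is a minimal illustration (hypothetical code; a static checker such as mypy flags the bad attribute without executing anything, while the test below passes):

    def describe(tag):
        # The type-incorrect call hides on a rarely-taken branch.  A test
        # suite that never feeds a tag starting with "x-" passes cleanly,
        # while static analysis flags the misspelled method immediately.
        if tag.startswith("x-"):
            return tag.uppercase()  # no such str method; should be tag.upper()
        return tag.lower()

    assert describe("Route") == "route"  # "reasonably comprehensive", yet blind to the bug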

  107. It is the worst thing you could possibly do to foster open serious discussion.

    Was my comment rude? Probably. Patrick is wrong, not because he likes Python but because he made a specific assertion about the capabilities of unit testing versus static typing. Either he is trying to make it out that he didn’t make this claim, or he hasn’t sat back and had an honest think about the merits of it. Either way, this is more egregious to me than pointing out that he was wrong.

  108. @Phillips
    I haven’t run into any such problems with Python. Unless you slipped on a banana peel and accidentally wrote quotes around one of your integers, you should be fine. Maybe you’re right in theory but in practice it just doesn’t matter.
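
    And even the banana-peel case fails loudly at the point of use; a minimal illustration:

    count = "3"        # the banana-peel case: quotes around an integer
    total = count + 4  # raises TypeError immediately; nothing silently corrupts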

  109. I used to worry sick about variables not being declared and having static types in Python but the reality is that it’s only one aspect of the language and a relatively minor one at that.

    Not having static types can be a boon if you know what you’re doing and your code is carefully documented. I know this sounds silly, but it saves so much coding time not to declare variables, and to create them only where necessary. I think this avoids a lot of namespace pollution as well.

    Nevertheless, I am no expert like many of you. So I can only speak from my own experience. My programs are nowhere near complex enough.

  110. @Max E:

    Patrick is wrong, not because he likes Python but because he made a specific assertion about the capabilities of unit testing versus static typing. Either he is trying to make it out that he didn’t make this claim, or he hasn’t sat back and had an honest think about the merits of it. Either way, this is more egregious to me than pointing out that he was wrong.

    I haven’t run into any such problems with Python. Unless you slipped on a banana peel and accidentally wrote quotes around one of your integers, you should be fine. Maybe you’re right in theory but in practice it just doesn’t matter.

    And that’s the crux of the matter. I first asserted that, in practice, unit tests are often sufficient:

    And I assure you that if you had all that heartbreak and angst because of a few things that static typing could have caught, your unit tests really aren’t all that good.

    Now, my mindset when I dashed this off was not, as Roger insists on claiming, that “unit tests” per se will catch absolutely everything that static typing could. However, I still maintain that if you’ve been bitten enough times that you believe that you absolutely have to have static typing, your unit tests really aren’t that good.

    When Roger challenged this statement, I could see where that was going, so I made a simultaneously stronger yet more qualified statement:

    My comment was to the effect that anything that a type checker would catch will also be caught by pure functional testing, if the functional tests are reasonably comprehensive.

    But it was too late. He was already hung up on “unit tests” and “finite substitution”. Notice that he never once addressed anything I wrote about implicit type checking during other invariant testing. He probably thinks he has taught me something, but I already knew that writing too quickly invites abuse from assholes. Writing more carefully is not always the answer, though — it’s subject to one of those universal laws, like the one about how the increase in life expectancy from the 55 mph speed limit almost exactly equals the amount of extra time spent sitting in traffic.

  111. @Roger:

    Oops, my bad — for some reason I didn’t see your immediate reply.

    If the tests are functionally comprehensive (at the lowest block level where code might be reused), and show no bug, then there is no bug.

    Look, you can make assertions like this, but the mathematics is simply that unless you know all your errors by some other means you cannot find all the errors by finite substitution, even for the class of errors caught by static type checkers.

    No, I really mean this. If you’ve tested every possible input that could be used during functioning, then you’ve tested every possible input.

  112. @Roger:

    One more thing — although there are a couple of different definitions of “static type checker”, this whole thread started off because Jessica was running into very simple problems, and complaining about things that even compile-time checks in the C language would have caught. That was what I was first addressing.

    When you made the assertion that “A type checker is a miniature proof searcher, and there are many universally quantified claims that it can prove” I let that slide, because I believe things are headed this direction in languages, and wasn’t sure exactly what capabilities you are discussing.

    But you can’t seriously claim that, for example, the C++ type checker is remotely equivalent to a theorem prover. The things it finds at compile time are trivial. As for the things it finds at run-time: the fact that it found them at run-time makes it no better than letting some other random kind of exception happen. And the things that the C++ type checker can find at compile time that matter are trivially caught in the context of doing other tests.

  113. Notice that he never once addressed anything I wrote about implicit type checking during other invariant testing.

    Sorry, I’m not interested in hearing you blather about unit testing in a completely unconvincing and anecdotal manner. You made a false statement about type checking, and you ought to take it back instead of calling me an ‘asshole’ for insisting that (shock) you render the literal truth in your postings. Suffice to say, the overwhelming evidence of experience suggests that your opinions on the effectiveness of unit testing as a replacement for static analysis are well off the mark.

    No, I really mean this. If you’ve tested every possible input that could be used during functioning, then you’ve tested every possible input.

    The set of possible inputs is usually infinite. Given that functional testing is a form of black-box testing you cannot know which inputs to test. In a statically typed language the types form part of the functional specification, and the compiler tests that part automatically. Obviously, if you can look into the code and figure out how all the errors will occur then you can write your tests to uncover them. This is not functional testing.

    Maybe you’re right in theory but in practice it just doesn’t matter.

    Thank you for sharing this unsubstantiated assertion.

  114. You made a false statement about type checking.

    I already explained the context and what I meant.

    and you ought to take it back instead of calling me an ‘asshole’ for insisting that (shock) you render the literal truth in your postings.

    So I hurt your feelings by making what you perceived to be a false technical statement? Or by calling you an asshole after you persisted in calling me “wrong” after I clarified my position?

    Suffice to say, the overwhelming evidence of experience suggests that your opinions on the effectiveness of unit testing as a replacement for static analysis are well off the mark.

    What evidence?

    No, I really mean this. If you’ve tested every possible input that could be used during functioning, then you’ve tested every possible input.

    The set of possible inputs is usually infinite.

    Absolutely! I was wondering when you would get around to this, because you’re the man who is insisting on perfection in writing, and yet you said:

    I’ll make it easy for you: if you have a procedure that performs a type-incorrect operation on some strange edge case (the result of a bug, perhaps) then we can easily conceive of tests that are comprehensive functionally that will miss the case. A static type checker would alert you upon compilation. You are wrong.

    Look, if you want a conversation about the pros and cons of static vs. dynamic, I’m there. But if all you want to do is have a pissing contest about who made overstatements or mistakes, please tell me how the C++ typesystem throwing a runtime exception has the slightest thing in the world in common with a theorem prover.

  115. The set of possible inputs is usually infinite. Given that functional testing is a form of black-box testing you cannot know which inputs to test. In a statically typed language the types form part of the functional specification, and the compiler tests that part automatically. Obviously, if you can look into the code and figure out how all the errors will occur then you can write your tests to uncover them. This is not functional testing.

    Quote from a “demonstration” unit testing session involving Robert “Uncle Bob” Martin, who is (for those who don’t know) one of the most rabid supporters of unit testing in the world, and who has held the belief (and may still) that “it has become infeasible, in light of what’s happened over the last 6 years, for a software developer to consider himself ‘Professional’ if he does not practice test driven development”:

    RSK: “Maybe something will occur to us later. Right now, I think I see a bug. May I?” (Grabs keyboard.)
    … snip code …
    RSK: “Hmm. That doesn’t fail. I thought since the 21st position of the array was a strike, the scorer would try to add the 22nd and 23rd positions to the score. But I guess not.”
    RCM: “Hmm, you are still thinking about that scorer object, aren’t you? Anyway, I see what you were getting at, but since score never calls scoreForFrame with a number larger than 10, the last strike is not actually counted as a strike. It’s just counted as a 10 to complete the last spare. We never walk beyond the end of the array.”

    from here.

    So from my perspective, you’re simultaneously right and wrong.

    If you just look at the code, then yes, it’s black-box testing in the purest sense: provide only inputs and test only outputs (for appropriately broad definitions of input and output, for example including mock functions). However, in my opinion, the tests and the object feed off one another. You add tests which require code to satisfy them; you search that code for edge cases, which results in more tests, which require more code to satisfy them.

    If you don’t let the code and your experience suggest extra test cases when unit testing, then yes, you’d be better off with brainless automated checks (and I’d suggest that contract-driven development is probably better for you than even type checking), because you’re testing in a brainless, automated way.

  116. > Thank you for sharing this unsubstantiated assertion.
    You are absolutely welcome.

    Okay, what I should have said was, “in practice it just hasn’t mattered for me personally,” but honestly that proviso goes without saying. My substantiation consists of the fact that I have no non-trivial examples of dynamic typing causing problems; the absence of examples is exactly my point. (Unless you want a listing of all the code I’ve written where I *didn’t* encounter problems?)

    Static typing is not even remotely a cure-all, and in fact it sometimes serves to disguise other errors; here I’m thinking of uninitialized variables and the like. These can require very fallible logic checking in the compiler, or the use of special tools like Valgrind (which, by the way, is a form of finite-substitution test, oh the horror!) And making the memory allocator zero the RAM automatically only further disguises these errors when they exist in your code. With Python, you get this for free because variables don’t even exist until they are assigned a value, so uninitialized variables always fail noisily instead of causing subtle problems. I know this doesn’t directly address your point, but it does demonstrate that it’s not a clear-cut case of static typing always catching more errors.
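
    To illustrate the noisy-failure point, a minimal sketch (illustrative only):

    def running_total(values):
        # 'acc' does not exist until something assigns it, so the first
        # pass raises UnboundLocalError (a NameError) instead of summing
        # from whatever garbage an uninitialized C variable might hold.
        for v in values:
            acc = acc + v
        return acc

    running_total([1, 2, 3])  # fails noisily, right at the faulty line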

  117. Patrick,

    Please stop insulting Roger with the a-word. Yes, he has demonstrated that he can be difficult to get along with. I believe you will attest that I can be difficult to get along with, and I will attest that you can be difficult to get along with. Sauce for the goose.

    Yours,
    Tom

  118. Jessica Boxer wrote:
    I know you open source guys hate to hear this, but Microsoft solved this problem fifteen years ago with pre-compiled headers.

    Waitaminute, waitaminute! Linux does not have pre-compiled headers?! Then WTF are all those .o files cluttering up my /usr/lib doing?!

  119. @Tom:

    I freely admit I’m an asshole sometimes. If person (A) says something in one context, and person (B) questions them on it in a more general context, and person (A) goes back and explains, not only what they meant, and the context they meant it in, but also a little about their perspective on the general context, yet person (B) won’t let it go, then it gets tedious. I don’t think either you or I ever told the other one “It’s good to see you finally admitted that you were wrong” when no such admission was forthcoming, but I will freely admit that our little exchange shortly before this one put me in more of a foul mood when this exchange came along. So I will try to avoid that word, but I will note that in this instance, other more mild-mannered people issued rebukes as well.

  120. Patrick,

    > yet person (B) won’t let it go, then it gets tedious

    Letting go early on can surely prevent much tedium! A lesson I clearly need to learn. :)

    > I will note that in this instance, other more mild-mannered people issued rebukes as well.

    Yes, as I said, Roger can be difficult, although he is also mild-mannered. But I’m not sure they are more mild-mannered than you are. You are pretty mild-mannered. Maybe they are just more mild-mannered in this thread.

    Yours,
    Tom

  121. # Phil Says:
    > Waitaminute, waitaminute! Linux does not have pre-compiled headers?! Then WTF are all those .o files cluttering up my /usr/lib doing?

    1. Where did I say “Linux does not have pre-compiled headers”?
    2. A .o file is not a pre-compiled header file.

  122. Roger Phillips Says:
    > Was my comment rude?

    Sure, but the rudeness wasn’t the problem. You said effectively: admit you are wrong. As soon as you say that you have changed the conversation from an honest search for the truth to a pissing match to prove who wins. Or to put it another way, you have returned to third grade.

    Sorry, I have the guts to admit when I am wrong, I have done so on this blog many times. And I am magnanimous enough to allow someone who realizes he was wrong slip away, learn, and come back as an equal.

    I can’t quite get you. You are obviously EXTREMELY smart; what is wrong with just making your case?

    1. >I can’t quite get you. You are obviously EXTREMELY smart; what is wrong with just making your case?

      I’ve seen enough of Roger Phillips to form a hypothesis about this. I think he’s one of those ferociously bright semi-autists CS is stuffed with for whom everything tends to turn into a tremendously intellectual dick-size war. They’re not bad people, but they are chronically abrasive and difficult to work with. You have to know going in that it’s what you’re dealing with and accept that (a) they’re just not wired very well for human interaction, (b) the fact that they snap at you occasionally is basically just them and nothing to do with you, so (c) you shouldn’t take it too personally.

  123. 1. Where did I say “Linux does not have pre-compiled headers”?

    Well, you said that Microsoft has had this problem licked for years by supporting pre-compiled header files. When you said that, you implied that other development tools don’t support pre-compiled header files, which, as I pointed out days ago, is most certainly not true.

    2. A .o file is not a pre-compiled header file.

    Correct. Anyone wanting to learn about gcc’s support for pre-compiled header files can read about it in the online documentation.

  124. I do programming for fun, not as a job, so maybe I don’t get many of the subtle distinctions between static and dynamic typing etc. I just use fun tools and Python happens to be more fun than C++.

    Productivity-wise, I can say I’ve created more open-source applications since I learned Python, because I actually have _completed_ what I started. :-)

    I think points like that matter to a small extent. Especially when I don’t want to waste hours debugging a subtle pointer error in C or C++ or figuring out the best method to serialize an object when Python provides a one-line method to do it painlessly.

  125. @Hari:

    I do programming for fun, not as a job, so maybe I don’t get many of the subtle distinctions between static and dynamic typing etc.

    That’s because there is a continuum of differing functionality between what people think of as statically typed and dynamically typed languages. The wikipedia page is a reasonable starting place, but it’s just a starting place.

    There are probably a dozen or so different attributes that all get conflated together when people talk about these things. Off the top of my head: explicit vs. implicit variable declaration, explicit vs. implicit typing on the declaration, restriction of a variable to a single type or not, execution-time vs. non-execution-time checks, checks specified in the language definition vs. “common sense” checks (made by the compiler with high warnings, or at runtime, or with something like a lint program), implicit vs. explicit vs. disallowed type conversions, implicit vs. explicit vs. disallowed coercions, and type/memory safety (which does not even imply a requirement for explicit type conversions in all cases).

    As an interesting data point, when it comes to building a chip, there are two major competing languages used for the digital logic inside the chip. VHDL is a DOD-initiated language based on Ada. Verilog just sorta growed. It has some Pascalish begin/end roots, but no real type checking. It’s extremely easy to get a Verilog program to compile compared to a VHDL program, and extremely easy to write a Verilog program that compiles but doesn’t do what a careless perusal of the code suggests it would. Both languages are equally mature (the Ada RFQ came out from DOD in 1981; Verilog first appeared in 1984; before the mid-90s both languages were well entrenched).

    Now, most chips are created using Verilog. Here’s an example survey.

    Given that the cost of an incorrect chip description is exceedingly high, both in actual dollars for the mask set (which can exceed a million dollars in small geometries) and in lost time-to-market (build the mask, fab the chip, package the chip, test the chip == many weeks, perhaps 3 months), and given that there are some reasonably bright people in chip design, doesn’t it seem to make sense that they would bring the best possible tools to bear on the process?

    The answer is they do. But apropos the other side discussion about IDEs, about whether “integrated” vs. “discrete” is the way to go: most chip design flows have, of necessity, several discrete tools. (There may be an IDE, but it simply presents a convenient front-end for tools from different vendors, or even tools built in-house.)

    When it comes to the sorts of things that static typing usually catches, there are several excellent Verilog lint programs that can point to possible inconsistencies in your code. These can and do evolve independent of the actual synthesizer/simulator/language definition, and allow user tweaking of the results.

    One other thing about Verilog vs. VHDL is that (precisely because of the reduced verbosity of things like type checks) Verilog is somewhat easier to programmatically manipulate — to generate and to roughly parse for special application-specific checks. If it’s easier to write a code generator for it, then it’s easier to generate correct code in some cases, and if it’s easier to parse it, it’s easier to check it for things you care about.

    So, I agree that when it really matters you should have the best programs you can find or write do some reasoning about your code. But that’s not necessarily the compiler. Also, in a lot of cases, it doesn’t really matter. If hari does some programming for fun and sticks it out on the internet, is it a net gain to the world, or a net loss because there was a bug in it that affects one person, that the static typer would have caught if he had written it in C?

    1. >There are probably a dozen or so different attributes that all get conflated together when people talk about these things.

      And you left out the really important one! Garbage collection and unlimited-extent types.

      It’s true that this isn’t logically entailed by any of the “dynamic” traits you’re talking about (nor outright excluded by the “static” ones), but historically it tends to travel with “dynamic” ones. Given how common and severe memory-management bugs are, the presence or absence of GC is hugely important. I’ve often observed that pro-“dynamic” people in language flamewars seem to miss this and incorrectly attribute the productivity gains to the attributes you talk about, which are in reality much less important.

  126. And you left out the really important one! Garbage collection and unlimited-extent types.

    Well, I did mention memory safety. Garbage collection is one tool a language can use to help get there.

    I’ve often observed that pro-”dynamic” people in language flamewars seem to miss this and incorrectly attribute the productivity gains to the attributes you talk about, which are in reality much less important.

    It’s obviously very important, but IIRC one of the things starting this discussion was relative merits of C# vs. Python, and, though I’ve never used it, I understand C# is fully garbage-collected.

  127. # Morgan Greywolf Says:
    > When you said that, you implied that other development tools
    > don’t support pre-compiled header files,

    I also said I thought Roger was smart; that doesn’t mean I don’t think you are smart. On the contrary, you are obviously smart. My comparison was with Eric’s tool, not gcc’s capabilities. You just put me in a box and extrapolated based on your own biases.

    > Correct. Anyone wanting to learn about gcc’s support for pre-compiled header files can read about it in the online documentation.

    I think it is interesting that nearly every Windows C++ program I have seen uses pre-compiled header files, while I don’t recall a single gcc program that does (though I am sure there are some). Defaults, tools and culture count for something.

  128. # esr Says:
    > and incorrectly attribute the productivity gains to the attributes you talk about, which are in reality much less important.

    I think this is a pretty interesting insight. I might add that many of those who advocate for dynamic typing do so in absence of any experience with a garbage-collected, statically typed language. I’d also say Patrick is right on the money: static type vs. dynamic type is not black and white, and it isn’t gray either; it is a feature matrix. I also think that the ability to infer types is close to GC in terms of the expressive richness and data hiding of good languages. Consider this code fragment. (There is a good chance this will get messed up by WP, but I’ll do my best:)

    var femalesOverForty = db.Customers.Where(item => item.Gender == "f" && item.age > 40)
        .Select(item => new { item.name, item.zip })
        .OrderBy(item => item.zip);

    foreach (var item in femalesOverForty)
        Console.WriteLine("Name {0}, zip {1}", item.name, item.zip);

    This is all statically type checked, even though the types are never actually declared, in fact the types are extremely complicated.

  129. This is all statically type checked, even though the types are never actually declared, in fact the types are extremely complicated.

    As I wrote earlier, I definitely agree that languages like C# are helping to move the ball in the right direction. Explicitly declared yet implicitly typed variables, anonymous types, and some of the other features in MS C# not yet in the standard are a great trade-off of verbosity vs. functionality, and as Eric pointed out, these days automatic garbage collection is de rigueur.

    These are some of the reasons I quizzed Roger over his lumping of C++ with some of the more modern languages. Sure, you can bolt a GC onto it or use “smart pointers”, but either way is extra work, and the variable declarations are verbose, and templates are needed to get around the requirements for explicit typing and restricting a variable to a single type. It’s a mess, especially combined with all the syntax baggage caused by starting with C and adding on. The primary act of programming is reading code, and most C++ is not all that readable.

  130. Patrick, thanks for the long and detailed explanation on the different facets of this issue.

    The implications and consequences of each decision by the language, compiler or interpreter designer are a fascinating subject to read about.

  131. @hari:

    > Patrick, thanks for the long and detailed explanation on the different facets of this issue.

    You’re welcome, though, as they say, most of what I have written is probably either obvious or wrong.

    > The implications and consequences of each decision by the language, compiler or interpreter designer are a fascinating subject to read about.

    Yes, I find engineering trade-offs of all kinds fascinating — the intersection of creativity and constraints.

  132. I think he’s one of those ferociously bright semi-autists CS is stuffed with for whom everything tends to turn into a tremendously intellectual dick-size war.

    “Intellectual dick-size war” is not how I would describe it, though it is probably true in some cases (not all). It’s more like what I like to call Comic Book Guy syndrome (after a Simpsons character who unpleasantly fits the archetype). Certain people just bristle when “someone is wrong on the internet” and feel the need to set the record straight. This post may be an example of the syndrome on my part.

  133. Jeff,

    I’ve submitted a couple of possible improvements to your software below. Feel free to add them to your repository or reject them if not suitable.

    > Nearly all people just bristle when “someone is wrong on the internet” about something that presses one of their buttons and feel the need to set the record straight.

    Yours,
    Tom

  134. Well, I’d like to sidestep the flame-war and get back to genuflecting hero-worship. I think deheader is an awesome, simple, and elegant idea. Perhaps it’s not the be-all/end-all of making C/C++ code leaner/better, but it seems axiomatic to me that, all things being equal (i.e. compile+run works with or without some header in some cpp file), let’s take the unnecessary header out!

    SO, I downloaded a copy of deheader, and started some experiments on the system I work with, which comprises millions of LOC++, “organized” in O(100) libraries, built over years by an ever-changing body of O(100) developers. At this point, the whole thing takes hours to compile from scratch.

    I checked out all the files in one smallish library (20 files, containing 172 #includes), figured out just the right -m[ake] command to give deheader, and let it rip!

    It took about half an hour, and in the end it decided to remove 109 includes. BUT, deheader was a little overambitious; I had to restore 6 includes to 5 files to get the library compiling again (same platform, same environment, same make command). I don’t know Python enough to spend a lot of time debugging what might have gone wrong, but here are three “feature requests” I’d like to see in deheader, to bring it to a state that I could recommend to my O(99) developer colleagues to trim down all of our code:

    * More reliability (fix whatever bug is letting deheader think that removing an include is OK, but then when I try to compile it later, it fails)
    * A new switch (-c?) that, instead of -r[emoving] includes, would -c[omment] them out (see the sketch after this list), i.e.
    /*#include REMOVED by deheader, user=ruberad, date=12/16/10 */
    * A diary or logfile option. I’d like to watch just filenames and spinning batons in the executing shell, but have a growing, verbose log that I can tail -f in another shell or analyze later, showing me at each step what file is being deheaded, what set of #includes is being excluded, what make command is invoked, what the output of the make command is, and what decision deheader makes based on that output.
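
    For the -c switch, the transformation I have in mind is roughly this (a hypothetical sketch, not actual deheader code):

    import datetime, getpass

    def comment_out(include_line):
        # Turn an #include into a stamped C comment instead of deleting it,
        # so later readers can see what was removed, by whom, and when.
        stamp = "user=%s, date=%s" % (getpass.getuser(),
                                      datetime.date.today().strftime("%m/%d/%y"))
        return "/*%s REMOVED by deheader, %s */" % (include_line.rstrip(), stamp)

    print(comment_out("#include <stdio.h>"))
    # e.g. /*#include <stdio.h> REMOVED by deheader, user=ruberad, date=12/16/10 */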

    1. >* More reliability (fix whatever bug is letting deheader think that removing an include is OK, but then when I try to compile it later, it fails)

      No perfect solution to this is possible, for various reasons described on the manual page. The best we can hope for is an acceptably low rate of false positives, which it seems to have achieved for you. I am working on improving that, by adding an internal dependencies table derived from the Single Unix Specification and field observation of some common cases.

      >* A new switch (-c?) that would, instead of -r[emoving] includes, would -c[omment] them out, i.e.

      That’s a good idea. Goes on the to-do list.

      >A diary or logfile option.

      I actually generate your “diary” output already at verbosity level three.

  135. Oh, and by the way, for this single library (once I repaired the includes that shouldn’t have been deheaded), deheader reduced compile times by ~15% on Solaris and ~10% in windoze.

  136. >No perfect solution to this is possible, for various reasons described on the manual page.

    Hmm, I understand how cross-platform issues can pop up, but I don’t understand how deheader can compile with the command I give it, and think removing a header yields success, and then I compile afterwards with the same command, and it fails.

    I will experiment with CFLAGS for warnings and other suggestions from the manpage; if I could get it to be 99% reliable in the same shell, I’d be very happy!

    >I actually generate your “diary” output already at verbosity level three.

    I’ve paged through that output, and I see a bunch of build command outputs, and a bunch of “deheader:” lines at the end. I don’t know if it’s a STDOUT/STDERR asynchronicity thing, but it would be more helpful to see each test compile surrounded by “deheader: testing compile of foo.cpp without <some.h>” and “deheader: test compile of foo.cpp without <some.h> SUCCEEDED|FAILED”.

    If you had written it in Perl instead of Python, I could maybe have tuned it up myself! In any case, thanks again, I’ll keep working with it, and checking back.

    1. >Hmm, I understand how cross-platform issues can pop up, but I don’t understand how deheader can compile with the command I give it, and think removing a header yields success, and then I compile afterwards with the same command, and it fails.

      Er. Quantum fluctuations?

      Seriously, when I’ve analyzed cases like this it has turned out to be because the regular build system was doing something deheader cannot know about, like setting -D or -U flags.

  137. So I hurt your feelings by making what you perceived to be a false technical statement? Or by calling you an asshole after you persisted in calling me “wrong” after I clarified my position?

    That is not what I said.

    What evidence?

    If you don’t believe that large programs frequently contain bugs catchable by static analysis, then we’re not using the same software.

    Absolutely! I was wondering when you would get around to this, because you’re the man who is insisting on perfection in writing, and yet you said:

    Okay, let’s do a case split on the statement “functionally comprehensive is understood to mean the same thing as functionally exhaustive”. Case true: you can almost never be functionally exhaustive, so your whole argument falls apart. Case false: the block of text you quoted is not in contradiction with my statement about the cardinality of the usual domains.

    Look, if you want a conversation about the pros and cons of static vs. dynamic, I’m there.

    I don’t see the point of such an “argument” other than people pretending that their personal preferences are derived from some reasoned stance on type systems. You’re productive with Python; good for you. You will only understand the value of static typing when you make the effort to seriously leverage its advantages. There is no argument that proves its usefulness in practice.

    But if all you want to do is have a pissing contest about who made overstatements or mistakes, please tell me how the C++ typesystem throwing a runtime exception has the slightest thing in the world in common with a theorem prover.

    I haven’t read this thread in a while, so maybe you’re referring to some more specific comment, but this seems to be related to the claim that the C++ type checker is basically a miniature theorem prover. You then seem to have jumped off from there into a confused point about runtime exceptions. The C++ type checker (implemented correctly) is essentially an automated (and complete) theorem prover for some specific class of theorems. You may regard this class as trivial. Nonetheless, it includes theorems for which there is no proof by finite substitution of normal forms.

  138. Okay, what I should have said was, “in practice it just hasn’t mattered for me personally,” but honestly that proviso goes without saying.

    Sorry, I prefer not to conflate my “personal experience” with the truth.

    Static typing is not even remotely a cure-all, and in fact it sometimes serves to disguise other errors; here I’m thinking of uninitialized variables and the like.

    Static typing does not disguise uninitialized variables. You’re confusing type checking with name analysis.

    These can require very fallible logic checking in the compiler, or the use of special tools like Valgrind

    This is not a function of static type checking. Again, you’re pointing out a case that the type checker cannot handle in general. However, there is no reason it cannot degrade gracefully. Better yet, if you’re worried about this kind of thing you ought to be programming in a pure functional language.

    And making the memory allocator zero the RAM automatically only further disguises these errors when they exist in your code. With Python, you get this for free because variables don’t even exist until they are assigned a value, so uninitialized variables always fail noisily instead of causing subtle problems.

    It is not true that you cannot have subtle problems due to the misplacement or omission of assignments. I suggest you investigate Python’s scoping “semantics”, if you can call them that. Maybe they fixed this in Python 3. I’m not holding my breath.

    I know this doesn’t directly address your point, but it does demonstrate that it’s not a clear-cut case of static typing always catching more errors.

    I never said this, nor would I. Obviously, a good suite of tests is important for any program. I would even test a program I had “proven” correct. Unit testing and static typing are not mutually exclusive or even in competition.

  139. If you don’t let the code and your experience suggest extra test cases when unit testing then yes, you’d be better off with brainless automated checks(and i’d suggest that contract driven development is probably better for you than even type checking) because you’re testing in a brainless automated way.

    I believe you have missed the point of this discussion. Unit testing and static typing are not in competition.

  140. >If you’re not worried about precision you could mine the manpages of multiple systems for the header requirements for different calls, and ensure the smallest set of headers covering all systems are present. That way the tool could double as a rough checker for header conformance.

    Roger Phillips wrote this earlier in the thread. It is exactly the approach I have ended up taking, and is implemented in the brand-new 0.5 release.
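
    The shape of the idea, boiled down to a toy (illustrative only; the table here is a hypothetical fragment, not the one that ships in 0.5):

    import re

    # Map calls to the headers the Single Unix Specification says supply
    # them; any header whose calls appear in the source is never
    # test-deleted, killing a whole class of false positives up front.
    REQUIRED_BY = {
        "printf": "stdio.h",
        "malloc": "stdlib.h",
        "strlen": "string.h",
    }

    def required_headers(source_text):
        calls = set(re.findall(r"\b([A-Za-z_][A-Za-z0-9_]*)\s*\(", source_text))
        return set(hdr for call, hdr in REQUIRED_BY.items() if call in calls)

    print(required_headers('int main(void) { printf("%d\\n", (int)strlen("x")); }'))
    # -> {'stdio.h', 'string.h'}: leave those two includes alone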

  141. This tool doesn’t actually work at all. Imagine having only one job and doing it *this* badly.

    1. >This tool doesn’t actually work at all. Imagine having only one job and doing it *this* badly.

      There’s a regression-test in the distribution that demonstrates correct operation. Are you sure you’re applying it correctly?
