Defect attractors

There’s a phrase I’ve used on this blog more than once that I had reason to Google just now and found that (to my surprise) the top hits are mostly my writings. It is “defect attractor”.

In this post I’m going to explain why I think this is an important concept that needs to be in the toolkit of every software engineer, and talk about the practice it implies.

The first thing to know about “defect attractor” is that I didn’t coin it myself. Googling turns up the first use I know about, by Les Hatton. In his work on the error statistics of large C++ programs, Hatton described class inheritance as a defect attractor.

I’m a fan of Hatton’s work. He has breadth of perspective. He asks interesting questions, finds sharp answers, and writes about them lucidly. I understood instantly what he meant by that phrase and seized upon it with glee.

A “defect attractor” of a program, language, API, or any other kind of software construct is a feature which, while possibly not bad in itself, spawns defects in the design or code near it.

The concept unifies a large class of things experienced software engineers know are problems. Portability and backward-compatibility shims are defect attractors. Global variables are defect attractors. Special, corner, and edge cases are defect attractors – in fact, when we complain about something being a special case we are usually expressing unease about it being a defect attractor. “Special case” is the what of our complaint, defect attractor is the why.

Hatton was quite right: class inheritance is a defect attractor. This is true on the level of language design, where it spawns questions like how to handle diamond inheritance that don’t have one good answer but multiple bad ones. It’s also true in OO codebases, where Hatton found that defects cluster noticeably around code using inheritance. Language designers have reacted sensibly by moving to trait- and interface-based object systems that don’t have inheritance – Go is a notable recent example.
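
To make the diamond problem concrete, here is a minimal and purely illustrative C++ sketch (the class names are invented for the example):

struct Document { int id = 0; };
struct Printable : Document { void print() {} };
struct Storable  : Document { void store() {} };

// The diamond: Invoice contains two separate Document subobjects
// unless both bases are declared virtual (which has costs of its own).
struct Invoice : Printable, Storable {};

int main() {
    Invoice inv;
    // inv.id = 7;          // error: ambiguous – which Document::id?
    inv.Printable::id = 7;  // the workaround is itself a defect attractor
    return inv.Printable::id;
}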

More defect attractors: endianness-sensitive data representations, binary wire and file formats in general, and floating point. These are things where program after program using them makes the same dumb mistakes. Experience doesn’t help as much as it should.
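
As an illustration of the endianness trap, here is a hypothetical sketch (C-style C++) of reading a 32-bit length field off the wire:

#include <cstdint>
#include <cstring>

// Wire format: 32-bit length in network byte order (big-endian).
uint32_t read_length_wrong(const unsigned char *buf) {
    uint32_t n;
    std::memcpy(&n, buf, 4);   // silently wrong on little-endian hosts
    return n;
}

uint32_t read_length_right(const unsigned char *buf) {
    // Assemble the value byte by byte; correct on any host.
    return (uint32_t)buf[0] << 24 | (uint32_t)buf[1] << 16
         | (uint32_t)buf[2] << 8  | (uint32_t)buf[3];
}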

In C: pointer arithmetic. Casts. Unions and type punning. And of course the Godzilla of defect attractors, manual management of dynamic memory allocation. Experienced programmers know these are going to bite them on the ass and that much of the labor of C programming is not the expression of algorithms but mitigation attempts to blunt the attractors’ teeth.
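
For instance, here is a hypothetical sketch (C-style, also valid as C++) that packs a classic off-by-one allocation, a one-byte buffer overrun, and the usual manual-memory burden into a few lines:

#include <stdlib.h>
#include <string.h>

char *duplicate(const char *s) {
    size_t n = strlen(s);
    char *copy = (char *)malloc(n);   /* defect: no room for the terminating NUL */
    if (copy == NULL)
        return NULL;
    memcpy(copy, s, n + 1);           /* writes one byte past the end of the buffer */
    return copy;                      /* and every caller must remember to free() it */
}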

In any language, the goto statement is a famous defect attractor. So are text-substitution macros in languages that have anything resembling a preprocessing stage.
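
The canonical text-substitution trap, as a hypothetical sketch:

#include <stdio.h>

#define SQUARE(x) x * x                 /* text substitution, not evaluation */

int main(void) {
    int n = 3;
    printf("%d\n", SQUARE(n + 1));      /* expands to n + 1 * n + 1, i.e. 7, not 16 */
    printf("%d\n", SQUARE(n++));        /* n++ * n++: n modified twice, undefined behavior */
    return 0;
}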

Once you’ve grasped what a defect attractor is, it’s a short step to good practice: stay the hell away from them! And when you see a known defect attractor in code you’re auditing, go to high alert.

Slowly we’ve learned this about some individual defect attractors like gotos. The consciousness I’m trying to raise here is that, as engineers, we should be more generally aware of what kinds of features and techniques defects cluster around; we should know to avoid them and to be suspicious when we encounter them.

Clarity of language promotes clarity of thought. I want this phrase to spread because it’s clear. Thank you, Les Hatton.

I’m sure my commenters will have a good time pointing out defect-attractor classes I’ve missed. Just try to keep in mind that “things I don’t like” and “things that cause defect clusters” aren’t identical, eh?

254 comments

  1. Yeah, I’ve slowly learned that most of what you listed are major defect attractors.

    Inheritance being generally bad was something that took me a long time to get. I think it has to do with when I went to college and learned how to write programs of more than trivial complexity, and Java was really in vogue. The one place I still see some value in inheritance is a lot of graphical programming — UI toolkits and such. Beyond that, I’ve scarcely seen the need to use it in many years. I’m also super wary of languages that are 100% object oriented, like Java and C#, and I’m much more attracted to OO being a feature you can use in the few specific cases it makes sense, like in Python.

    1. Just a minor aside. The idiom is “en vogue”, not “in vogue” as you wrote. It’s better English and proper French when you write en vogue.

    2. The philosophy behind inheritance was to avoid repetitive switch-case statements. But at some level that was the philosophy behind the whole OOP paradigm.

      E.g. you have a function executed when the user deletes a document: if it is an invoice document, do this; if it is a credit note document, do that. You have a function that is executed when the user modifies a document: if it is an invoice document, do this; if it is a credit note document, do that.

      So they came up with encapsulation so that maybe we could bundle the data representing the document with the functions that access it, so we’d have Invoice and CreditNote documents with their OnModify and OnDelete methods.

      Inheritance and polymorphism, the other two principles, came from the idea that while the Invoice document and the CreditNote document are different things, they also have similarities. Both will have a customer, lines, a sum etc. They are not entirely different animals.

      So, they thought, let’s just make a generic concept, a Document class, which will be their common ancestor. And they inherit stuff from it and modify it as they see fit.

      This is an immensely good fit for how humans just love categorical thinking. At least schoolteachers do. Much of my schooling was learning these categories like the dolphin is a mammal and mammals are vertebrates etc.

      Without all these cladistics, it is a good question why one would even use OOP. Just to encapsulate?

      Of course these trait-and-interface based things follow a different philosophy: just give me something that acts like a dolphin, it has to be able to jump through hoops, it has to look streamlined, but does not actually have to be a dolphin. Just something that can do the job I would expect a dolphin to do. And encapsulation still makes sense. Because data is what the dolphin is and methods are what the dolphin does.

      Or not. In that kind of design by contract, you would not expose the data to another programmer anyway, only the methods. The interface is methods, not properties. It is up to you what properties to have as long as you supply the methods. And then why even use OOP? The interface and the class implementing it are nothing but a collection of methods; it could just as well be done with a naming convention…

  2. It follows that static code analysis tools are a way to crystallize our knowledge of defect attractors and stay away from them.

    With the obvious drawback that newcomers only know that they should not do X and Y because “it’s not good practice” or even “it’s not preferred” as opposed to understanding fire burns by getting close and noticing it’s hot… which leads to them getting burned when working on a greenfield project – or a brownfield one where the tools aren’t part of the chain.

    1. >With the obvious drawback that newcomers only know that they should not do X and Y because “it’s not good practice” or even “it’s not preferred”

      If you have to say something like that, “defect attractor” is a better term. It doesn’t just say “there’s a known problem here”, it says what the reason for avoidance is.

      I think Hatton was riffing off the concept of an attractor in a state space when he coined it. I liked that reference, too. There’s a kind of poetry to it.

  3. You say that “binary wire formats in general” is an example. I can understand why you might say that (e.g. they might be harder to debug). But where does that leave something like HTTP/2? (Which, regardless, de facto requires encryption.)
    The HTTP/2 FAQ says:
    “Binary protocols are more efficient to parse, more compact “on the wire”, and most importantly, they are much less error-prone”

    How respond you?

    1. >How respond you?

      I agree with the efficiency and compactness claims. I disagree violently about “much less error prone.” At minimum we can say that minimizing the complexity of the logging and diagnostic tools around a wire format is good for keeping long-term defects down, and textual wire formats clearly do that.

      Sometimes efficiency is so important that you have to use a binary wire format. But never fool yourself about what you’re trading away when you do that.

      1. I think that the intention with HTTP/2 in particular is that it will be less error prone than HTTP/1.x, as there is meant to be one way of doing things. Moreover, you won’t have that silly issue with line endings (LF/CR vs LF only), case, etc.

        As for compactness, surely, generally speaking, a text format with compression (presumably optional for debugging) is going to be more compact than a binary format (with or without compression)?

        Personally, I’m not going to start writing any wire formats any time soon.

        I think that binary formats for files might also be a potential defect attractor. Comparing something like the old MS Excel formats to ODS (OpenDocument Spreadsheet), I’d pick the second any day.

        1. >As for compactness, surely, generally speaking, a text format with compression (presumably optional for debugging) is going to be more compact than a binary format (with or without compression)?

          Well, yeah, but if you compress a text format you lose some of the advantages of textuality.

          >I think that binary formats for files might also be a potential defect attractor.

          Oh hell yes. In fact I think I’ll add that.

            1. The defect attractor concept neatly encapsulates one of my biggest systemd complaints: that its added complexity provides more places for bugs to hide. Systemd is a defect attractor for devops. But apparently, its advantages outweigh even that major drawback.

          1. > if you compress a text format you lose some of the advantages of textuality.

            Not really, because sticking
            | gunzip -c | tee -a log-that-bitch | gzip -c
            or equivalent in the pipeline somewhere isn’t all that difficult.

            You just have to have a compression library that’s been thoroughly tested as bullet-proof. This gives a compressed text format the main advantage of binary formats while retaining the ease-of-inspection of a purely textual format.

        2. Recently, I tried unzipping an ODS and looking at the XML within. It was textual in name only. The whole document was on one physical line.

          Real textuality requires line breaks in the right places, so that the resulting serialization is (1) readable in a text editor or viewer without horizontal scrolling, and (2) processable with line-oriented tools.

          1. Likewise, I’ve occasionally had need to go into the guts of .docx files, which are structured as all-one-line XML. Good lord was that hard to parse the first time. (I’ve taken to manually using find-and-replace to add line breaks around each block, just for my own sanity – it’s a good thing this is not a task I need to perform often.)

              1. I don’t know what that is, so I couldn’t tell you (I’m probably in the 10th percentile of tech-savviness on this blog – I’m a mediocre coder, and avoid Linux). Usually I’m inspecting the XML to understand, rather than to directly modify, so I work on copies – under those circumstances, there’s not much worry about corrupting the file, so it’d probably be fine.

                A quick Google suggests that it just inserts whitespace, so doing a quick find-and-replace on whitespace outside of tags when you finish should presumably stitch it back together neatly?

                1. I think xmllint (it’s a Linux tool; I don’t know if there’s a Windows version) has an unformat setting, but I don’t know; never tried it. What it does when you specify --format is make it nice and pretty, with indentation corresponding to levels of semantic depth in the tree. Makes the doc a hell of a lot more readable.

                  What Word should do is take the pretty-printed version and process it just as though the whitespace wasn’t there. This is Microsoft, however…

                  1. Some years ago I was writing Ruby code to interop w/ C#, with both ends signing and validating XML payloads.

                    Spent some time bashing my head against it until I realised that Microsoft had interpreted XML canonicalisation specs differently to the Ruby folks (and pretty much everyone else). So the reason for the mismatching signatures was that the ‘canonical’ XML was different on each system …

                    So, yeah, it wouldn’t surprise me if Word didn’t do the right thing here.

                  2. I just tested it, and inserting whitespace didn’t cause any observable change to the Word doc.

                    Unfortunately, this is rather impractical for the usage where I was doing this – it’s this ancient, gnarled document that’s about 170 pages long, where we make constant edits in Word, and where several thousand copies are stored in various places on an annoyingly size-limited shared drive at the office. Also, the master is a .doc, not a .docx – I translate to .docx to examine the structure, but due to weird compatibility concerns with outdated macros and different pieces of third-party software, we specifically need to stay on Word 2010 with .doc format.

                    Industry is scary.

              2. I may be mistaken but I think xmllint may insert significant whitespace where none was intended.

                Consider this HTML example: <p><b>foo</b><i>bar</i></p>. If not specifically taught about HTML block and inline tags, xmllint will probably format it like this:

                <p>
                <b>foo</b>
                <i>bar</i>
                </p>

                and a web browser will interpret that as bold-foo, space, italic-bar.

      2. As it turns out, Postel’s Law is itself a defect attractor, which is part of the justification for HTTP/2’s wire protocol being not only binary, but also specified in detail; any client or server that doesn’t get the details right is considered out of spec. This ensures uniformity across implementations and helps client and server writers avoid nasty surprises.

        1. One of the most subtle defect attractors in history, and if anyone was ever to write an extended discussion of them, this is an example that would be all-but-mandatory. It only shows itself as a defect attractor at very large scales. Arguably, in many cases, a larger scale than you probably need to worry about. But when you get to that scale, oh boy, is it ever a defect attractor.

          1. Well, it depends on your goals.

            In the case of the web, abiding by Postel’s law made it easy for average people to slap together a web page (admittedly, often with really crappy HTML), at the expense of making web browsers more complicated (to ingest said crappy HTML and try their best to produce something readable for the user).

            Is the purpose of the web to make it easy to publish information, or to make it easy to write a web browser?

            I’d argue for the former.

            Similar observations could be made about RFC-822 email. I’m old enough to remember when the X.400 people were sneering and laughing at the RFC-822 people.

              1. Evidence that Postel’s law is the primary cause of the security flaws (rather than, say, using languages that are prone to buffer overruns)?

                Considering that web browsers, by their very nature, are exposed to every attack that can possibly be flung at them, I actually don’t think their track record is all that bad.

              2. I suspect most of those security flaws have to do with turning the web browser into a multi-language run-time engine for Turing complete languages (either natively or through plugins) rather than rendering slightly mangled HTML.

                There is a difference between “Be liberal in what you accept” and “Be a f*g whore and do everything that comes in off the wire”.

            1. Is the purpose of the web to make it easy to publish information, or to make it easy to write a web browser?

              This is precisely why Postel’s law is only a trap at scale. At first, that goal is accomplished. But over time, as the browsers each developed deeply different ways in which they were liberal with what they expected, it became increasingly difficult to write pages that worked the same in each browser, and it was getting worse over time.

              If you want to argue that the web had to start that way, I wouldn’t fight it, but it also is the case that eventually it had to stop being that way. HTML5 is now not liberal in what it accepts, and thus no longer something that can be cited as an example of a simple, straightforward success of Postel’s law. It simulates it by writing what is technically a conservative algorithm that looks like it is liberal based on decades of experience, but is in fact clearly specified and unambiguous.

              As my phrasing implies, I would have no problem calling it a complex and nuanced success of Postel’s law. I find it plausible that it was a net win at the beginning, but something it had to abandon at scale. (A very, very large scale!) My position isn’t so much that Postel’s law is “wrong”, but that it isn’t a “law” and applies less often than is supposed, and that protocol designers need to be aware of all the issues involved to make correct engineering decisions, including the fact that very, very strict protocols are often the correct decision.

              As it happens, I’m fighting through this right now. I’m creating a protocol in which I am being very strict about what gets accepted. But I also keep all the failed messages around, so we can see them, see why they failed, and potentially even correct them and use them later, which for this application is a necessity. Simply being straight-up “liberal in what I accept” would lead to disaster in, oh, probably about a year, and unfortunately it would be a disaster that involves large sums of money, and a likely trashing of a reputation I’ve built up carefully over the years. I need a richer understanding of the space here to drive the solution correctly.

              1. > [I]t became increasingly difficult to write pages that worked the same in each browser,

                That has not been my experience at all.

                At one time, you had to browser-sniff for all but the most simple pages.

                That’s largely a thing of the past now (IE was the last holdout. Edge, by comparison, is pretty good).

                This is true regardless of whether the page happens to conform to HTML5.

        2. >As it turns out, Postel’s Law is itself a defect attractor

          No, I don’t think so. I think people claiming that are victims of a prominence fallacy, overweighting the cases where they’ve seen a problem while underweighting lots of cases where Postel’s Law avoided problems so effectively that they failed to notice it happening.

        3. There are two separate issues:

          1) Binary
          2) Specified in detail.

          The *latter* is where the uniformity across implementations comes in.

          I’m not educated enough to know whether binary wire protocols are inherently good or not, but the notion that they are *inherently* less error prone is crazy.

          1. Goals depend a bit on what your definition of a defect is.

            You could hit goal #2 with a spec like “the name of the object is encoded in 8 ASCII bytes from offset 0 through 7 of the structure.” But then you utterly fail when people want names full of spaces, emoji, and other non-ASCII characters. In a binary world, now you would need two well-specified formats. In a text-based world, a single spec can cover both cases, and maybe some existing implementations already work without being updated.

            With a text-based format, a lot of encoding details are natural for humans to guess, so there’s a wide range of possible inputs that fit in the same incomplete encoding spec (even if some of them were not intended by the author). People retroactively solve problems with old text specs, e.g. UTF-8 encoding to rescue ASCII-based text data formats from the existential threat of UCS-2.

            With a binary format, the only possible expressions are those envisioned by the spec’s author. By now, we all know how (not) good spec authors are at predicting the future.

  4. I quite like your statement, “Clarity of language promotes clarity of thought.”

    I made a comment a few days ago that a good developer will write well (meaning ‘clearly’ in this case) in at least one natural language. I honestly believe that Clarity of Language and Clarity of Thought are a yin and a yang – they promote each other, and lack of either will be to the detriment of the other.

    1. That’s an interesting idea, and one I hadn’t considered. I can’t think of any counterexamples, either.

    2. >I quite like your statement, “Clarity of language promotes clarity of thought.”

      Straight outta Korzybski. Read up on General Semantics sometime.

      1. My native language isn’t English; when I did the GMAT (the Graduate Management Admission Test, an entrance test for MBA programs), most questions were about understanding and how to clearly express oneself.

        I had no problem with this, but since then I’m attentive to the way would-be managers and would-be engineers express themselves, and I see a clear correlation between clarity of language and clarity of concepts.

      2. Related: learn a foreign language either only on a very basic level or very well. There is a dangerous intermediate level where you think you’ve got it, native speakers think you’ve sort of got it, and yet you will get nuances wrong in ways that either give the impression you don’t really know what you are talking about or that you are being deliberately rude.

        1. Yeah. Stammering out “¿Dónde están los baños?” isn’t likely to make anyone think you speak Spanish. Placing your Mexican meal order in reasonably well-pronounced Spanish at a normal speaking speed may well get you a flood of Spanish back.

  5. Can we say the use of regular expressions is a defect attractor? I find the whole concept immensely messy, and it makes for unreadable code.

    Any examples where the programming language itself is a defect attractor? :-)

      1. My principal contact with C++ was when I was at school and we were taught Turbo/Borland C++ and ended up using classes as a more refined struct. Even our teachers weren’t too familiar with advanced OOP and C++ was presented more like an “advanced version of C”. I haven’t done a lot of C++ programming since then, and hardly done any programming in modern idiomatic C++.

        1. C is much less a defect attractor in the work I do day in and day out than C++ would be. I can grasp C, at least. C++ is too big to hold in the mind.

          1. C++ is a nice tool for expressing programs composed of C primitive operations, enabling a single experienced developer to rapidly write large amounts of C code in a kind of machine-oriented shorthand.

            ++a; // navigate a tree structure, lazily load pages as required, check boundary conditions on data values, retrieve one data item and store it in 'a'; on errors, jump 12 levels up the call stack, executing 30 destructors along the way, terminate a thread or two and email an admin. Unless this line of code is in a template, in which case you better hope you and whoever instantiated this code understand all the parameter class invariants in the program at once.

            It’s pretty terrible in the hands of a developer who doesn’t understand the implications of the equivalent C code, or who didn’t write (or doesn’t remember) the support code hidden in the syntax. Even though it’s not necessary to write the boilerplate code, it’s still necessary to mentally model everything it does.

          2. Can you grasp where all the undefined-behavior landmines are?

            Granted, C++ has those too, but in modern C++ they are somewhat avoidable. And Rust fixes most of them once and for all.

            1. I can grasp C quite well, considering I do it day in and day out. And yes, my code has more than a few comments of the form /* only you can prevent nasal demons */.

              I can’t grasp enough of C++ to have any feel at all for what’s undefined behavior.

              And I’m not able to use “modern C++”. The one project where I’m stuck using C++ is barely able to use C++11. I’m not about to try rewriting 1.2 MLOC to take advantage of “modern C++” when one major effect would be to destroy our ability to sync with upstream code, some of which is 15 years old.

              Rust is right out.

                1. Heh. No, it stems from a comment that doing something undefined lets the compiler do anything, including making demons fly out of your nose. See the Jargon File entry for a fuller explanation.

      2. I’d call C++ a defect in itself, in addition to attracting all manner of defects.

        Its primary appeal is to those who believe that memorizing the needless details of a train wreck of a language is the same thing as skill in coding.

        A C++ “expert” is a pathetic character, full of pride in knowing every molecule of the dungheap on which they reside.

        1. What a colorful characterization.

          Nice to know the last 25+ years of my life is a dungheap.

          Thanks for clearing that up. I shall repent.

    1. >Can we say the use of regular expressions is a defect attractor?

      I don’t experience them that way. This doesn’t necessarily mean you’re wrong, it means we’d have to collect statistics on comparative defect rates to resolve the question.

      >Any examples where the programming language itself is a defect attractor? :-)

      C++. PHP. Javascript. Visual Basic. And, though I regret to say it, Perl.

      What do all these have in common? A language design will be a defect attractor when it’s so full of special cases, exceptions, and irregularities that holding the programming model in your head correctly is difficult.

      You avoid this problem by having few keywords, regular behavior, and strong orthogonality. I could name two common languages that I think do exceptionally well at this, but I expect the resulting arguments might derail the thread.

      1. Would you agree that using complex regular expressions instead of a simple tokenizer/parser system is a defect attractor, rather than generalizing the concept of regular expressions?

        1. >Would you agree that using complex regular expressions instead of a simple tokenizer/parser system is a defect attractor, rather than generalizing the concept of regular expressions?

          It depends. Sometimes a simple parser is the right tool, sometimes REs are. I’ve refactored across that divide in both directions.

    2. Even though I’m a fan, perl is widely considered to be a “write-only language”, and I must agree. I can look at old code and wonder what could have possessed the author to have written such garbage, and then realize I wrote it.

      1. Comments are so you never have to say “why did that idiot write it that way?!” about yourself.

      2. >Even though I’m a fan, perl is widely considered to be
        >a “write-only language”, and I must agree.

        In my experience, any language can be “write only”. It’s a matter of the degree of discipline required to not write “write only” code.

        This is one of the reasons why limiting the set of keywords helps reduce opportunities for defects.

        In C, for example, you might write:

        if (x != y) { fprintf(stderr, "Warning: x (%d) != y (%d)\n", x, y); }

        While in Perl, you could write it that way, or you could write it:

        ( $x == $y ) or warn "Warning: \$x ($x) != \$y ($y)\n";

        I, too, am a fan of Perl, but my main language is C (for work reasons). I confess, over time, I have gotten sloppy when writing in Perl, especially things like abusing the logical-or operator for flow control.

  6. Thanks for this. As an amateur programmer I frequently find that the discussions of programming are a little above my head. This article really hit my particular sweet spot; something I both don’t know and can put to immediate use with the simple scripting languages I currently use.

  7. A related concept I often use is the following pair of terms:
    Pattern: a usual solution to a usual problem.
    Anti-pattern: a solution which, with the benefit of experience, turns out to promote small cumulative conceptual problems over time, and should be actively avoided, except in a few specific cases where it is objectively the best solution.

  8. > In C: pointer arithmetic. Casts. Unions and type punning.

    Implicit type conversion in general.

    Multiple kinds of equality test (e.g., Javascript, where you have == and ===, or the Lispy languages, most of which have a whole zoo of equality tests).

    Switch-like structures are prone to errors when there’s fall-through, or when the compiler does not enforce the presence of a default clause to ensure exhaustive handling of all possible cases.
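
    A hypothetical sketch showing both traps at once:

    #include <stdio.h>

    enum doc_kind { INVOICE, CREDIT_NOTE, RECEIPT };

    void on_delete(enum doc_kind k) {
        switch (k) {
        case INVOICE:
            printf("archive invoice\n");
            /* missing break: falls through and also voids the credit note */
        case CREDIT_NOTE:
            printf("void credit note\n");
            break;
        /* no default and no RECEIPT case: deleting a receipt is silently ignored */
        }
    }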

    1. > type conversion

      The first language I mastered well enough to get paid to write in was Pascal.

      For my sins, my largest customer pays me to maintain legacy business software written in an ancient dialect of BASIC.

      Even though I’ve been twiddling the code for years, “WTF just *happened* here?” is a regular thing…

  9. Going through parts of my career, I’d also make the claim that every problem worth solving involves at least one defect attractor. That is, any problem which doesn’t require dealing with one almost certainly has already been solved.

    For example, general-purpose filesystem work almost certainly needs a lot of bit-banging because you need to maximize the density of metadata to minimize the read/storage cost. Customers get cranky when they find out you’re using 10% of the space they paid for on metadata. This impacts caching, look-up speed, etc. Fixed-width column database formats can also have vastly improved performance by being able to seek to arbitrary records in O(1) time.

    At the same time, you probably shouldn’t be writing your own database format/engine to store your email. There are lots of off-the-shelf libraries/servers available to do that for you.

  10. My attempt at a list of defect attractors:

    * Character encodings
    * Time in general
    * Time without explicit timezone
    * Daylight saving time
    * Physical measurement in non-standard units

    1. Doing date/time stuff manually is such a PITA. In recent years, I use platform features to do this for me. The date/time stuff in modern scripting languages like Python (and even crappy ones like PHP) is pretty robust. But doing that crap by hand is so fraught.

      1. That’s definitely gotten a lot better.

        In the old days, you never knew if the system clock was going to be set to local time, UTC, or UTC with a time zone offset (and if the last, whether or not it allegedly took care of DST/Summer Time automatically… I don’t recall ever seeing a system that could handle the bizarre situation that used to exist in Indiana)

        Things improved a lot once it became easy to get UTC over the net.

        1. The Indiana situation was not as bizarre as one would think. Basically there were:
          EST-EDT, EST only, and CST-CDT.

          The only question was which counties were in each one from year to year.

          1. Now imagine a mobile computer traveling the roads of Indiana that needed to use both local time and UTC for different purposes.

            I guess it could be done using GPS (which most mobile computers now have, of course).

    2. > Character encodings

      Using more than one character encoding, in particular.

      As I recall, early versions of MySQL used Swedish collation rules by default, which could lead to some peculiar sorting results if you weren’t aware of it.

      1. Oh, early versions of MySQL were a defect attractor in themselves. Referential integrity constraints that weren’t.

        And, while we are on the database subject: MongoDB. This one gives the appearance of working in small-scale experiments, taking your data as fast as you can pump it in, but then turns on you when you try to do anything useful with it or even just get your data back out.

          1. Anything that involves synchronization. Caches, multiple threads, distributed systems…

    3. * Physical measurement in non-standard units

      Yeah, get with the program America, and go metric like everyone else.

      1. The US will agree to go metric when you switch to driving on the right side of the road. :-)

        1. But the non-US world is not only Britain and its former colonies. I mean… I drive on the right side, and most EU countries, as far as I know, do as well.

        2. Sweden went from driving on the left to driving on the right in 1967. We are still waiting for your switch to metric.

    4. Physical measurement in any units. An important thing that many people grasp is that “word problems” don’t just have numbers; each number is paired with a unit. You can’t add ten seconds and three meters (both of which are standard units). If you detach the numbers from their units, you can lose sight of this, and think that thirteen somethings is meaningful.

  11. Great article.

    Question: are there cases where it is possible to say “yes X is a defect attractor, but we have a great library that handles it so no worries”.

    An example that comes to mind is the decoding of binary wire formats (e.g. http2). Also date-time manipulations as mentioned above.

    1. I’d say this is one of the great things about having some standard libraries–they can take the defect attractors out of the normal programmers’ reach, and hand them off to be solved once with great care by an unusually good/careful programmer who maybe won’t shoot his foot off.

    1. Is that a defect attractor because it’s technically problematic, or is it a defect attractor because trillions of dollars of commerce (and major world empires!) can be attacked by people attacking the cryptographic security surrounding them?

      IOW, would other technical fields that are considered relatively tractable also feel like defect attractors if they had to deal with the same level of black-hat attack that crypto has to deal with?

      (I genuinely don’t know)

      1. Crypto is tricky because, unlike lots of other constructions, the algorithms are extremely fragile. One false move and they don’t work at all. They don’t degrade gracefully. Either the implementation is correct and perfect or it is useless.

        For example, perhaps an off-by-one error in a load balancing algorithm means that the load still gets balanced but it’s distributed slightly unevenly. Whilst this might cause problems, the algorithm can still be serviceable. In a crypto algorithm it probably means that you lose almost all of your strength but the output might still “look fine” to a casual human eye.

        Even a perfect expression of the algorithm can be useless: they’re fragile in the face of errors in the “non-functional” parts as well. For example, a timing error in a comparison function can unravel an otherwise perfect (and theoretically unbroken) algorithm. C’s strcmp procedures don’t always process the whole of each string: they stop when they find the first difference because at that point they know the answer. This timing error wouldn’t affect the functionality of any other algorithms; it just happens to take different amounts of time to compare things depending on what data it is comparing. It still gets the correct answer every time. In crypto this can be a disaster because information about what’s being compared “leaks” out via how long things are taking. If key data or plaintext data is part of that comparison then you’re hosed.
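
        A sketch of the usual mitigation (hypothetical, not from any particular library): accumulate the differences over every byte so the running time depends only on the length, and then hope the compiler doesn’t optimize that property away.

        #include <stddef.h>

        /* Returns nonzero only if the buffers are equal; the loop always touches
           every byte, so timing doesn't reveal where the first mismatch is. */
        int constant_time_equal(const unsigned char *a, const unsigned char *b, size_t len) {
            unsigned char diff = 0;
            for (size_t i = 0; i < len; i++)
                diff |= a[i] ^ b[i];
            return diff == 0;
        }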

        The academic complexity of the methods and algorithms employed in crypto, combined with the requirement of flawless implementation and the difficulty of directly observing a problem in the output make it an extremely tricky place to work.

        1. Even crypto doesn’t use perfection as a standard. Physical laws predict that there is always a side channel attack. The security questions revolve around how much bandwidth those side channels carry, and whether the side channel data can be extracted from environmental noise at a usable distance. Even readily exploitable bugs can take minutes to produce data (let alone a working compromise), and in some cases a minute is all that is required for a successful security response.

          That said, there sure are plenty of crypto bugs.

    2. Is there a difference between “defect attractor” and “Stuff that is inherently hard to do”?

      1. >Is there a difference between “defect attractor” and “Stuff that is inherently hard to do”?

        Yes. There are some easy coding tasks that persistently spawn the same errors. In C, setting up and iterating through a pointer-linked list is one such.
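
        For instance, a hypothetical sketch of removal during traversal, with the two classic mistakes flagged in the comments:

        #include <stdlib.h>

        struct node { int value; struct node *next; };

        void remove_value(struct node **head, int value) {
            struct node **link = head;      /* pointer to the pointer we may rewrite */
            while (*link != NULL) {
                struct node *cur = *link;
                if (cur->value == value) {
                    *link = cur->next;      /* re-link first; forgetting this breaks the chain */
                    free(cur);              /* reading cur->next after this is use-after-free */
                } else {
                    link = &cur->next;
                }
            }
        }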

        1. Then Crypto isn’t “defect attractor”, it’s “inherently hard to do right”.

  12. Inheritance is good when used properly. Combined with immutability, inheritance is invaluable when I need sum types in OO languages.
    Real defect attractors are things like inheritance when composition would do, inheritance just to add some stuff to an existing class, and inheritance with non-abstract base classes…

    1. > Just try to keep in mind that “things I don’t like” and “things that cause defect clusters” aren’t identical, eh?

      To me, the phrase “when used properly” seems like a sign of a defect attractor.

        1. Mmmmm, I think I’d want to de-equivocate “Agile, Scrum, XP” before adding it to the list. I used to think of “Agile done right” as quite different from “Agile as actually practiced”, but now I’m thinking that they really ought to just be classified as two different things with the same name. Agile in its more original formulation isn’t a defect attractor, and Agile as commonly practiced isn’t a defect attractor, it’s just straight-up defective.

      1. – It doesn’t always make sense to have a null value for every type. For instance, a Java Integer can take the value null, even though it doesn’t make mathematical sense.
        – Null pointer checks tend to be skipped because they’re not enforced by the compiler. But if they were enforced in a language like Java, the size of the code would explode, because almost everything is nullable in Java.
        – An alternative construct is an Option<> type. So a nullable integer would be Option<Integer>. See Kotlin.
        – An even more general construct is a tagged union type with multiple cases. The handling of the different cases would be enforced by the compiler. See Swift, Rust, Scala, F#.

        For Tony Hoare’s comment, see https://www.infoq.com/presentations/Null-References-The-Billion-Dollar-Mistake-Tony-Hoare .

        1. It doesn’t always make sense to have a null value for every type. For instance, a Java Integer can take the value null, even though it doesn’t make mathematical sense.

          It does make programming sense, though.

          “Null” means “this variable is unset and trying to use it for math is nonsense”.

          A definite “this is not a valid object because it hasn’t been set” is … useful.

          (cf. “The first element in an enumeration should be “NOT_SET” or equivalent to make it obvious that something got missed and the data isn’t real.”)

          1. SQL has NULL too, and it’s just as controversial there as it is in other subsets of programming languages.

            Usually the argument against NULL in SQL is that you should be normalizing your tables so that columns with NULL values become non-existent rows in a separate table with a common key, and you use join queries (or the advocate’s favorite query language that is invariably not SQL) every time you want to access something that might be NULL.

            Real-world DBAs point out that in most RDBMS implementations that sort of schema layout represents a crazy amount of overhead in both space and time, and that once you learn the sometimes inconsistent rules about NULL and other scalar values, it’s fine (in the Gunshow comic sense).

            1. No kidding. My instinctive reaction after reading your second paragraph was “oink!”.

          2. Re-read what I said carefully. Allowing nulls might make sense in *some* cases; but in other cases, it might not. And in fact, I argue that the latter is true most of the time.

            It’s nice to have the compiler *enforce* null-pointer handling, so you don’t end up with countless crashes because of forgetting to handle nulls. But this is impossible if every variable is nullable by default. You’ll end up with a code explosion, or the compiler will have to carefully avoid solving the halting problem.

            And again, the SQL thing would be a red herring if columns were non-NULLable by default.

          3. Your NOT_SET proposal for enums is a special case of my last two bullet points, if I understood it correctly.

            You’d prefer:
            enum my_enum {NOT_SET, A, B…};
            to:
            enum my_enum{A, B…}

            Enums and nullable values can be viewed as a special case of tagged unions (https://en.wikipedia.org/wiki/Tagged_union).
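
            A hypothetical C-style sketch of that view: an enum serves as the tag, the payload is only meaningful under one of the tags, and a general tagged union would put several payload types in a union behind the same tag.

            enum maybe_tag { NOTHING, JUST_INT };

            struct maybe_int {
                enum maybe_tag tag;
                int value;                 /* meaningful only when tag == JUST_INT */
            };

            int get_or(struct maybe_int m, int fallback) {
                switch (m.tag) {           /* -Wswitch warns if a tag goes unhandled */
                case NOTHING:  return fallback;
                case JUST_INT: return m.value;
                }
                return fallback;
            }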

  13. The term “defect attractor” has another meaning for me. Back when I was in tech support, after a change of managers, I told my new manager that I had a “cosmic ability to attract weirdness”. She said she’d prove me wrong, but never did, because it was true. I think part of it was that I was a “lemon-picker” who would actively seek out the weird stuff (which was interesting to me) instead of the easy work, and after a while I got a reputation as the guy who could figure out the weird stuff, and people would send it my way. I do know that at a previous temp job at ${EMPLOYER} the Subject Matter Experts got to where they’d tell me they hated when I called them, because they knew it was going to be ugly (or I wouldn’t have had to call them). I bet they’re glad that I’m a sysadmin now, and keep the weird to myself.

    1. On the one hand, bad implementation, no question. There is nothing intrinsic to a serialization routine that requires it to invoke code at all, let alone arbitrary code.

      On the other hand, it is a problem that many people have fallen into, which suggests it’s at least something worth cataloging.

      Still, if we say that one of the characteristics of a “defect attractor” is that even experienced people have a hard time avoiding problems in its domain (threading, backwards compatibility), then this is not a defect attractor. I have no problem at all writing serialization code that does not lead to code execution, let alone arbitrary code execution. I’d think of this more as a place where people simply don’t realize there are problems in the space and blunder into them ignorantly; once you do know where the problems are, they’re quite avoidable.

  14. A phrase I use a lot that (I believe) I came up with independently is “Considered Complex” riffing off of Dijkstra’s infamous “Considered Harmful”.

    Something is considered complex if it threatens to break separation of concerns. Manual memory management is considered complex because it forces you to consider memory lifetime every time you need some dynamic memory. goto is considered complex because it can break your normal control flow.

    I would go so far as to say that mutable objects should be considered complex — a mutable object complects every piece of code that can read it. Certain immutable objects have the potential to still be efficient, e.g. consing something onto a (singly) linked list doesn’t interfere with anyone else following a reference to the original head of the list.
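
    A hypothetical C-style sketch of that sharing:

    #include <stdlib.h>

    struct cell { int head; const struct cell *tail; };

    /* cons allocates a new cell pointing at the existing list; anyone still
       holding the old head sees exactly the list they had before. */
    const struct cell *cons(int head, const struct cell *tail) {
        struct cell *c = (struct cell *)malloc(sizeof *c);
        if (c == NULL)
            abort();                      /* toy out-of-memory handling */
        c->head = head;
        c->tail = tail;
        return c;
    }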

    Obviously, you need to be able to mutate stuff if you want to get anything done, but I do question the wisdom of making that the default.

    1. Data structures that are mutable *and visible to multiple threads* are certainly a defect attractor.

      If they are *mutable by multiple threads*, they are more like a defect black hole.

      In a big system, you end up with elaborate logging systems that timestamp accesses (using Zulu time and the MONOTONIC timebase please!) to debug the deadlocks, and then a wrapper around your mutex class to enforce a standard ordering, etc.

      Entire programming languages (Clojure) have been invented to avoid this tarpit. But of course, we use C++ instead.

  15. Martin Fowler and Robert C. Martin have a concept of “code smells” which seems similar to, but softer than, this concept of defect attractors.

    For example, they list things like commented-out code, or multi-step builds. Not so much mechanics or language features that attract defects, but software dev practices that do the same thing.

  16. I have the luxury of coding to represent mathematical constructs, so the governing rules are well-defined.

    I often use C++ because it’s the only language I know, and have ready access to, that provides nearly enough operators.

    Even so, a program in combinatorics often has multiple definitions of + say, but that’s ok because each type is designed to be aware of which one applies to it.

    Inheritance works when it makes mathematical sense: for example, a group is a kind of semigroup. But often C++ inheritance is too clumsy to fit the purpose.

    At times, I simply cannot see how to make C++ express as elegantly as the mathematics – perhaps the conventional operator precedence cannot be matched – so I don’t try; that’s the only time I use function names instead of operators.

    1. Use template metaprogramming instead if you find yourself stuck in a rut with inheritance. Hell, you should probably favor template metaprogramming by default.

      1. >Hell, you should probably favor template metaprogramming by default.

        Template metaprogramming needs to have a stake driven through its heart. C macros are bad enough; macro substitution doesn’t become less of a defect attractor when it’s Turing-complete.

          1. >Lisp macros?

            Not as bad as the C/C++ version because you can’t generate substitutions that aren’t well-formed in the base language. There’s a remaining problem with name collisions between identifiers in the generated LISP and around its callsites (the term of art for this defect is “identifier capture”). That can be mitigated, but – yes, LISP macros are notorious defect attractors.

            1. Not as bad as the C/C++ version because you can’t generate substitutions that aren’t well-formed in the base language.

              Also true of C++ templates. If you use C preprocessor macros in C++, all bets are off, however.

              There’s a remaining problem with name collisions between identifiers in the generated LISP and around its callsites (the term of art for this defect is “identifier capture”). That can be mitigated, but – yes, LISP macros are notorious defect attractors.

              Or eliminated with e.g., Scheme macro hygiene. Sadly this was not adopted into the Common Lisp standard. Recent versions of the Scheme standard even include one of several ways to explicitly break hygiene for selected identifiers.

        1. Template metaprogramming can only be used to declare generic types, functions, and constexprs parameterized over types or integers. Yes, the compiler checks. Templates are not a general text-substitution facility like C macros; they avoid many of the gotchas associated with C macros and so are much safer to use.
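
          A minimal, hypothetical illustration of the difference:

          #define MAX_MACRO(a, b) ((a) > (b) ? (a) : (b))   /* may evaluate an argument twice */

          template <typename T>
          T max_of(T a, T b) {
              // Type-checked at the call site; each argument is evaluated exactly once.
              return (a < b) ? b : a;
          }

          int demo() {
              int n = 3;
              int x = MAX_MACRO(n++, 0);   // n incremented twice on this path
              int y = max_of(n++, 0);      // n incremented exactly once
              // max_of(n, "text");        // rejected at compile time: no common T
              return x + y;
          }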

          1. >[Templates] avoid many of the gotchas associated with C macros

            But, because the template language is Turing-complete, have plenty of hideous gotchas of their own.

            1. You have to work really damn hard — and be too clever by half — to take advantage of the template system’s Turing-completeness. Otherwise the old C++ programmer’s slogan applies: You don’t pay for what you don’t use. It’s possible to implement powerful constructs perfectly safely in templates without even approaching the rabbit hole of Turing completeness.

              1. > You have to work really damn hard — and be too clever by half —

                You’ve just described 1/3rd of the programmers who know more than 2 languages.

            2. Turing-complete doesn’t mean much by itself: See Rule 110, and how hard it is to program anything in it. I don’t know if I’m being pedantic and missing a deeper point, but it’s compatible with what Jeff said above.

              Conjecture: Any sufficiently powerful type system is almost Turing complete.

            3. C macros suffer from being excessively dumb, especially when people try to use them to implement templates. Templates suffer from being excessively complex when people try to use them to implement a compiler for some bespoke functional metalanguage.

              To me, they are tools that are adapted for specific jobs to be used or not used depending on the project, the same way regular expressions are. I use all three in non-trivial C++ programs, and will probably continue to do so until someone teaches a C++ template how to include the source code of its arguments or line number of its invocation in error messages, or teaches a C macro how to make compile-time implementation decisions based on argument introspection.

              You could probably write a complete RFC822 parser in a C++ template or a POSIX regexp, but you should do neither.

              1. >You could probably write a complete RFC822 parser in a C++ template or a POSIX regexp, but you should do neither.

                What you should do is look at Go to learn what a C-like language looks like when it’s designed with GC and clean abstractions from the ground up and doesn’t need any of that top-heavy elaborate crap.

                I’m not saying Go is perfect. But at the very least it points the way.

                For a good entry point, read the code for the UPSide policy daemon. You should review that anyway.

                1. AFAIK the number one complaint about Go is that it still lacks generics. That and its lack of sane, sensible packaging, “go get github.com/foo/bar” still being a thing. I expect that will become even more a pain point given the recent GitHub/Microsoft news.

                  Anyway, it seems as if the broader programming community disagrees that Go “doesn’t need any of that top-heavy elaborate crap”. That crap is now part of standard programming practice; without it, it becomes difficult to impossible to write even a decent strongly typed container library, let alone any sort of sophisticated application that doesn’t compromise strong type guarantees — strong type guarantees being a defect repellent.

    2. >But often C++ inheritance is too clumsy to fit the purpose.

      Yes. That’s because in an OO language like C++ without traits (method declarations that can be shared between peer classes) inheritance gets used to express two very different things: the terms of art are is-a relationships and has-a relationships. An is-a relationship is something like “a group is a kind of semigroup”; a has-a relationship is something like “this class can be serialized (has a print method)”.
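
      A purely illustrative C++ sketch of the two relationships ending up spelled with the same syntax:

      // is-a: a Group really is a kind of Semigroup.
      struct Semigroup { /* ... */ };
      struct Group : Semigroup { /* ... */ };

      // has-a capability: "can be printed" – expressed with exactly the same syntax.
      struct Printable {
          virtual void print() const = 0;
          virtual ~Printable() {}
      };
      struct PrintableGroup : Semigroup, Printable {
          void print() const override { /* ... */ }
      };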

      When the same codebase has both is-a and has-a relationships expressed in the same way, massive and subtle confusion can ensue. This is one of the reasons large C++ APIs have a bad tendency to be train wrecks.

      1. >inheritance gets used to express two very different things
        In this respect, Java is slightly less a defect magnet than C++.

        Inheritance “is-a” is done by “extending a class”, and you can only directly inherit from one class.

        Composition “has-a” is done by “implementing an interface”.

        While both have a lot of similarities, I find that having separate terms helps to clarify the difference between them and avoid confusion.

        Of course, Java pretends to be strongly typed and then implicitly “promotes” datatypes behind the scenes, so that’s always a fun thing to try to explain to students….

      2. Cladistic vs. morphological categorization. Dolphins aren’t fish but work similar to fish due to being adapted to the same ecological niche. We tend to understand things and think about things better when we do cladistically, but we *work* with things morphologically, if you don’t have a hammer to drive in a nail, a rock will do.

        Sounds like defining the is-a clades should be the first step in building a model that is also relatively easily understood by a later reader, since they convey the big picture, with the has-a or does-a morphology coming later as implementation.

    1. >Article concludes by quoting Raymond from a couple of years back:

      Thanks for the heads up. I left a comment slapping McCain around for viciously ignorant ranting about Satanism; I don’t want to be associated with that kind of crap.

      1. Well, now we know why the odious Coraline didn’t make it to Penguicon last year when she was invited as a GoH.

        1. > now we know why … Coraline didn’t make it to Penguicon last year …

          Huh? Didn’t you read my comment on your Medium.com post “MPGA”? There was a long set of tweets she made about having OD’ed on her anti-anxiety meds (as a consequence of worrying about a friend, IIRC) that landed her in the hospital about 2 days before the ‘con.

          This was not any secret; I didn’t have to dig very far on Twitter to find the thread.

          1. I read the comment, but didn’t go hunting for the tweets. I took McCain’s post as independent confirmation.

        2. Why would they have been invited as a GoH? No specific contributions mentioned other than writing the Code Of Conduct.

          1. Because Penguicon has been overrun by the SJWs. She was supposedly there because she promoted inclusion in open source communities.

            1. >Because Penguicon has been overrun by the SJWs. She was supposedly there because she promoted inclusion in open source communities.

              To be fair, there was none of the overt crap this year, unless I missed some at the opening ceremonies I didn’t go to. I think you would have had a good time.

      2. Your link doesn’t go anywhere. A little poking around suggests that wordpress is choking on the rel=”nofollow” for some reason.

        1. >Wow, you’ve got Fake Catherine Burns all in a tizzy over your identity.

          Entertaining, isn’t it? I seem to be causing some serious dissonance there. Woman’s got issues.

  17. Thinking about the examples of defect attractors listed in both the article and comments, there seems to be one common feature: nearly all of them involve (for various reasons) some sort of conflict between two sets of abstractions, neither of which is a true superset of the other. The examples of portability shims, pointer arithmetic, and type punning are IMO the most blatantly obvious examples of this. If there’s any clear reason these should be filled with (in your words) “special, corner, and edge cases”, it’s that all of these are the sort of work which must exist along the seam where two abstractions meet, but don’t harmonize. [The same logic is true of manual control of automatic memory management, but that’s not a “must” sort of thing as much as it is an anti-pattern.]

    However, one notable class of defect attractors seems to be handling various sorts of non-linear structures — and almost always, having to convert them into sequential ones. Here the clearest example might be diamond class inheritance, but the goto command also fits, as a sort of “escape hatch” to craft control flow methods which aren’t modeled by structured language syntax. More subtle might be the question of programming language macros: an “ideal” [i.e., maximal power] macro system would allow you direct modification of the syntax tree, while the “ideal” [i.e., simplest] way to store and edit code is the inherently linear text format. So every macro system has to choose between working on tree structures like Lisp, or on sequential text like C; both systems will have some undesirable consequences, limits, and side-effects which means that in real terms neither answer is ideal. Likewise, all other such problems will never have a fully acceptable solution: not only will they spawn the same panoply of edge cases, there will be cases which can be properly modeled only in the non-linear abstraction!

    The possible exception to this overall generalization seems to be a class of problems where our abstraction is not consistent (or worse, cannot be consistent): examples might include multi-language character encoding; time and timezones; and nullable pointers / values. I say “possible” because arguably this is still a conflict of two abstractions (it’s just far less clear that the difficulty comes from having to “convert” between them, because most people think of them as one complete-but-inconsistent abstraction, not two consistent-but-incomplete ones). I’d even argue that binary wire/file protocols fit in here… but only when you’re trying to mix text and binary data. [Pure binary, with no text? Fine by me. Pure text, not in a binary format? Inarguable. Mixing the two? Shut your fool mouth.]

    I’ve been trying to come up with any sort of counter-example to this “two abstractions” model of defect attractors and so far have come up dry.

  18. Multiple inheritance is a bug. Single inheritance works very well indeed, as any Smalltalker or Objective-C developer can tell you.

  19. What about good old global variables? I’m sure I heard someone refer to them as “the new goto” at some not-very-recent point.

    And mixing different encodings and languages in the same document, yeah – Mark Pilgrim (I think) said something like “now get ready to cry a lot when you realize there’s no such thing as plain text”.

    1. Sure there is. UTF-8.

      It used to be there was no such thing as plain text, and there’s accordingly legacy cruft, but UTF-8 pretty much solves every issue around “plain text” except the one of getting people to use UTF-8.

      All that’s necessary now is to shoot everyone who resists transitioning to UTF-8.

      1. >It used to be there was no such thing as plain text, and there’s accordingly legacy cruft, but UTF-8 pretty much solves every issue around “plain text” except the one of getting people to use UTF-8.

        Not quite every issue. I’m a fan of UTF-8 myself (“It’s the new ASCII!”) and agree with your prescription (except for the part about actually shooting people) but it does mean the byte length and presentation length of a string are different concepts even in a monospace font, and computing the latter can be…complicated.
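
        A minimal sketch of the gap in Go (the sample string is my own, with precomposed accented characters; measuring actual display width needs still more machinery, such as a wcwidth-style library):

        package main

        import (
            "fmt"
            "unicode/utf8"
        )

        func main() {
            s := "naïve café" // multi-byte UTF-8 sequences
            fmt.Println(len(s))                    // 12 bytes
            fmt.Println(utf8.RuneCountInString(s)) // 10 code points
            // Neither number is the presentation length: combining marks,
            // double-width CJK characters, and ligatures mean only the
            // renderer really knows how many columns the string occupies.
        }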

          but it does mean the byte length and presentation length of a string are different concepts even in a monospace font, and computing the latter can be…complicated.

          Complicated is underselling it. More like impossible.

          Depending on the language of the text, ligatures and the rendering of certain character sequences can vary… and short of stuffing everything into XML <english>…</english> tags (yuck…), there’s no real way of solving it.

          1. Oh, I wouldn’t say impossible. After all, you can always format it for presentation and then see how much space it actually takes…

            1. > actually format it for presentation and then see how much space it actually takes….

              HOW?! Your code can’t reach out with a ruler and see how long a line gets printed.

              I guess you could actually render to an invisible “frame buffer” and then count how many pixels at your imaginary screen resolution you’ve used up, but that strikes me as … Painful.

              1. The system will provide this functionality. Windows has ‘Uniscribe’ which lets you get into all sorts of sordid details with the world’s scripts and their rendering. I assume Linux desktops have similar, but that’s not my world (neither is complex script processing on Windows, but I remembered the name of the subsystem).

              2. HOW?! Your code can’t reach out with a ruler and see how long a line gets printed.

                PostScript.

  20. @Adrian Smith:

    Global variables, not necessarily. Overuse or improper use of global variables, yes.

    Goes back to the statement above by hxka that the phrase “When used properly” is an indicator… OK, I just talked myself over to the other side.

    I’d guess global variables *would* be defect attractors.

  21. CVS and SVN, anyone?

    Similar to another comment in the thread, these are pieces of software in which people always proclaimed that you “just” need to use them the “right way” to not make a royal mess of the repository. In very rare single-developer (and usually single-branch) cases, it might indeed be manageable… anything more than that is pretty much guaranteed to make soup of the repositories.

    Now, if only someone made a tool to lift and clean up these messes…

    1. >CVS and SVN, anyone?

      Hm. I think I prefer to reserve the term for techniques within tools rather than tools. That is, to zero in on what actually causes the defect clusters.

      For example: CVS relies on client time (which, back then, was often not NTP-synchronized) rather than originating all timestamps in consistent time at the server. This means that commit order is not reliable and you cannot detect changesets by looking for identical timestamps. I’d call that a defect attractor.

      1. And Subversion’s “tags” rely on them never being written to again once tagged off (a concept utterly foreign in pretty much every other VCS — even CVS has no concept of “updating” tags!).

        I suppose it is true that these tools have very specific instances of defect attractors themselves, and their tagging and branching systems are chief among them.

        Still, I don’t see a huge difference between calling VCSes themselves defect attractors and entire programming languages, as you said earlier in the thread. Said VCSes have designs so full of special cases, exceptions, and irregularities as well, I’m willing to call them out on it.

        Branches: Painfully difficult in CVS, using arcane commands I don’t think anybody remembered even when CVS was in vogue. Very easy to trip over. In Subversion, no merge support for pretty much its entire lifetime in the spotlight (it did gain a merge command after Git and Mercurial had already taken over the world…). Migrations to a DVCS often involve sorting out which branches are historical and which are current, and whether to keep any of them — much fun when projects use generic branch names like “test”.

        Tags: Okay in CVS, as long as master files are not manipulated by anything but the server. In Subversion, okay as well… as long as nobody tries committing to a “tag” (which are really branches with a special name). In SVN’s case, it often happens accidentally when a newbie-to-Subversion starts hacking away at it; it can happen intentionally when projects try to “re-release” something within 5 minutes of tagging — headaches ensue for the DVCS migration.

        General timeline mishaps: Try to not delete all the files on a commit, please. If it happens, it is rare for any action to be taken other than copying back the contents. Another pain point when migrating to DVCS (though more manageable than anything tags and branches cause). Using One Large Server with globally-incrementing revision numbers in Subversion was/is not unusual practice; Mozilla and Apache did this. Namespacing out projects so they can live in /trunk/firefox, /trunk/thunderbird, /trunk/hadoop, /trunk/httpd — manageable, though a pain point if these top-level namespaces are renamed.

        User names: This is a hard one, because it’s a problem Git isn’t entirely free from, but still… every user expected to have commit access must have an account on the server (not necessarily in /etc/passwd — svnserve lets you have a separate user store). It is not always easy to tell from these names who is whom, especially on hosting sites where someone can choose a goofy username that then ends up in the VCS logs. If the VCS master files are moved to a different server, it’s not unusual to see a whole new set of goofy names that happen to be the same actual people as the former goofy names. Git also has a similar problem with its user name and email, but these aren’t usually so strange, and .mailmap was even devised to cope with changes in real name and/or email address.

        Master file manipulation: Perhaps the feature is newer than I think, but some CVS repositories suffered permanent data loss when an operator was informed or decided that a source file was no longer needed. Instead of informing CVS of an intended delete (which gets recorded into the ,v file properly, and a new file of the same name can also be properly made)… they would go into the server and delete the control file.

        There are more issues with them than I can even remember off the top of my head… much of it was always excused as “Just use it right”, but that was effectively impossible with more than a single developer (and even a single developer doing it right is a rare sight).

        Git has problems, and nasty stuff, but I think it avoids being a defect attractor. Much of the basic usage is simple enough, and the distributed nature allows upstream repositories to be the gatekeepers of good practice.

        1. >Still, I don’t see a huge difference between calling VCSes themselves defect attractors and entire programming languages, as you said earlier in the thread.

          I think we get the sharpest use out of the term “defect attractor” when we apply it to the smallest, most local features with that property within a system.

          VCSes have specific low-level defect attractors in their design that are still graspable as problems in isolation – you just did a good job of pointing at several.

          While languages also have such low-level defect attractors (leading 0 meaning octal in C, ugh), what makes a language a defect attractor in itself is a kind of additional global intractability that is not reducible to any particular subset of individual lower-level problems small enough to be kept in mind at once. You are overwhelmed by the whole as much or more than you are punctured by the parts.

        2. You’re also forgetting (in Subversion at least) the ease of checking out only part of a tree. After doing this on a couple of different directories, your local copy is in such an inconsistent state that if you then try to modify it, there is zero guarantee that you can patch your modifications onto the current trunk in a sane way.

        3. I think most of the VCS problems were more in the project management domain than in the software development domain.

          CVS evolved out of shell scripts on top of RCS, and its feature set was “the emergent behavior of the shell scripts, plus as many hacks as we can accumulate without changing the on-disk format.” Many (most?) of its features would be considered straight-up bugs in any other project. At the time there were a few uncelebrated efforts to fix the CVS format incrementally (e.g. by adding a shim layer DB for proper rename support), but they got nowhere before SVN came along and made them irrelevant.

          SVN was designed by a committee that didn’t understand software development or data storage. They liked to publish non-composable APIs for doing every possible task, but the only reliable part of their API ended up being the part we use to extract all the data out of SVN and move it to some other DVCS. Parts of SVN suffer from evolving expectations over time (especially wrt character encodings), and defects follow from that.

          Fossil was designed by the people behind SQLite. It has an amazingly efficient on-disk format compared to Git, but nobody cares because nobody outside of SQLite seems to have noticed Fossil’s existence.

          Git doesn’t attract many defects because its design instantly breaks the repo when any defect occurs. Users tolerate this because its design also encourages the existence of backup copies of the repo everywhere. Git is a great test case for storage-stack data fidelity.

          1. Fossil isn’t designed for bazaar development. It’s designed for cathedral development. Since very few people do cathedral development anymore, it’s not of much interest to most people.

            1. > Fossil isn’t designed for bazaar development. It’s designed for cathedral development.

              In what ways does Fossil not support bazaar development?

              (my original reply to this comment seems to have been lost)

              1. > In what ways does Fossil not support bazaar development?

                Fossil does *support* bazaar development, but is not primarily *designed* for it.

                https://www.fossil-scm.org/index.html/doc/trunk/www/fossil-v-git.wiki says:
                > Git encourages a style in which individual developers work in relative isolation, maintaining their own branches and occasionally rebasing and pushing selected changes up to the main repository. Developers using Git often have their own private branches that nobody else ever sees. Work becomes siloed. This is exactly what one wants when doing bazaar-style development.

                > Fossil, in contrast, strives to keep all changes from all contributors mirrored in the main repository (in separate branches) at all times. Work in progress from one developer is readily visible to all other developers and to the project leader, well before the code is ready to integrate. Fossil places a lot of emphasis on reporting the state of the project, and the changes underway by all developers, so that all developers and especially the project leader can maintain a better mental picture of what is happening, and better situational awareness.

      2. I’d go more generic. Time handling is a defect attractor. It’s necessary, and there exist tools to make it easier to handle properly, but programming around it is inherently tricky. The problems with it are partially from human society rather than specific to programming though.

        Someone upstream mentioned combining two different concepts under one label and I would contend that local time relative to the day/night cycle is fundamentally different from global time, but they are both just called time.
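
        A small Go illustration of the two concepts hiding under the one word (the instant chosen is arbitrary, and error handling is elided; LoadLocation needs tzdata available):

        package main

        import (
            "fmt"
            "time"
        )

        func main() {
            // One global instant...
            instant := time.Date(2018, time.June, 4, 16, 0, 0, 0, time.UTC)

            // ...is a different local time depending on where you stand.
            ny, _ := time.LoadLocation("America/New_York")
            tokyo, _ := time.LoadLocation("Asia/Tokyo")

            fmt.Println(instant.In(ny))    // 2018-06-04 12:00:00 -0400 EDT
            fmt.Println(instant.In(tokyo)) // 2018-06-05 01:00:00 +0900 JST
        }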

    1. It wasn’t that long ago that Microsoft wouldn’t promote any sensible VCS. And now they’re buying GitHub and using Git internally because Microsoft <3's open source. They're gonna hug it and kiss it and squeeze it and name it George and…

    2. The company that spent two of the past three decades trying to shut down the open source movement buys the company that shut down the open source movement.

    3. >A bit off topic but anyone have any thoughts on Microsoft buying Github?

      We may soon get a lesson about the perils of becoming a bit too dependent on any single service provider.

    4. GitHub was dead to me the second I learned they removed the “United Meritocracy of GitHub” rug from their office. MS can have their rotting carcass for all I care. Everyone else will wish they had listened to ESR and not stuck their projects and all their associated metadata on a proprietary server with no easy way to get all the data out. I pity them. (Well, except for the SJWs–especially the ones who had that rug removed in the first place. They can burn in whatever foul pit they call hell.)

      1. Frankly, at this point it strikes me that the hacker hatred for Microsoft has become a tradition in itself, unmoored from the original reason for it. In particular, other companies have since behaved in much worse ways without getting anywhere near the same amount of hate; Microsoft, for example, never limited the user’s ability to run programs on Windows to only the ones it approved.

        1. To an extent, it’s also a sort of deep-seated mistrust of MS. “Embrace, Extend, Extinguish” was their MO for so long that most hackers just assume that’s what they’re doing, or that in some other way any move they make is calculated, somehow, to lock people into Office, Windows and other MS products. Apple, for all their evil, has never (or nearly never) tried to manipulate the market to make it impossible to not use a Mac/iPhone, and does not have a long track record of eating industry standards for the purpose of perverting them into lock-in enablers.

          For me, at least, the lock-in and continual attempts to perpetuate it and make it unavoidable have left such a bad taste in my mouth that I’m still suspicious of MS, despite them having shifted to Lawful Neutral of late. I can just not use Apple products; I often don’t have that luxury with MS. Thus, anything that smacks of Microsoft eating competitors, or embracing, extending and extinguishing industry standards is going to draw flamage until they’ve demonstrated an equally long history of not meriting the suspicion (rather as was the case for IBM, and some hackers *still* don’t trust them.)

          1. Apple doesn’t eat industry standards for the purpose of lock-in because, by and large, they don’t have that kind of market power. Case in point: They just this week announced they were deprecating OpenGL support in OS X 10.14. Outside of game engines like Unity – which Apple has given substantial financial help in porting to their 3D graphics API, Metal – most commentary I’ve seen is that people who depend on OpenGL for cross-platform games will either move to Vulkan and hope that it gets support on OS X (right now, it’s unofficial and iffy) or simply drop the Mac as a platform.

            The same goes for other things. Apple started out being quite cross-platform compliant with early OS X releases. They’ve moved away from that farther and farther with each release, as they think they can get away with it. I can’t think of any case where their doing so has created any substantial lock-in outside of very specific niches.

            1. Oh, Apple has no proclivity toward “playing nice”, except insofar as they have to just to get their foot in the door. But even at that, their strategy is more lock-out than lock-in.

              Which is not to say that I don’t think Apple would try to lock customers in, if they somehow lucked into the market share needed to do that – to the extent that lock-in could be used to enforce the One True Apple Vision of How Things Should Be, they’d do it in a New York minute. Just that between Apple and MS, I get twitchier when MS gains control over infrastructure that helps to keep options open.

          2. It’s widely recognized these days that Facebook and Google are the new evil empires. Microsoft is, if anything, a tamed dragon that makes some great products. If you had told me even five years ago that the best open-source editor for programming would be a Microsoft product, I would have laughed at you, but that’s where we are today with Visual Studio Code. Some hackers even consider Windows 10 “the best Linux desktop experience” because of WSL.

    5. Thinking about this some more, it really seems like what we need here is a distributed forge. Consider how ridiculous the current system is. You have git, which is this open-source distributed version control system. And then we have companies like Github building these completely centralized forges on top of it.

  22. I think the biggest defect attractors you didn’t mention are multi-threaded code in general (especially with interacting threads) and non-obvious side effects in functions that look pure. The former is surely the biggest source of bugs in modern programs since the demise of manual memory management (for non-dinosaur languages).

    Although lots of people will disagree, I’d also say lazy evaluation of various kinds — simply because it makes the code flow in ways that seem counterintuitive to what is written in the text. This is compounded by the fact that debugging the code is very confusing, as the execution point jumps around for no obvious reason. (In C# I am thinking of yield return and await in particular, features that I believe are heading into JavaScript.)

    BTW, another place I think is a major attractor of defects is poor variable naming. An example would be the use of names like i, j and k for loop variables rather than names that represent the value (row, column, blogPostNum, etc.). With nested loops I see a lot of problems in the non-default control-flow paths of the code. I understand that is heresy for some programmers, but I’m just talking from my experience.

    1. >I think the biggest defect attractors you didn’t mention are multi-threaded code in general (especially with interacting threads)

      My experience with Go suggests that the problem is a little more specific than this. That is, multi-threading is a serious defect attractor if your primitive set makes thread safety difficult to think about. And it is, with the conventional mutex/mailbox approach.

      In a language with CSP-based concurrency primitives, like occam or Go, multi-threading seems to be a lot more tractable. I think. I haven’t yet written anything with really complex multithread interactions in Go, so I could be wrong about this at scale. But for small programs the difference in effort and defect rates is really marked.

      Furthermore I’ve seen other concurrency primitives – like nurseries in Trio – that look pretty effective. I think it may be the case that mutex-and-mailbox is pessimal rather than CSP necessarily being optimal…
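
      To make the contrast concrete, here is a minimal sketch of the CSP style in Go (a toy of my own, not anything from GPSD or NTPsec). Ownership of data passes over channels, so there is no shared mutable state and no lock ordering to get wrong:

      package main

      import "fmt"

      // worker owns whatever it receives from jobs; results flow back over
      // another channel, so no goroutine ever touches shared mutable state.
      func worker(jobs <-chan int, results chan<- int) {
          for j := range jobs {
              results <- j * j
          }
      }

      func main() {
          jobs := make(chan int)
          results := make(chan int)

          for i := 0; i < 4; i++ {
              go worker(jobs, results)
          }

          go func() {
              for i := 1; i <= 8; i++ {
                  jobs <- i
              }
              close(jobs)
          }()

          for i := 0; i < 8; i++ {
              fmt.Println(<-results)
          }
      }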

      1. I’ll have to learn more about this, thanks for sharing. One thought though — bugs appear in programs where programs are hard to think about and understand. I think one of the challenges of multi-thread/multi-process programming is that our brains aren’t really good at that kind of thing. Our thought is naturally single threaded, and so it is hard for us to model a multi-thread program in our head.

        There are exceptions — for example, if you are doing multiple homogeneous things in parallel (for example, testing lots of keys to decrypt text, or ray-tracing a scene) it seems easy to think about lots of things doing the same thing at once.

        But as soon as different things are going on at the same time I think it is hard for our brain to handle — since we are designed to focus on one thing at a time — like catching the wildebeest.

        To that end, I wonder if there is any evidence that women — who seem to have brains better adapted to multi-tasking — are better at handling the complexity of multi-threaded programming?

        1. > One thought though — bugs appear in programs where programs are hard to think about and understand. I think one of the challenges of multi-thread/multi-process programming is that our brains aren’t really good at that kind of thing.

          I don’t think this is intrinsic at all. I think it’s just a matter of training, i.e. people think concurrency is hard if they didn’t learn concurrency before learning single-threaded programming, and we mostly still teach single-threaded programming first, so lots of people come out thinking concurrent programming is hard.

          Invert the prevailing single-threaded baseline assumption and relearn to code accordingly. Assume that nothing visible to the program is in a deterministic state, except in the special cases where it can be proven otherwise. Assume that permission is required (and granted only as a result of explicit action) before accessing state that is both mutable and shared. Those assumptions, rigorously enforced, can make code easier to read, not harder (as any Rust fanboy will attest, since their favorite toy language is built that way).

          Same thing for exceptions, about which people also often have similar complaints. Instead of assuming that code progresses linearly, assume code will jump out of the calling function at any point. Set up error handlers before the errors happen instead of after (as most programming courses still teach). It’s just like the old trick of always writing the “else” branch of your ‘if’ statements first (so you know you’ve covered all cases), except the compiler writes the boilerplate for you.

          In both cases, code is easier to understand due to the heavy use of invariants (maybe harder to write in the first place, though).

          Taking this to the extreme, transaction-based concurrency models combine concurrency with exceptions: every statement is allowed to proceed, will be delayed until it can proceed, or will terminate with an exception (due to deadlock or timeout). Side-effects and multi-thread interactions are easy to understand in such models because they are strictly prohibited.

          1. FWIW, I don’t think I agree at all. Of course better or different training may give a better outcome, however, my original point was that “single threaded” tends to be the nature of how we think, so thinking multi-threaded — irrespective of interlocking issues — is harder for us.

            I also want to point out that interlocking issues are NOT just a matter of managing access to shared data elements. Much of the challenge of multi-threaded code is ordering the various interaction points appropriately. Thread 1 must finish A before Thread 2 starts B, and thread 3 must finish before B but start after A.

            These sorts of complexities of course are handled via shared data signalling, but it is far from a matter of wrapping shared data into locked structures. Consider a quicksort run in a parallel set of threads. The sorted array is the shared data structure, but the real challenge here is managing the threads in such a way that each partition finishes before the inner partitioning can start, but without holding up other partitioning processes unnecessarily. This is far from a simple matter to manage and is quite hard to conceptualize (even though the idea of multiple homogeneous processes doing the same thing is the SIMPLEST form of concurrency.)

            So I’d say that the problem was intrinsically hard, not just badly implemented (whether in language or training.)

      2. > I think it may be the case that mutex-and-mailbox is pessimal

        Probably. The first thing I usually do with mutex-and-mailbox is implement the concurrency model I really need on top, then forget the bottom layer exists. That’s assuming there wasn’t some ready-made primitive to use instead.

        In addition to being difficult to use, naive mutexes also force the compiler to stop the CPU dead, synchronizing all memory access and flushing out all load/store optimizations, because any random memory the program could access might have been read or changed underneath by another thread, another CPU, or even some other hardware device. Newer C++ has more synchronization primitives to deal with the partial sync and reordering operations available, though many of those are even harder for novices to use than mutexes.

      3. Pfff.

        We’re doing pseudo multithreading in *bash*. How hard can it be?

        :)

        (no, really, we are)

      4. This brings to mind my current non-fiction book, Karl Fant’s “Computer Science Reconsidered”. This is perhaps a decade late, as the book was published in 2007, but I first came across a reference to his work in the context of asynchronous digital logic circuitry design last spring. The actual book that talks about the nuts & bolts is rather expensive, even used, but this one talks about the underlying theory.

        In effect, what he’s saying boils down to “Concurrency is actually simple, it is sequential processing that is complex. If you’re having trouble representing concurrency, this is not surprising, most available tools for this suck because they attempt to represent concurrency within a sequential framework.”, which agrees with your description above:

        [M]ulti-threading is a serious defect attractor if your primitive set makes thread safety difficult to think about.

        He laid out some reasons behind this, probably the biggest being the state space explosion that occurs. He goes on to lay out what he thinks a good language for this should be, but I haven’t really had time to think about that part yet.

        This also reinforces my dislike for global variables (and global state in general); I think it can be convincingly argued that global variables are defect attractors.

        1. >He laid out some reasons behind this, probably the biggest being the state space explosion that occurs.

          “State space explosion” is a good metaphor for the problem – it calls to mind “combinatorial explosion”, and is an instance of same.

          >I think it can be convincingly argued that global variables are defect attractors.

          That is so true that I think I will edit the OP to include it.

    2. >I’d also say lazy evaluation of various kinds — simply because it makes the code flow in ways that seem counter intuitive to what is written in the text.

      No argument from me. I’ve been wondering when someone else would notice this.

      1. I think there are two kinds of lazy evaluation:
        1) yield/return and async/await – this is essentially the procedural programmer’s answer to infinite lists and lazy evaluation (when used correctly), but it is much more difficult to follow.
        2) lazy functional evaluation like in Haskell, etc. – i.e. “don’t calculate the value of this expression unless you need it”: this is a logical progression based on dependency ordering, so execution progression is much like a depth-first tree traversal. At most you need to look at where the function was called (on up the call stack) to find out what the other bit of code is.

        Think about e.g. using C# yield/return vs Haskell to return the first four elements of a (presumably infinite) list. Haskell is very simple (take 4 [1..]), while C# is longer (3 lines or so for the equivalent of [N..], using yield return), and maybe another 5-10 for the logic of ‘take K’…and then another line if you want to use it as a list instead of enumerable.

        I find that lazy evaluation in the functional model is much easier to understand (not trivial, but generally not important), while in the procedural model it is both critical to understand (when used), and more difficult to follow, particularly when debugging.

        I think a lot of this is similar to what you said above about multi-threading: if your primitive set makes lazy evaluation difficult to think about, then lazy evaluation is a defect attractor.

        I think this pretty much holds true as an evaluator for “Is X a defect attractor given primitive set Y?”, at least for cases where X is a concept and Y is a language (or equivalent tool):

        If your primitive set (Y) makes X difficult to think about, then X is a defect attractor (in Y).

        There are, of course, other defect attractors (such as variable naming, mentioned above by Alex Smith) that are not concepts in the same sense – but I would argue that, at least in that case, variable naming is (all or most of) the primitive set, and the function of the code is the concept, so variants on the above are likely to identify defect attractors.
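
        As a rough Go analogue of the yield-return pattern discussed above (my own sketch, using a channel as the lazy sequence), both the extra machinery and the non-obvious control flow are easy to see:

        package main

        import "fmt"

        // naturals lazily yields 1, 2, 3, ...; the goroutine blocks until
        // the consumer asks for the next value.
        func naturals() <-chan int {
            ch := make(chan int)
            go func() {
                for i := 1; ; i++ {
                    ch <- i
                }
            }()
            return ch
        }

        func main() {
            ns := naturals()
            for i := 0; i < 4; i++ {
                fmt.Println(<-ns) // "take 4"
            }
            // The generator goroutine is still parked here; real code needs a
            // done channel or context to shut it down, which is exactly the
            // kind of control flow that never appears in the source text.
        }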

        1. >What else do you think caused the decline of labor unions?

          As a matter of economic history, unions normally prosper – and can collect above-market rents – only when labor is a limiting factor of production. This is now a rare situation, and will get more so.

          The post-war U.S. (roughly 1945-1980) makes an interesting case in point. The industrial infrastructure of everywhere else was either underdeveloped or had been bombed flat in WWII, or both. In the U.S., capital markets were deep and well-developed. U.S. factory automation had progressed enough to drive out most unskilled labor, but not skilled labor. For many categories of goods the U.S. was the only competent producer. Thus, U.S. skilled labor was a major production bottleneck for the entire world. This created a huge opportunity for rent-seeking.

          U.S. unions mistook this contingent circumstance for a law of nature and pushed the premium on U.S. labor ever higher. They were not alone; government action (in effect, bureaucratic rent-seeking) did at least as much (possibly more) to push labor costs up through regulation and taxes.

          Gradually countries outside the U.S. built up their capital-goods stock; the leaders in this movement were Germany and Japan, where investors had fewer sunk costs in inefficient plants and organization because those had all been smashed in the war. Shipping costs dropped. The result was inevitable; once foreign labor could compete effectively, the power of most U.S. unions collapsed.

          The effect might have been mitigated if the bureaucrats had been willing to give up their part of the rent. Of course, they were not.

        2. FWIW, although I find many ideas in functional programming elegant, I have never written substantial code in it. Consequently, not really grokking the gestalt of the whole thing, I don’t really know what it is like to debug code like that.

          The properties of these programs obviously make them more naturally parallelizable, so perhaps it is a solution; in a sense, perhaps it is a paradigm that fulfills the suggestions above about a different way of thinking about parallel code. However, I have this gnawing feeling that they are a serious pain in the ass to debug for the same reasons lazy evaluation is, and that the pseudo-functional features in many other languages (LINQ in C#, for example) look awesome but practically speaking are quite difficult to maintain.

          And I think that the fact that very few REALLY large software projects or even really widely used software projects use them as a language (despite them being pregnant with possibilities — especially in a multi-core world) is at least somewhat telling.

          But YMMV.

    3. The names i, j, and k come from math. If you’re doing math, they’re completely appropriate. But for the sake of search-and-replace, I use iii, jjj, and kkk.

      but yeah, when I’m iterating over something, I do this:

      for row in rows:

  23. > I think most of the VCS problems were more in the project management domain than in the software development domain.

    I agree. Teams I’ve worked in have used SVN, Hg, git and Fossil. Used by developers for developers’ purposes, we had very few problems. Yes, we had and still have design/code reviews – and unit testing, etc. We could manage this very well. Then management “got their hands dirty” and entangled us in layers on layers of process that only take time away from actual requirements/design/code reviews, designing, coding and testing.

    All these extra tasks increase the surface area for defects and rush us through designing, reviewing, coding and testing, further increasing the opportunities for defects.

    1. So basically you’re saying that process is a defect attractor and manglers are defect generators?

      1. Excessive process certainly is. My experience is that, when the proper process and procedure become too cumbersome, people dispense with it, and we go back to the kind of cowboy coding/sysadminning/etc that all the process is intended to shield against.

  24. I agree completely that classes and inheritance are defect attractors in general, and specifically when they are used for business domain objects. The basic ideas of OO were not bad for making a windowing system: everything is a window, then add functionality. But when you try to use OO for fundamental business objects, like Account or Customer, you end up in inheritance hell.

    1. How about “corporations that want a skilled labor force that will work for peanuts”?

      It’s high time developers started organizing labor unions.

      1. The whole idea of a labor union is that all of them cannot be fired, that their labor is needed. This is not true when they can be replaced with imported labor, but even more importantly, this is not true in the face of globalization where things can just be made elsewhere. What else do you think caused the decline of labor unions? If you could organize a truly global labor union no country would defect from, you could perhaps win this, but lacking that stuff is simply made wherever there isn’t a union.

        This is not a question of ideology. This is very simple fact that any profit-maximizing business goes wherever the expected quality work can be done the cheapest. Basically your only chance is something like world government.

      2. Because businesses that have unionized have consistently increased quality of their product.

        The answer isn’t unions, the answer is punishing organizations that promote crap code.

        Including companies with crap hardware/firmware.

        We *all* need to up our games and expect our colleagues to do the same.

        1. On the whole I don’t have the impression that software quality is worse than a decade ago. I don’t really see customers really fleeing from any product or company, nor any reason to. I actually find the AI-esque stuff that appeared in the last few years like Alexa quite impressive, except the spying part.

          If cheap imported labor can code the usual apps well enough now, first-worlders need to seek out new frontiers, new kinds of apps where their experience and intelligence pay a higher dividend. I sort of remember how impressive the AJAX Web 2.0 thing seemed when it first appeared.

          1. The really funny thing is the first time I tried to respond to this the browser *tab* locked up. Not the whole browser, just the tab.

            >On the whole I don’t have the impression that
            > software quality is worse than a decade ago.

            Originally I was going to agree with you.

            But no, I think it IS worse. Maybe not in actual “defects per line”, but in overall usability.

            The webmail interface in Office 365 is horrid. As in “fire them and put them on the terrorist watch list” horrid. I told the customer I was working for to use my regular email as O365 was utterly unusable–and that was on a Windows machine using whatever the stock Windows browser is.

            Microsoft’s RDP is STILL a festering pile of stupid, and it’s had the same stupid problems for well over 10 years.

            I’m fighting with Kickstart (Red Hat) on a project, and I’ve been fighting with that pile of pustules since the late 90s. It’s not gotten significantly better, and for something that’s been around that long, that IS worse.

            We can do better. We *should* do better, and we should demand it from our vendors.

            But we’ve gotten used to crap code.

            > I don’t really see customers really fleeing from
            > any product or company, nor any reason to.

            The only reason to flee from something that you know the downsides of is *to* something that is better.

            If your average IT guy has 5 years of experience with Windows, and 0 with Linux, even if Linux was 10% better, why would you switch?

            > I actually find the AI-esque stuff that appeared in
            > the last few years like Alexa quite impressive,
            > except the spying part.

            Krakatoa was pretty impressive too.

  25. > So basically you’re saying that process is a defect attractor and manglers are defect generators?

    I don’t know what the poster I replied to meant, but I mean “over worked” process.

    In my department, we had process before – process by engineers, for engineers. Outside auditors even gave us a CMM Level of 4.

    Now, we have process by managers. (I will give my direct manager credit for pushing back, even though that ultimately failed.) The new processes are bloated. Our CMM Level has been downgraded to 3. And management audits are focused on “compliance with the process” instead of “does the process comply with the needs of product development.”

    Despite this, we still turn out products that our customers are very happy with – on time and on budget. But, managers above the level of my manager focus almost only on *their* process. (They tell us “Great work, but still lacking in process compliance”)

    To quote many engineers in the past, “The more you overwork the system, the easier it is to stop up the works.”

    1. And how is that any different than something like manual memory allocation?

      Some engineers, in some contexts can do it quite well. But someone like me? Oh Gawd, you don’t want *me* doing the embedded firmware on your pacemaker.

      Processes, some people can build good solid ones. Others, well, it’s like C++. This feature exists, therefore I must USE it.

      I am currently working for an organization that does “agile” development. We deploy 100 percent of our code to production *at the end of every project*. We just plan our work in 2 week sprints.

      Sometimes.

  26. In database-related work, reporting, accounting, stuff like that, it seems most defects come from the data filtering conditions (e.g. WHERE condition) being too loose. Not too strict. We want to make a sales report but we forget to filter the movements to only sales type, not purchases or others. We forget to filter that it should exclude intercompany sales.

    The reason behind it is that tables generally correspond to higher levels of conceptual entities (“movements”) and not the lower ones (“movements properly understood as sales”).

    In SQL and similar technologies the solution is to define the lower-level conceptual entities (i.e., filtered tables) as views and beat it into everybody to use these, not the source tables. It doesn’t work in every technology used in this field, though.

    When this is done, developers get their queries right on the first try, because they need to add only the conditions that are relevant to that particular case.

    I suspect the “damn I forgot to add one more condition” must be also a frequent defect in the non-database world as well.

    1. >I suspect the “damn I forgot to add one more condition” must be also a frequent defect in the non-database world as well.

      In any kind of data reduction, yes. Also in automated translation of anything; the most common class of defect is probably leaving the precondition for a rule too loose.

  27. I need to build a class that pushes events into a queue, which events eventually result in a call to a routine defined later than the original class

    How do I do that without inheritance and virtual methods? Or something considerably more dangerous such as void type pointers?

    1. >I need to build a class that pushes events into a queue, which events eventually result in a call to a routine defined later than the original class

      Result how? Is each event supposed to have its own hook call? When is the hook supposed to fire?

    2. Virtual methods and inheritance are another way of saying ‘interface’. There is nothing wrong at all with doing polymorphism with interfaces. It’s less of a ‘defect attractor’ than handling raw C function pointers.

  28. Never use ntoh* or hton*. Never. Always unpack a stream of bytes into the machine’s word. Like this:

    int i = buffer[0] * 256 + buffer[1];

    Never like this:
    int i = (int)ntohs(buffer);

    That’s not a defect attractor. It *IS* a defect.

    1. All right, I’ll byte. After all, isn’t ntohs pretty much that piece of code, on architectures where it’s needed, and a no-op on architectures where it’s not?

      1. No. ntohs is 100% defect attractor. It encourages people to treat one data type (an array of bytes) as a different type (a short integer). This is fail, fail, fail when you’re trying to write portable code. Whereas, constructing the int like I did above will *always* work, and simple peephole optimization will turn it into the integer load or byte swap as needed.

      2. Eric would concur if he had the chance, given his work on gpsd; before I fixed the code there, it didn’t work on ARM processors because of the use of ntohs and trying to second-guess the compiler. The compiler is better at assembly language than you are.

      3. I think the problem is alignment and casting (presumably) a byte pointer to a wider type, although the example is incomplete since calling ntohs on a pointer to (presumably) unsigned char would be rather unusual.

        But if you are processing structs travelling between systems I don’t see the problem with copying a byte stream to a struct and then converting the relevant members in place.

        1. That only works if your structs are fixed width, and if you only have to byteswap integers. As soon as you have a variable-length string, or have to convert between 1’s and 2’s complement negatives, or deal with floats, you’re going to have to implement byte stream processing the right way anyway.
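
          For what it’s worth, a rough Go sketch of the same “unpack the byte stream field by field” approach (the record layout here is invented for illustration); encoding/binary does the shifting, so host byte order never enters into it:

          package main

          import (
              "encoding/binary"
              "fmt"
          )

          // parseHeader decodes a made-up wire record: a 2-byte big-endian
          // length followed by a 4-byte big-endian sequence number.
          func parseHeader(buf []byte) (length uint16, seq uint32) {
              length = binary.BigEndian.Uint16(buf[0:2])
              seq = binary.BigEndian.Uint32(buf[2:6])
              return
          }

          func main() {
              wire := []byte{0x00, 0x2a, 0x00, 0x00, 0x01, 0x00}
              l, s := parseHeader(wire)
              fmt.Println(l, s) // 42 256
          }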

  29. > Fossil isn’t designed for bazaar development. It’s designed for cathedral development.

    In what ways does Fossil not support bazaar development?

    I’ve only worked on small team projects, though I’ve read that one of the BSD distributions uses Fossil.

    Where I work, we use Fossil “under the radar” in a peer-to-peer “web”, pushing our commits to team members without use of a central server. At one time, we did use git in the same kind of setup, but we find Fossil both more reliable and easier to set up and use.

  30. Side note: It is very annoying that one of the top results on a Google search for “Eric Raymond” is an article entitled “The decline and fall of Eric S. Raymond.” The author is basically criticizing you for an article that was critical of SJWs 3 years ago. But how exactly have you “declined” or “fallen”? Money? Status in the hacker community? Happiness? I did not see any evidence for any of the above.

    1. >But how exactly have you “declined” or “fallen”?

      If it makes you feel better, I get treated like a culture hero and have to do a bit of fending off of fanboys when I go to conferences – that hasn’t changed. Was reminded of this at South East Linux Fest this last weekend. So don’t be too upset on my behalf; reputationally the SJWs can scream all they like among themselves but haven’t left a mark on me.

      You might consider asking people you know to contribute to my Patreon. I wouldn’t say the amount of money I make is decreasing, but I still have to pause and think before I can take my wife out for a nice dinner.

    2. I think the only decline is in the esteem in which the author of that article holds Eric. That’s certainly what I read the article as arguing for, at any rate: he’s saying Eric’s no longer worthy of approval or emulation because he doesn’t bow down to the SJW orthodoxy.

      I’d say that view is not shared among the regulars here, with the possible exception of Jeff Read.

      1. ESR: Will do.

        Jay: That’s exactly what I thought. That sort of raises the question of why anyone would care what the author personally thinks.

  31. Well, looking at the recent GnuPG fiasco, parsing the text output of a command-line tool instead of using a programming library is a defect attractor, and so is the whole of GnuPG, due to its not having an actual library one can compile into tools…

      1. GnuPG outputs stuff on fd 1 (STDOUT), fd 2 (STDERR), and fd 3 (just fd 3, status text).
        Some programs were squishing fd 3 into STDERR and then picking the status lines back out based on their prefix. Like Enigmail. I don’t know exactly how, since I haven’t read deeply into it, but if you put in a comment that looks like a valid status line and enable verbose mode in the config, your app will happily parse the comment as a valid status line. And it will think that an email is signed correctly when it isn’t. Verbose mode is a standard recommendation for “better security” with GPG.

        This is a very surface reading of the situation.

    1. Ad-hoc “Unix philosophy” integration (vs. well-specified APIs) is definitely a defect attractor. This is why best practice on Linux is to favor system services that communicate over dbus, rather than shelling out to a command and piping the output.

      1. >Ad-hoc “Unix philosophy” integration (vs. well-specified APIs) is definitely a defect attractor. This is why best practice on Linux is to favor system services that communicate over dbus, rather than shelling out to a command and piping the output.

        How many ways is this claim not merely wrong but crazed? They are uncountable. There are times I think Jeff is just trolling us all and this is one of them.

        To pick on only the least of the issues, DBUS is nowhere near being the be-all and end-all of structured IPC mechanisms. It is obscure, heavyweight, and inflexible. GPSD has a DBUS transport for clients but there are no known uses of it.

        JSON over sockets is probably more common now.
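
        For illustration, a minimal Go sketch of that style (the Report type is invented and far simpler than any real GPSD message; net.Pipe stands in for a Unix-domain or TCP socket):

        package main

        import (
            "encoding/json"
            "fmt"
            "net"
        )

        // Report is a toy message type loosely echoing the shape of a GPSD
        // TPV report; a real service would document a much richer schema.
        type Report struct {
            Class string  `json:"class"`
            Lat   float64 `json:"lat"`
            Lon   float64 `json:"lon"`
        }

        func main() {
            client, server := net.Pipe()

            // The "daemon" side writes one JSON object and hangs up.
            go func() {
                json.NewEncoder(server).Encode(Report{Class: "TPV", Lat: 40.0, Lon: -75.0})
                server.Close()
            }()

            var r Report
            if err := json.NewDecoder(client).Decode(&r); err != nil {
                fmt.Println("decode error:", err)
                return
            }
            fmt.Printf("%+v\n", r)
        }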

        1. Of course dbus isn’t the be-all and end-all of IPC. For one thing, it copies way too much; ideally IPC should be zero-copy. But it has the advantages that it’s here now, it’s well supported by frickin’ everybody, and it supports communication modes, like broadcast, that Unix-domain sockets don’t offer on their own.

          Migrating system services that used to be user-visible mainly through command line tools into daemons that expose their functionality through strongly-typed, well-defined dbus APIs is one major goal of systemd. And look what a failure that was — only every distro that matters uses it.

      2. The problem is not so much in parsing text as in GPG not being made with parsing of its output in mind. Or a good interface in general, apparently.

        1. >The problem is not so much in parsing text as in GPG not being made with parsing of its output in mind. Or a good interface in general, apparently.

          Yes, the GPG UI is deeply horrible. A prime example of How Not To Do It.

  32. On another note, anyone who is using C or C++ for anything other than programming the most low-level bits of an OS is asking for trouble. Unless you are working for a defence contractor with more money than sense (and are looking to pad your work hours by doing things like manual memory manipulation or manually passing and checking the size of arrays), in which case carry on…

    1. > On another note, anyone who is using C or C++ for anything
      > other than programming the most low-level bits of an OS is
      > asking for trouble.

      Where I work, we use C for the low-level parts of the code for electronic control modules. The microcontrollers we use are more powerful and more specialized than, for example, an AVR micro (like in the Arduino). With the exception of a few statically allocated variables, all our variables are function local. No use of malloc or similar features. And we have no other need or use for pointers. The only arrays we use are for buffering inter-module messaging, so we only need to check the length of received messages before we copy them out of the hardware buffer. The buffer type is a union of an array with each of the message structures, so access to the message fields is handled by the compiler. Likewise, the fields in hardware registers are accessed by structures. We don’t use goto. Above these low-level parts, we generate C code from UML diagrams. Then all this C code is reviewed and put through QAC and Polyspace analysis. In the end, the vast majority of defects that do occur are either misinterpreting the requirements or misinterpreting the interface to the hardware.

  33. So, what are some defect repulsors?

    (And why isn’t WordPress letting me subscribe to new posts in these threads anymore?)

      1. >Are there, perhaps, some universals?
        The number 1 universal is reviews. Review code. Review procedures. And, especially, review best practices.
        When I say “review procedures and best practices”, I mean to review the actual procedures and practices to fit your team’s needs. If something isn’t being followed, figure out why, then change the *procedure* or *practice* to fit the needs of your team. And by “team”, I don’t include the “process geeks”. (I am a recovering process geek. I have seen the damage my process sins wrought upon the developers and I repent.)

        1. >The number 1 universal is reviews.

          I’m going to speak heresy and say “no”.

          The NTPsec team has a defect rate so low that aerospace program managers would murder their relatives to match it. And we don’t use reviews as a routine tool. Same story on GPSD, which was in many ways the predecessor of NTPsec (they share senior people).

          Yes, our devs do occasionally ask each other to review tricky changes. Mostly, though, we rely on other techniques such as CI and unit tests to prevent the sorts of problems that code reviews find after the fact.

          Having said this, I note that the GPSD/NTPsec devteam is exceptionally skilled and (excuse the immodesty, but this is both true and important) well led; I make no warranty that our process would work everywhere.

          1. > Mostly, though, we rely on other techniques such as
            > CI and unit tests to prevent the sorts of problems
            > that code reviews find after the fact.
            After the fact?
            We perform code reviews daily, before any pending changes are merged into “trunk” or an update branch. We also run static analysis (QAC and Polyspace) on the code before the review and again after merging.
            If there are any merge issues, we review the issues and the proposed corrections, then re-do the merge.
            Once merged, a form of CI we call “software in the loop” automatically runs to test the functional logic. (Our unit tests are performed using actual hardware.) Also, we re-run the static analysis.
            In parallel, we perform the integration tests on the merged code using the target hardware.
            By the time the code reaches CI and integration tests, we very rarely find more problems. Our daily reviews are very effective.

          2. The NTPsec team has a defect rate so low that aerospace program managers would murder their relatives to match it. And we don’t use reviews as a routine tool.

            Well, okay, I think this only implies that reviews are not the #1 universal defect repulsor. They might still be in the top ten, though.

            And, “be exceptionally skilled and well led” is a defect repulsor, but I’m not sure how useful it is to state that. :-)

            To speak more at the general principle: “be excellent” is a trivial defect repulsor, just as “slack off and screw up all the time” is a trivial attractor. I think the aim here is principles that programmers could habitually follow without fail, whether it’s “scrum every morning” or “use inheritance whenever a class appears to suggest it’s possible” or “write a comment saying what a complex subroutine expects as input”, and to categorize those as either attractors or repulsors.

            1. >Well, okay, I think this only implies that reviews are
              > not the #1 universal defect repulsor.
              Even if code reviews are not #1, reviewing how well your practices (best or not) and processes are serving your team’s needs should be #1. As a recovering process geek, I used to believe that “if they just followed the process, everything would be awesome”. I learned the hard way that I was wrong. If the team isn’t following the process, there’s a reason. Find out why. Ask them what would work better. And then ask another team what would work better for them, because process isn’t “one size fits all”. I know it’s a huge pain. And when today’s process geeks complain, I remind them “I did your job, I know it’s a huge pain. But I did it because the job is to serve the needs of the developers, not make them serve your ideal process.” Last time I had to remind them, their manager – and my former manager – said “She’s right. Suck it up and quit your whining.”

          3. > I’m going to speak heresy and say “no”.

            I’m pretty sure you’re right. I see the number one defect repulsor as what I’ll call “conscious awareness”.

            An exceptionally skilled and well led team is already aware of likely 90+% of defect attractors that would affect the project, and would have tools in place to catch those as well as other unanticipated failures (or would put them in place in short order when starting work on a new project). The remaining problems missed by the tooling usually would occur when making those tricky changes, where having more eyes helps (and they know when to ask for that help).

            Well-led teams in general would tend toward this model, though obviously if most members are less skilled, preemptive (in e.g. git, pre-merge) code reviews for all changes are a good way to build this conscious awareness into the process, as well as teach the concept (so long as they don’t become dogmatic checklist-following wastes of time, which is the usual failure mode with poor leadership that wants to “do it right”).

            The common corporate failure mode for the above is “We don’t have time/money to do that.”…which, amazingly enough, always results in wasting more time/money down the road to fix the problems introduced (or the company goes out of business).

            One of the big problems I’ve seen is that everyone talks about “best practices”, but no one comes out and says why/how they arrived at the practice under discussion, even though the unspoken goal is to $DO_BETTER*. The only unvarying best practice I know of is to always look at what you’ve done and try to find a way to improve (speed, quality, etc.) next time; Scrum’s sprint retrospectives extend this to teams.

            * $DO_BETTER is usually one of: save time, save money, improve quality, provide better estimates, track more things so management can “plan better”.

  34. Thank you for the CS lesson, Eric!
    My favorites:
    “Special, corner, and edge cases are defect attractors.”
    “Clarity of language promotes clarity of thought”
