Brute force beats premature optimization

I made a really common and insidious programming mistake recently. I’m going to explain it in detail because every programmer in the world needs the reminder not to do this, and I hope confessing that even “ESR” falls into such a trap will make the less experienced properly wary of it.

Our sutra for today expounds on the sayings of the masters Donald Knuth and Ken Thompson, who in their wisdom have observed “Premature optimization is the root of all evil” and “When in doubt, use brute force.”

My main side project recently has been SRC, a simple version-control system for small projects. One of the constraints I was thinking about when I designed SRC was that the status command – the one that gives you a display saying which files have been modified since they were checked in – needs to be really fast.

The reason it needs to be fast is that front-ends for CLIs like SRC’s tend to rely on it heavily and call it often. This is, in particular, true of the VC front end in Emacs (which, as it happens, is also my code) – if the status command is slow, VC will be laggy.

It turns out that all file status checks except one only have to look at inodes (existence/nonexistence, file size) and so are as fast as you can get if you’re going to hit the filesystem at all. The exception is modified/unmodified status – whether your workfile has been edited since it was checked out.

My original plan was just to do a stat(2) on both the master and the workfile and compare modification dates. That too would only touch the i-node and be fast, but there are a couple of problems with it.
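In Python that plan is only a couple of lines; a minimal sketch (the function name is mine, not SRC’s actual API):

```python
import os

def modified_by_mtime(master, workfile):
    # Compare i-node modification times; no file content is read,
    # so this is about as cheap as hitting the filesystem gets.
    return os.stat(workfile).st_mtime > os.stat(master).st_mtime
```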

One is that stat(2) times only have 1-second resolution, so a workfile could look unmodified to “src status” even if it had been changed within a second of checkout. That’s not good.

The other problem is that, having gotten used to git and to build systems like waf and scons that evaluate “modified” based on content rather than last-write-time, I’ve gotten to really like that feature and wanted it in SRC.

OK, so the naive brute-force way to do this would be to read out the last stored version from RCS and compare it byte-by-byte to the workfile. That’s easy to do in Python, where one read call can pull the whole file into memory as a string.
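A minimal sketch of that naive check, assuming a small helper that pipes the tip revision out of RCS (co -p will do that):

```python
import subprocess

def last_stored_version(master):
    # "co -p" writes the checked-out tip revision to stdout without
    # touching the workfile; "-q" suppresses the chatter.
    return subprocess.check_output(["co", "-q", "-p", master])

def modified_brute_force(master, workfile):
    # Stupidest possible check: whole workfile vs. whole tip revision.
    with open(workfile, "rb") as fp:
        return fp.read() != last_stored_version(master)
```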

I didn’t do that, which was probably my first wrong turn. Because of recent experience with repository conversions, I was mentally fixated on very large test loads – my gut told me to avoid the overhead of reading out the last stored version from RCS every time I wanted to check status, and instead do the obvious trick with a content hash. That is, keep a hash of the last stored version, hash the workfile on each status check, and compare the two.
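Schematically, something like this (digest choice and storage details are illustrative, not what SRC actually shipped):

```python
import hashlib

def content_hash(data):
    # Any stable digest will do for change detection; SHA-1 is
    # just an example.
    return hashlib.sha1(data).hexdigest()

def looks_modified(workfile, stored_hash):
    # stored_hash is the digest of the content at last checkin.
    with open(workfile, "rb") as fp:
        return content_hash(fp.read()) != stored_hash
```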

Have you already spotted my error here? I didn’t measure. Before time-optimizing, I should have coded the naive version, then timed a bunch of status checks to see if there was any noticeable wall-time lag.

I say “probably” my first wrong turn because I quickly coded a hash-based implementation that worked without fuss, so I got away with it for a while. But…

…the way I did it was inelegant, and that bothered me. At that time I couldn’t think of a reasonable way to store the hash in the RCS master, so I dropped it outboard in a stamp file. That is, for each RCS master foo,v there was a parallel foo.srcstamp that existed just to hold the content hash from the last checkin.
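Roughly like so; the stamp-file format shown here is a guess at the scheme, not SRC’s actual code:

```python
def stamp_path(master):
    # foo,v -> foo.srcstamp, per the parallel-file scheme above.
    base = master[:-2] if master.endswith(",v") else master
    return base + ".srcstamp"

def read_stamp(master):
    # Return the stored checkin hash, or None if no stamp exists yet.
    try:
        with open(stamp_path(master)) as fp:
            return fp.read().strip()
    except FileNotFoundError:
        return None

def write_stamp(master, digest):
    with open(stamp_path(master), "w") as fp:
        fp.write(digest + "\n")
```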

I didn’t like this. I wanted each unit of history to correspond to one unit of storage, preferably to an eyeball-friendly flat file. If you do not understand why I wanted this, sigh…go slam your noggin against a copy of The Art of UNIX Programming until you achieve enlightenment.

Then I wrote an SCCS back end – and had the kludgy idea that I could recruit the description field in an SCCS master as a key-value store to simulate the named symbols that RCS/SRC has but SCCS doesn’t.

I still haven’t done that. But I wrote trial implementations in the RCS and SCCS back ends that gave me that general key-value store – RCS also has a description field that can be hijacked. Tested them and they worked.
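The encoding needn’t be anything fancier than key=value lines riding in the description text; a sketch of the idea (the real format isn’t shown in this post):

```python
def kv_encode(pairs):
    # Flatten a dict into the description field as key=value lines.
    return "".join("%s=%s\n" % (k, v) for k, v in sorted(pairs.items()))

def kv_decode(description):
    # Inverse: pull key=value lines back out, ignoring anything else.
    pairs = {}
    for line in description.splitlines():
        key, sep, value = line.partition("=")
        if sep:
            pairs[key] = value
    return pairs
```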

The inevitable occurred. I thought: “Hey, why don’t I put the checkin hash in the key-value store, and get rid of those ugly stamp files?”

So I coded it, and it passed my regression-test suite – though the function where all the hash-check magic was taking place, modified(), seemed ugly and unpleasantly complex.

I shipped this as 1.8 – and promptly got a report that status checking was broken. A few minutes of repeating the test sequence I’d been given with increased instrumentation established that the problem was somewhere in the logic of modified().

I stared at modified() for a while, tried to fix it, and failed. Then I had a rush of sense to the head, replaced the guts of modified() with the stupidest possible brute-force content comparison, and shipped that as 1.9.

Do you see what happened here? I got mugged by complexity creep – ended up with code I couldn’t mentally model properly, afflicted with tricky edge cases, and it broke. My own damn fault, because I optimized before I measured and one thing led to another.

Don’t do this. Well, not unless you really want Master Foo to materialize and whack you upside the head with his shippei (you should be so lucky).

Now I’m going to do what I should have done in the first place – not write the fscking optimization until someone tells me the stupid-simple method actually causes a performance problem. And, you know, it may not ever. Seek-induced latency is not the scourge it once was; SSDs are wonderful things that way.

Furthermore, the thing I should have remembered is that RCS optimizes for retrieving recent versions by storing deltas backwards from a whole-text tip version, rather than forwards from the initial version as SCCS does. So under RCS a tip checkout is mostly a copy from a span of the master file rather than the longest possible sequence of change-delta integrations, and probably reasonably fast.

The only scar tissue left is that I’ve still got about 40 lines of now-unused key-value store implementation in the code – sitting quietly, doing nothing. I’m sure it’ll come in useful someday, especially if someone turns up with an actual need for named symbols under the SCCS back end.

I shall regard it as a test of my discipline and detachment, not to do anything with it until I actually need to.

Thus, we gain merit by not coding.

55 comments

  1. Do nothing… if it’s the right sort of nothing. — Doctor Who, Warriors’ Gate part 4

  2. Now, take those 40 lines and rip them out! If they’re not doing something, they shouldn’t be there. Simplicity man, simplicity.

    1. >Now, take those 40 lines and rip them out! If they’re not doing something, they shouldn’t be there. Simplicity man, simplicity.

      I would…but the odds that I’ll need them someday seem significant, and the integration with the rest of the code is just tricky enough for me not to want to have to re-create it.

  3. Think of the unused but potentially valuable code (that you wouldn’t want to recreate) as aji.

    1. >Think of the unused but potentially valuable code (that you wouldn’t want to recreate) as aji.

      That…is a very good analogy. Go you. (Heh.)

  4. Forgive me for I didn’t read all of what you wrote, so quickly did I realize I was over my head. I have a simple burning question: what part of this is intuitive? I mean, how does one arrive at such a place as this where they talk of front end CLI and 1 second resolution?

    I’m a woodworker. I’ve made mighty fine furniture. I’ve designed outdoor furniture which requires neither fasteners nor adhesives. I’ve also rebuilt many an internal combustion engine, as have many of my generation. Yet many of my same generation are like you, in that they have quickly adapted to the subject technology. I remain befuddled how one becomes so flooded with desire to dabble in electrons and bytes. So, perhaps you will explain to the left behind the captivation of what is VC and SRC and the like. It’s not like I am looking to begin the sojourn towards learning of software and shit but that I am, and remain, on the other side of the glass. Simply put, I do not see the attraction. You could say, well someone has to otherwise no one would have a computer. In that I agree. Yet, here it is like I dropped into a secret enclave which involves a secret handshake and knowing glance. Ok, I get that I am the outsider. I really don’t want to intrude. But what is it that makes you guys go? This especially since so much of the computer world seems so counter to what I consider the way of the world. O sure, what you have has become the new way, but only because of sheer numbers. Yet I see even within your ranks a wide diaspora of thought and eloquence. I ask more than what makes you tick but why. Engines I understand. Wood, metal, physics I understand. But while I am the beneficiary of those who labor to create computer hardware and software, I still wonder why. Why the devotion as if the creation is living, breathing?

    BTW: as a clue to my age, my way of thinking; my last attempt to dabble in the architecture was with punch cards and off-site mainframes. O the horrid stories I could tell of that time. Even then, but especially since, it is so very counterintuitive how these infernal machines work…or not work.

    Illustriously yours,

    1. >I have a simple burning question: what part of this is intuitive?

      I’ve done just enough woodwork myself to know that the satisfaction of having crafted an elegant work of software is a great deal like the satisfaction of having made a fine piece of carpentry.

      You allude to having designed furniture that uses no fasteners or adhesives. This tells me that you feel strongly about respecting your materials, allowing form and strength to arise naturally from the properties of the wood and the geometry of joints.

      I totally get this; any software engineer worth the powder to blow him out of the water would. A clever, simple algorithm deep inside a piece of code is like a well-made joint hidden inside a piece of furniture. Maybe no one else will ever see it, but you know it’s there. You know that you turned the interplay between functional constraint and materials into a kind of art.

      When you’ve done carpentry for a while, thinking about a design problem causes your brain to start solving those 3-D puzzles effortlessly. You have an expert sense for how pieces can fit together, the capabilities and limits of your tools. Software is like this too. In the mind of a skilled software engineer, algorithms and data structures from the mental kit he has absorbed over many years come together naturally.

      Yes, a lot of it is intuitive – not when you start, but by the time you’ve gotten good. In fact, how much of the work is intuitive correlates closely with how skilled you are. You are not going to tell me that isn’t true of carpentry, are you?

      You look at wood and tools and a functional requirement and see possibility, and there is pleasure for you in bringing that possibility into being. We are the same way; the medium and the tools are intangible but the satisfaction is no less real.

      Carpenters and software engineers are both makers – poets of form. You don’t have to reach far to understand us – just think about what it’s like to be you when you are immersed in a job.

  5. > I would…but the odds that I’ll need them someday seem significant, and the integration with the rest of the code is just tricky enough for me not to want to have to re-create it.

    Isn’t that what you use version control for, to be able to rip out unused code? You can always go back in time.

    Admittedly, keeping it there will prevent textual level integration, but if you don’t test it you wouldn’t know if your “integration with the rest of code” still works after new changes…

    1. >Isn’t that what you use version control for, to be able to rip out unused code? You can always go back in time.

      In theory. In practice, the joints between the implementation and the rest of the code seem quite likely to get stepped on in future changes, making a simple revert of the removal into a tricky bit of hand-patching that is likely to spawn bugs.

      Greg Lyantz is right – that code is aji. Best to leave it be and collect the gains later.

  6. Do you have some good sutras or principles or anything for debugging in general?

    I pulled a classic rookie mistake one of these days, had a WhateverItIs variable, forgot about it a few months later, added a WhateverItIz variable in a larger scope, changed all calls of the first one to the second one and simply overlooked the one letter difference in one case….

    The funny part is it would be easy if I did not panic and try everything randomly instead of thinking. So I suppose the Zen of debugging would start with deep breathing and a few OMs. Then I suppose I could make checklists of what I typically tend to screw up and go through them, but I haven’t made this mistake for ages and would not have done it if it was not a rarely used tool to be made and used in a quick hurry. Not some kind of a product, just one fire-and-forget database fixer. I could use defensive practices to avoid it. Or I could not work in a corporate walled-garden “4GL” environment where the use of automated tools to catch mistakes like this is limited. But what would be some kind of a good debugging checklist? Like 1. Don’t panic, breathe 2. do X and so on?

    Do hackers use logs or classic step-into, step-over debuggers? Do hackers use tools that for example comb the source and report variables that have a suspiciously small Levenshtein distance? Or just try to use good practices to avoid stuff like this and live with it when it still happens?
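    A crude version of that last tool is only a few lines of Python with difflib; a sketch, purely illustrative:

    ```python
    import difflib
    import re

    def near_misses(source, threshold=0.9):
        # Collect identifiers, then flag pairs that are almost -- but
        # not quite -- identical, like WhateverItIs vs. WhateverItIz.
        names = sorted(set(re.findall(r"[A-Za-z_]\w{3,}", source)))
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                if difflib.SequenceMatcher(None, a, b).ratio() >= threshold:
                    print("suspiciously similar:", a, b)
    ```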

  7. Getting rid of unnecessary code is important. But so is avoiding unnecessary work. Removing code is work, so I can understand why esr wants to give it some time before going through the trouble.

  8. Nb. isn’t checking file content needed only if you are inside 1 second of timestamp resolution, and the file length is the same? Or did you get rid of the whole stat optimization, and just always brute-force compare contents now?

    Also, benchmarks are needed.

    1. >Or did you get rid of the whole stat optimization, and just always brute-force compare contents now?

      I did. There were a couple of wrinkles I glossed over in the OP.

      One is that the master-file timestamp does not reliably coincide with the time of last modification of the workfile. Suppose for example that you checked out a file, then added a name tag to a revision. That second operation alters the master and touches its mod date.

      Now, suppose you notice that the workfile and master timestamps differ. It might still be the case that the file should be considered unmodified; you might have saved a modified version, then reverted that change in your editor, leaving the original content in place with a later timestamp.

    1. >http://www.debuggingrules.com/debuggingrules.jpg

      Those are good. I would amend the last to: “If you didn’t test it, it isn’t fixed.”

  9. For those of us less versed in the verbiage of Master Foo, can someone define “aji”? Google is returning nothing useful.

  10. >> >Or did you get rid of the whole stat optimization, and just always brute-force compare contents now?

    > I did. There were a couple of wrinkles I glossed over in the OP.

    IIRC git uses stat optimization for status, but diff always gives the real results. But that was with many files; with one file, as in the SRC case, the stat-cache optimization might not be needed.

    Also, did you do benchmarks, or at least check that brute-force gives answers in sub-second (i.e. unnoticeable) time?

    1. >Also, did you do benchmarks, or at least check that brute-force gives answers in sub-second (i.e. unnoticeable) time?

      It does, so far. I can’t see any noticeable lag. Of course my examples are small.

  11. “For those of us less versed in the verbiage of Master Foo, can someone define “aji”? Google is returning nothing useful.”

    It’s a term from the game Go. Google “aji go” and you’ll get relevant hits. If you just google “aji”, you’ll get mostly references to the sauce.

  12. >I would…but the odds that I’ll need them someday seem significant, and the integration with the rest of the code is just tricky enough for me not to want to have to re-create it.

    What do you evaluate the odds to be that the code base it integrates with will look substantially the same as it does now when/if you need that code again?

    I personally find that when I leave code lying around unused like that, life (and the code base) has moved on before I have a need to care again, so I have to rewrite it in essence anyway. Probably not as big an issue for SRC, but I definitely feel the impulse to just press delete and have done with it.

  13. Nb. It might not be the case for SRC, but in OpenSSL unused code led to security problems…

  14. Rick,

    ESR described it perfectly. When you are working with bits and bytes, you’re building a model of what you want in your mind and creating its representation in the real world. It’s a pure act of creation.

    If you’re a master craftsman, you take pride in what you build and how you build it. If it’s truly elegant, you’ll know that other good programmers will recognize it as such.

    Heck, I’d say it can even give you a high.

    Having said that, I’m always in awe of craftsmen who can create real physical things with their hands.

    ESR: Next you’ll be telling us you don’t actually walk on water. Way to harsh my illusions man!

  15. A postmature optimization story:

    I recently had a program that was taking ~30 seconds to do a relatively short task. I ignored it at first. After debugging its actual results I pointed (I think) cProfile at it. It told me the program was spending 80+% of its time in the constructor for an object type X. (X was a bitstring, but that’s not relevant to this story)

    There were a number of functions in the program that accepted as input “an X, or anything that can be converted to one”, and simply created one using their input and let the constructor figure out what was compatible before continuing. The approach was an intentional application of brute force. One such function was running in an inner loop and eating huge amounts of time — and 99% of the time it was starting out by reconstructing an X that didn’t need to be converted, because its own caller had already performed the conversion.

    I replaced the constructions with a small function that checked for the desired type first and converted only if necessary. Program run time went from 30 seconds to 2. Apparently those copies were vastly more expensive than I thought.

    I would *not* have guessed at the cause of this. Moral: Profilers are awesome.
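    The fix reduces to a one-line guard; a minimal sketch (the helper name is hypothetical):

    ```python
    def ensure(cls, value):
        # Skip the expensive constructor when the value is already the
        # right type; convert only when a conversion is actually needed.
        return value if isinstance(value, cls) else cls(value)

    # Inner-loop call sites change from X(arg) to ensure(X, arg).
    ```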

  16. You see this principle a lot in the demoscene, too. While a lot of demoeffects relied on fancy interrupt tricks and precise timing, many were a simple question of hand-unrolling loops and pre-baking graphics for speed.

    Recently I’ve been on a kick of digging up old DOS games and figuring out how they worked, and why PC gaming in general was considered just plain not as good as gaming on contemporary consoles like the Nintendo Entertainment System.

    One of the reasons was the planar layout of memory in adapters like the EGA. It was difficult to blit graphics in arbitrary locations on the EGA because memory was laid out in bitplanes: first would come all the least significant bits of every pixel on the screen, then the next least significant bits, etc. So without a whole bunch of bit-shift and logical operations, you couldn’t draw a graphic on EGA except if it were aligned horizontally to an 8-pixel boundary. The NES had a planar graphics memory layout too, but it also had hardware sprites — small graphics that the video chip could be programmed to draw at arbitrary pixel locations, yielding movable characters like Mario, etc. But it was difficult to draw things at arbitrary pixel locations on the EGA: it could be done, just not fast.

    Most action-game creators on the EGA took the easy way out, and simply drew all sprites — players, enemies, etc. — aligned to 8 pixels. I noticed this both in the original Duke Nukem and the DOS port of Castlevania (the latter of which I didn’t even know existed until recently). This worked and could be reasonably fast, but it looked janky. By contrast, for Commander Keen (actually for a planned Mario port that Nintendo nixed), John Carmack wrote a game engine that had smooth scrolling and smooth sprite animation, very NES-like graphics, and was fast. While the smooth scrolling is described on Wikipedia as making use of EGA registers to achieve pixel-level scrolling of a framebuffer larger than the actual display size, the secret to smooth sprite graphics eluded me until I saw a Commander Keen data format wiki with details.

    Carmack simply stored four copies of each sprite graphic in memory, each at a two-pixel offset relative to the last. The x-coordinate of the sprite’s intended location selected which copy to draw to the screen. These copies were pre-baked into the on-disk sprite data for the first Commander Keen, and calculated when the sprite data was loaded in subsequent games.
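    In outline the trick looks something like this (Python standing in for what was really pre-baked sprite data and assembly; details illustrative):

    ```python
    def prebake(sprite_rows):
        # Keep four copies of the sprite, each pre-shifted two more
        # pixels right, trading 4x memory for zero shifting at draw time.
        return {off: [[0] * off + row for row in sprite_rows]
                for off in (0, 2, 4, 6)}

    def draw_position(x, copies):
        # The low bits of x pick the pre-shifted copy; what remains is
        # a cheap blit aligned to an 8-pixel boundary.
        return x & ~7, copies[(x % 8) & ~1]
    ```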

    It was a massive brute-force hack, but a bloody fantastic one.

    Of course, very early into the 90s, the VGA would become standard and every kid with QBASIC would be able to go to mode 13h with its chunky 8bpp graphics and scribble on the framebuffer to their heart’s content. And Carmack’s EGA hack would be underappreciated by future generations. But how glorious it was for a few brief shining years!

  17. “The notes I handle no better than many pianists. But the pauses between the notes—ah, that is where the art resides.”

    –Artur Schnabel

  18. @Jeff

    >and why PC gaming in general was considered just plain not as good as gaming on contemporary consoles like the Nintendo Entertainment System

    Hm, I belonged to a subculture of snobs holding that the PC or Amiga (if you are into the demoscene, the Amiga era was really perhaps the most interesting period of it) is far better than consoles, because you get intelligent strategies like http://www.myabandonware.com/game/storm-across-europe-rx while on the console you get silly childish arcade stuff like Mario. I wasn’t even aware that a lot of people thought otherwise. Come to think of it, Amiga Power with the Ed Comments and all that snark was pretty good at creating a strong in-group feeling – and international spinoffs of the same style in various languages. Consoles weren’t even on my horizon. After all, it was easier to convince parents that they needed to buy a Commodore or PC so that we could learn “Computer stuff, you know, dad: the future!” and games are merely accidental. They understood rightly enough that consoles are just for entertainment without much educational value.

    But yes, to an Amigan, this EGA stuff looked like an Amiga game made by utter amateurs. VGA / SVGA changed that.

  19. TheDividualist,

    The NES actually had strategy games as well, Nobunaga’s Ambition being among the most famous.

    Oddly enough among PC enthusiasts the Amiga was mocked as a glorified game console. Indeed its system architecture was console-like, with hardware sprites and other features oriented towards generating impressive game graphics and sound. And Amigans derided the PC as nothing but a fast CPU, a glorified calculator and not a complete system because it didn’t have a chorus of custom chips harmonizing with DMA transfers and interrupts.

    Turns out a fast CPU and RAM can get you pretty far — farther than the Amiga could reach without constant NRE iterations on its custom hardware. And with the bankruptcy of Commodore it was doomed to be left in the dust. (but again, what a glorious run it had!) This cycle would be repeated again in the 3D graphics space, with SGI workstations featuring specialized polygon rasterizing and shading circuits giving way to PCs with Windows and general, programmable GPUs.

    There’s a harsh lesson here about openness and generality that not enough in the industry yet understand.

  20. I remember back in the early 90’s when we had a menu-driven 3D rendering program that ran on Silicon Graphics workstations. It ran reasonably fast given the hardware of the day and the rendering demands, but when we profiled it we found that it was spending a huge amount of time in its callback-testing loop.

    The solution was to add a very short wait using sginap(), equivalent to usleep() on other Unix systems, inside the loop. It made no noticeable difference to the responsiveness of the program, but dramatically reduced CPU usage. I would not have expected that result just from inspection.

  21. We have a saying in my office, about the first thing we typically try on a novel problem: the simplest thing that could possibly work.

  22. Moon v1.0 – Get three men to the moon, land two safely, let them look around for a while, get all three back safely with a pound of moon rocks. Don’t worry about any future missions for now.

  23. It is one of the Commandments of programming – NEVER, EVER optimize until it works and you have enough regression tests to verify it.
    Then refactor. Mainly to make things clean and hit the quality plateau (Programmer’s Stone). Do a regression test.

    Optimization is merely a special case of refactoring. Refactoring addresses every kind of junk: first, ugly code you can’t understand or that would just be clearer rewritten; text that astyle or indent doesn’t quite handle; dangling routines that should be inlined, or cookie-cutter repetition that should be subroutines. I could go on, but each starts with a problem.

    Resource waste is the next thing: memory, CPU cycles, disk accesses, code size. Some are tradeoffs – this is the art and instinct. There are two aspects – the worst case, where it outright fails (malloc returns NULL), and where it slows because of swaps, thrashing, etc. But here is where you either instrument the code or use profilers, then attack the elephants before the mice.

    The strange thing is that to hash, you need to read in the entire file anyway (probably into a cache buffer). Linux can mmap a file. The date is one piece of metadata, but the file size would be another. It might be useful if the RCS side had a slot for the hash of each version, but until/unless it does, it doesn’t make sense to keep computing and discarding.
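    A sketch of that cheap-checks-first cascade (assuming the tip text has already been checked out to a scratch file; not SRC’s actual code):

    ```python
    import filecmp
    import os

    def modified_cascade(tip_copy, workfile):
        # A size mismatch settles it without reading any content.
        if os.stat(tip_copy).st_size != os.stat(workfile).st_size:
            return True
        # Same size: fall back to a byte-by-byte comparison.
        return not filecmp.cmp(tip_copy, workfile, shallow=False)
    ```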

  24. @Mike E:

    “I charge $100 an hour for lessons….I also offer lessons for $50 an hour, but I don’t recommend them.”

    –Artur Schnabel

  25. >There’s a harsh lesson here about openness and generality that not enough in the industry yet understand.

    Yes – with the added lesson that the closed and specific one, while inefficient and ultimately losing, tends to be _emotionally_ more likable, which is kinda irrational, but people are people. Today’s Commodore is Apple.

    What is the reason for the irrational emotional appeal of the closed and specific? I think because it is community-forming: knowing that a lot of people use exactly the same hardware, or software, or anything that you do. So people can discuss things, or write articles, while knowing everybody has the same exact stuff as they do. This makes communication tremendously easier, often facilitates economies of scale, and forms identities, too. It’s like how every US motorcycle club is Harley-based. It’s all Schelling points.

    One way to enjoy the benefits of both is to try to form a Schelling point of something not necessarily very closed but still specific built on top of something very open and generic. I think this happened when Ubuntu and later Android stepped on the scene.

    So I guess there are BOTH lessons to learn. Openness and generality brings tremendous rational advantages, but the human mind seeking Schelling points, ease of communication and community-forming likes to rally around something specific and closed from Commodore to Apple which is ideally built on top of something open and general. Kinda balancing both.

  26. @ Jeff Read

    Thanks Jeff for the backstory on Commander Keen. I played that game quite a bit back in the day, but didn’t know that Carmack was its creator.

  27. >http://www.debuggingrules.com/debuggingrules.jpg

    I’d change “Stop thinking and look” to “Stop looking and think.” Think about what you think the code does, what you want it to do, and how you would code it so it does that. This means that you understand the problem well enough and you will spot where the code differs from your expectations, for example if it takes care of cases you hadn’t thought of, or does the task in a round-about, overly complicated way. As a rule of thumb, the uglier the code, the less you should look at it. Otherwise you’ll be overwhelmed by the code’s complexity.

  28. >>http://www.debuggingrules.com/debuggingrules.jpg

    > I’d change “Stop thinking and look” to “Stop looking and think.”

    Note that there is a book[1] explaining the rules in more detail. It is about looking first; also, the rule is not meant to stand alone (as the only rule). The quote for the chapter explaining this rule is from “Sherlock Holmes: A Scandal in Bohemia”:

    “It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.”

    Also, the book is based on lots and lots of real-world experience (not only software).

    [1] David J Agans “Debugging: The 9 Indispensable Rules for Finding Even the Most Elusive Software and Hardware Problems”

  29. Or in other words: you can think up thousands of possible reasons for failure. You can see only the actual case.

    This is especially important in the case of bugs, where the reason there is a bug is that the code doesn’t work like you thought it would!

  30. “It is a capital mistake to theorize before one has data.”

    “This is especially important in the case of bugs, where the reason that there is a bug is because code doesn’t work like you thought it would!”

    It depends. E.g. sometimes bugs are usefully thought of as a violation of invariants of the function/module/datastructure/whatever, and sometimes thinking will let you enumerate all the invariants or at least a promisingly large subset of them. In such a case, enforcing those invariants (perhaps with assertions, or with compile-time checkable constructs, or with directed review of the code) to make the bug announce itself can be a simple matter of programming, which does take time but in the case of a challenging bughunt might still take less time than unsystematic openminded review and observation and testing.
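    For instance, a toy example of the technique (invariant enforced on every mutation):

    ```python
    class BoundedBuffer:
        """Toy structure with one explicit invariant: never more than
        'capacity' items. Asserting it on every mutation makes a bug
        announce itself at the offending call, not three modules later."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.items = []

        def push(self, item):
            self.items.append(item)
            assert len(self.items) <= self.capacity, "capacity invariant violated"
    ```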

  31. TheDividualist,

    I think what we’re talking about here is related to the fact that “x86-based PC” has proven a stronger attractor as a Schelling point than other platforms — which is why it was able to quickly surpass the Amiga and even completely absorb the Macintosh.

    I would also suggest that the most “Commodore-like” Linux user communities accrete around distros like Slackware, Gentoo, and (formerly) Arch, not Ubuntu, because of the manner in which these distros enable infinite flexibility while staying, more or less, out of the user’s way, which is what made the Commodore platforms so enticing back when they were contemporary.

  32. To Rick the woodworker:
    I started with punch cards and remote-access behemoths too. Today we carry more memory in our pocket than one of those ever had.
    The attraction is *the solving of the problem*. You solve the problems of creating fine furniture. I would suggest that a substantial portion of the solution time is actually spent in your head: which wood, what size, what thickness, what finish, construction order, etc. Creating a stable, blind, fastener-free joint requires some fair 3D visualization capabilities. Congratulations are due. Only a master carpenter will recognize the elegance of the solution. The purchaser probably will neither know nor care. But you will!
    And often the solution to the “problem” has been known for years if not centuries, but it requires the careful, dedicated application of skill and awareness of the medium to complete. You know that to achieve a smooth glossy finish on a table-top, you *will* be required to sand, and sand, and sand, and stain, and finish. Leave out a step or do it badly and you will produce junk. “Sure, you can sand it with a belt sander and then finish it!”
    A client comes to you and asks “Can you build me a …..?” And you tell him, in different words, “I can solve that problem”. Creating software which is useful is *solving the problem*. Our esteemed blog-owner has been doing that for years. Back up a number of posts and read how he achieved better accuracy for time-keeping, and about other problems which he has solved or worked on. Some of them are the equivalent of taking a blue-print in imperial units and auto-magically converting it to metric.
    Like your work however, the satisfaction is not the public adulation (although master, this grasshopper is not worthy!), but the internal satisfaction of having done a good job and done it elegantly and cleanly. That is a major part of the satisfaction. It’s the thing that drives us to fix something which we feel is ‘not quite right’ although everyone else thinks it is ‘good enough’. Or the drive to create something which should exist and does not. And to those who know, our fingerprints are on it, and we do not want them to think we do not know what we are doing.
    In a way, it is like doing cross-word puzzles for the satisfaction, but a useful object is also created by the doing. Yes, it’s kinda zen. Download a copy of “The Inner Game of Tennis” by W. Timothy Gallwey. At first pass it will appear to have nothing to do with this discussion. Read it again and it might… Or the other one, Zen and the Art of Motorcycle Maintenance, which has nothing really to do with motorcycles.

  33. Because I’m a Brit, and because the Masters have been invoked, and because I worked at Elliott Brothers in the 60’s, I have a quibble about the attribution of “premature optimization is the root of all evil” omitting its true originator, Tony Hoare. Here’s the story:

    http://ubiquity.acm.org/article.cfm?id=1513451

    I would choose elegant brute force first, then iterate as needed and measured, if given the option. But so often have I seen “ugly code first to get it working, then make it nice” being shipped before the part after “then”, all the while expecting junior programmers down the line to maintain it. There is no such thing as throw-away code if one is driven by commercial concerns – it will get shipped. If you are the only one who can understand it while it is fresh (say, 2 hrs to fix a bug a seriously capable senior programmer had been unable to fix during your 2-week vacation), you have failed, however clever the design. I wish I could say I don’t succumb, but I am easily led…

  34. > sometimes bugs are usefully thought of as a violation of invariants of the function/module/datastructure/whatever, and sometimes thinking will let you enumerate all the invariants or at least a promisingly large subset of them. In such a case, enforcing those invariants (perhaps with assertions, or with compile-time checkable constructs, or with directed review of the code) to make the bug announce itself

    Adding assertions to code, or tests to the test suite, to check whether invariants hold is “looking”. Examining the algorithm in your head to check if each step preserves invariants is “looking”.

    Blindly changing < to <= because ‘it had to be an off-by-one error’ is “thinking before looking”.

  35. >Adding assertions to code, or tests to testsuite, to check whether invariants hold, is “looking”

    I agree that adding the assertion corresponds to ‘looking’. My intended point was that sometimes I need to do a rather significant amount of thinking before I can ‘look’ by adding that assertion, which feels to me rather like doing a rather significant amount of thinking even before I start ‘looking’. There is probably room for reasonable people to disagree about this interpretation, though: e.g., I recognize that I do need to look at something — perhaps not at the garbage collector code itself, but at least its interface or specification — before I can think at the write-an-assertion level of actionable detail about what invariants might be blown.

  36. “It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts.”

    The trouble with the converse is that you end up data mining – cranking out “theories” which are overfitted for the particular subset of data you have at hand, but which then naturally fail when they’re extrapolated to new data. Even in debugging, coming up with theories before examining the code can lead you to better bugfixes, because your theories are more likely to cover not just the case which actually broke but also the cases which would have broken except that you haven’t hit them yet.

  37. Well, you should remember that “rules” are more mnemonic devices to help you avoid pitfalls than strict laws. They are guidance, not rails.
