Zeno tarpits

There’s a deeply annoying class of phenomena which, if you write code for any length of time, you will inevitably encounter. I have found it to be particularly prevalent in transformations to clean up or canonicalize large, complex data sets; repository export tools hit variants of it all the time, and so does my doclifter program for lifting [nt]roff markup to XML-DocBook.

It goes like this. You write code that handles a large fraction (say, 80%) of the problem space in a week. Then you notice that it’s barfing on the 20% remaining edge cases. These will be ugly to handle and greatly increase the complexity of your program, but it can be done, and you do it.

Once again, you have solved 80% of the remaining cases, and it took about a week – because your code is more complex than it used to be; testing it and making sure you don’t have regressions is about twice as difficult. But it can be done, at the cost of doubling your code complexity again, and you do it. Congratulations! You now handle 80% of the remaining cases. Then you notice that it’s barfing on 20% of remaining tricky edge cases….

…lather, rinse, repeat. If the problem space is seriously gnarly you can find yourself in a seemingly neverending cycle in which you’re expending multiplicatively more effort on each cycle for multiplicatively decreasing returns. This is especially likely if your test range is expanding to include weirder data sets – in my case, older and gnarlier repositories or newer and gnarlier manual pages.
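
To make the economics concrete, here is a toy model in Python. It assumes, purely for illustration, that each pass mops up 80% of whatever remains and costs twice as much as the pass before it:

    # Toy model of the Zeno-tarpit cycle: each pass handles 80% of the
    # remaining cases but costs twice as much as the pass before it.
    coverage, total_effort, pass_cost = 0.0, 0.0, 1.0  # cost in "weeks"
    for cycle in range(1, 8):
        coverage += (1.0 - coverage) * 0.8  # claim 80% of the residue
        total_effort += pass_cost
        pass_cost *= 2                      # next pass is twice as hard
        print(f"cycle {cycle}: {coverage:.4%} covered, "
              f"{total_effort:g} weeks spent")

Seven cycles in you have spent 127 weeks to cover about 99.999% of the cases – and the residue still isn’t empty.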

I think this is a common enough hazard of programming to deserve a name.

If this narrative sounds a bit familiar, you may be thinking of the paradox of motion usually attributed to the philosopher Zeno of Elea. From the Internet Encyclopedia of Philosophy:

In his Achilles Paradox, Achilles races to catch a slower runner–for example, a tortoise that is crawling away from him. The tortoise has a head start, so if Achilles hopes to overtake it, he must run at least to the place where the tortoise presently is, but by the time he arrives there, it will have crawled to a new place, so then Achilles must run to this new place, but the tortoise meanwhile will have crawled on, and so forth. Achilles will never catch the tortoise, says Zeno. Therefore, good reasoning shows that fast runners never can catch slow ones.

In honor of Zeno of Elea, and with some reference to the concept of a Turing tarpit, I propose that we label this programming hazard a “Zeno tarpit”.

Once you know this is a thing you can be watching for it and perhaps avoid overinvesting in improvement cycles that pile up code complexity you will regret later. Also – if somebody asks you why your project has run so long over its expected ship date, “It turned into a Zeno tarpit” is often both true and extremely expressive.


  1. Of course, sometimes a Zeno Tarpit is an indication that you’ve hit the end of your first version, and it’s time to throw it all out and start over, rethinking the problem in light of what you’ve learned from the corner cases. Perhaps it would be better to refactor the program into several programs specialized to handle various cases?

    I watched someone one time spend almost 3 weeks getting a regex just perfect on a data set that would almost certainly never exceed 30 members at a time… I’d written it up using brute force techniques on the data classification in about 30 mins. I doubt that particular little script would ever accumulate enough run time to make the extra programming time worthwhile.

    Sometimes the answer to Zeno’s Tarpit is to let the program get sucked under.

  2. “Zeno tarpit” is better. It accents the futile aspect that a Pareto reference does not.

    I remember learning about Zeno’s paradox in elementary school, but I thought of it as merely a sort of brain teaser. Years later I learned that it was written to argue that motion is an illusion. Aaaaah, then the penny dropped….

    1. >“Zeno tarpit” is better. It accents the futile aspect that a Pareto reference does not.

      Yes, I think so. That’s why I didn’t go with “Pareto pit”, despite the attractive alliteration. The sense of futility is more important than the Pareto distribution.

  3. The incremental approach over a Pareto distribution of flaws, use cases, data, or faults also happens in other fields. I think of machine maintenance, in particular.

  4. “I watched someone one time spend almost 3 weeks getting a regex just perfect on a data set that would almost certainly never exceed 30 members at a time… I’d written it up using brute force techniques on the data classification in about 30 mins. I doubt that particular little script would ever accumulate enough run time to make the extra programming time worthwhile.”

    Hackers have a strong tendency to write tools to get a job done rather than manually performing a tedious edit. There are times, though, where it’s faster and simpler just to sit down and do the manual edit instead of hacking a script together. Zeno tarpits are a special danger of this kind of thing.

  5. In the past I would have gone several loops around the rinse cycle … nowadays I live with the edge case problems and fix the results by hand. In the past the goalposts stayed in the same place while working on the code … nowadays they seem to be at the other end of the pitch before one gets the first couple of cycles complete?

  6. (80/20)^n

    “Zeno” well reflects the escaping-goalposts issue caused by discovering ever newer gnarly corner cases.

  7. The Zeno tarpit is the spot where Reality hits Theory (i.e., a program).

    It just means that you will have to put the solution of corner cases in by hand. The cut-off for me is when coding time starts to equal run time of the program (see comment of Tara Li above).

    Artificial Intelligence was bogged down by this Zeno tarpit decades ago. Their solution is statistical modeling of real-life data (see the Watson machine of IBM). Obviously, such statistical modeling will simply store the edge cases in memory (exemplified by memory-based learning).

  8. It is funny, because it sounds like a function of the irregularity, “dirtiness” of the input data, and it is sort of strange to see open source hackers do “dirty” projects of the type that people working in business walled gardens would refuse. It is funny because isn’t it supposed to be the other way around? You guys are supposed to go for the more fun projects and it does not sound like that. Yet it seems in such cases people in enterprise programming jobs are pickier. Irregular data? Stamp it as user error and reject. Then hire temps and they clean it manually… the difference is between actually wanting to achieve a goal that is your own goal quickly and efficiently vs. being an employee where it is by definition not your goal.

    Nevertheless if I had to do this, I would split the job. One person would massage the data into something regular with throwaway Perl scripts and the other would do the conversion with some nicely readable and maintainable code. The advantage would be that each person focuses on one level of abstraction, not having to jump to and fro across layers. One focuses on what we are really trying to achieve and the other on converting pesky tabs to spaces or something.

    @ESR why do you seem to so often do one-man projects? Is something like apprenticeship, trading the doing of the less enjoyable kinds of work for teaching and borrowed prestige, not common in the FLOSS world? In a world where most transactions are not monetary, it would make a lot of sense. Why doesn’t a young padawan clean up your input data with Perl scripts?

    1. >It is funny, because it sounds like a function of the irregularity, “dirtiness” of the input data, and it is sort of strange to see open source hackers do “dirty” projects of the type that people working in business walled gardens would refuse.

      I have somewhat more history of taking these on than most hackers, because I know I have the stamina to get them done. But my behavior in this respect is only unusual, not markedly rare. There’s a sort of dogged toolsmithing and quest for perfection that hackers are very prone to, even if the results look like crazy overinvestment in the short term.

      >@ESR why do you seem to so often do one-man projects?

      Um. Because I can?

      If you have the ability to go from mad gleam in the eye to concept to finished implementation it can be more fun to just dive in rather than start by recruiting a team. Generally, if I build and publish it, contributors will show up. My projects usually start as one-man efforts but seldom continue that way for very long.

      >Why doesn’t a young padawan clean up your input data with Perl scripts?

      Because in the general case that isn’t possible. You cannot usually, for example, fix a damaged repository with Perl scripts.

  9. > it is sort of strange to see open source hackers do “dirty” projects of the type that people working in business walled gardens would refuse.

    “We choose to go to the moon, and do the other things, not because they are easy, but because they are hard.”

    Some people do find tougher problems attractive – there’s some real work to do there.

  10. FWIW I view the Zeno story contrarily: that there is a predictable point (just a wee limit – or even algebra – computation) where speed will overtake early-start, no matter the verbal trickery. Zeno’s time intervals shrink, so the relevant series converge, rather than expand/diverge as in your software story.

  11. @Orvan

    There are different kinds of difficult. There is the kind of difficult like doing the math for special relativity, which is difficult but ultimately regular and lawful. And there is the dealing with something chaotic, messy, dirty type of difficult, such as dealing with the data clueless human minds shat out, or the kind of awful mess GPS vendors make which ESR’s gpsd project deals with, and similar things. Cleaning up human-made chaos.

    I would have assumed hackerdom likes the first type of difficult. It is, how to put it, more glorious. More challenging-your-intelligence-difficult, not “dealing with a thousand shitty little exceptions” type difficult.

    I am more or less on the “enterprise” side I guess, and we are supposed to have far less pride than hackers (because we’ve sold out, let’s face it) and yet even on this side everybody seems to be upset, offended and annoyed by having to deal with the second type. It is regularly compared to something like shoveling manure. Like a customer gives you a huge list of catalogue items to import into a database and then you find it is not regular enough, such as the item number should be numerical but sometimes alphanumeric, then the customer says ignore the alpha, then you find duplicates… when such a job is done it is often referred to as “I shoveled that crap into the database”. So even the programmers who sold out their pride to an extent tend to be upset by it.

    This is why I am kind of surprised hackers who have not sold it out are not upset by this.

    One level deeper, it is often said that a sense of aesthetics predicts mathematical ability. I was thinking something similar could be said about programmers’ minds. Liking regularity and lawfulness, because random exceptions are just… dirty. Disgust reaction and so on. This is a wholly different story from when it is difficult the good way: the laws are complicated.

  12. >This is a wholly different story from when it is difficult the good way: the laws are complicated.

    Of course it is possible that I have misunderstood the story here: that it is lawful data actually, just in a really special way where the distribution of the number of cases per rule looks like a fractal.

  13. @shenpen
    “Like a customer gives you a huge list of catalogue items to import into a database and then you find it is not regular enough, such as the item number should be numerical but sometimes alphanumeric, then the customer says ignore the alpha, then you find duplicates… when such a job is done it is often referred to as “I shoveled that crap into the database”. So even the programmers who sold out their pride to an extent tend to be upset by it. ”

    A database without errors is empty ;-)

    Reality is harsh and dirty, so we should learn to cope with it.

  14. Calling this a “Pareto tarpit” would make my first thought be of getting stuck in attempts to make all changes be Pareto improvements, rather than the coding efforts following a Pareto distribution. That’s something else that makes “Zeno tarpit” a better term.

  15. I’m with Winter on this. In abstraction space, programming theoretically has endless efficacy. In the real (e.g. complex and chaotic) world, we mistake futility for laziness. Zeno’s tarpit is a Twilight Zone portal.

  16. > Hackers have a strong tendency to write tools to get a job done rather than manually performing a tedious edit.

    I posted xkcd’s “Is it worth the time?” (https://xkcd.com/1205/) prominently above my desk at my previous job to help remind me about tradeoffs.

  17. I have a hypothesis that human-made chaos may itself be someday tamed, that we simply don’t understand it well enough today, but in the same way that we didn’t understand planetary motion in 1300 AD, and that referring to it as a proxy for the supernatural is ultimately ill-advised – but I admit it’s fun to do it anyway. And it does indeed feel like stable cleaning.

    I feel as if the Tarpit has a practical limit, at least. There’s a point where the least costly solution is to stop improving the code and solve the remaining case(s) manually.

    1. >I feel as if the Tarpit has a practical limit, at least. There’s a point where the least costly solution is to stop improving the code and solve the remaining case(s) manually.

      Yes, this is why in the worst doclifter cases I try to send patches upstream to clean up the grotty manual-page markup. Sadly, the equivalent move is not really possible for repository cruft – you can’t clean up damaged metadata with normal operations.

  18. You know one of the things about this type of software is that modularization is the key.

    Often what happens is that the original 80% is solved by elegant code, then the next 80% of the rest munges up the code, then the next 80% of the remaining 20% turns it into a rat’s nest, and the complexity goes through the roof, so that solving for that last 1% puts the other 99% at risk of serious bugs (regression testing notwithstanding).

    I think the solution is often to separate out the 80% fractions. So you keep the original program that solves the 80%, but you then provide a transform that doesn’t solve for the next 20%, rather it transforms that 20% into a format that works with the original 80%, and so on down the line. The goal of this second module is not to perform the original program function but rather to transform the gnarly repository into a less gnarly one that is more amenable to standard transformation.
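
    In code, a minimal sketch of that layering might look like this (the passes here are trivial stand-ins; real ones would each absorb one family of edge cases):

        def fix_line_endings(data):
            return data.replace("\r\n", "\n")

        def strip_trailing_blanks(data):
            return "\n".join(line.rstrip() for line in data.split("\n"))

        # Each pass absorbs one family of gnarl; the chain grows by
        # addition, not multiplication, and the exporter never changes.
        CANONICALIZERS = [fix_line_endings, strip_trailing_blanks]

        def degnarl(data):
            for fix in CANONICALIZERS:
                data = fix(data)
            return data
        # usage: standard_export(degnarl(raw_input))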

    Elegant code is perhaps above all cohesive in function. By merging the de-gnarling with the standard transformation you reduce the cohesive nature of the program and make it nasty and ugly. Better to add the complexities together than multiply them.

    Of course I haven’t read your code, and you are smart so you probably do this. But just a thought anyway.

  19. > testing it and making sure you don’t have regressions is about twice as difficult.

    I’m not sure what your current approach is, but nosetests is pretty good for this. You can write generators for arbitrarily many unit tests. You might also want a separate validation test set that you never look at (to see if you’re overfitting).
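
    For instance, a nose-style test generator over a corpus directory could look like the sketch below; lift() and the corpus layout are hypothetical stand-ins for whatever the program under test does:

        import glob

        def lift(text):
            # Hypothetical stand-in for the converter under test.
            return text.upper()

        def test_corpus():
            # nose runs each yielded (function, arg) pair as its own test.
            for path in glob.glob("corpus/*.in"):
                yield check_one, path

        def check_one(path):
            with open(path) as src, open(path.replace(".in", ".out")) as want:
                assert lift(src.read()) == want.read()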

  20. “There are only two documents which describe this phenomenon. They are known as “Zeno’s Pair O’ Docs.””

    >fwap<

  21. “Frederic: A paradox?
    King: (laughing) A paradox!
    Ruth: A most ingenious paradox!
    We’ve quips and quibbles heard in flocks,
    But none to beat this paradox!”

  22. @esr
    > Sadly, the equivalent move is not really possible for repository cruft – you can’t clean up damaged metadata with normal operations.

    Sure but if the repo can’t be read reliably you have to have some sort of code that goes in at a lower level to read the bad records. Presumably a separate program could be designed that just did that — correct the invalid data (as well as possible) — and then modify it into a valid repository that can then be processed normally. That way you separate out the two functions of hacking the cruft and doing the repo conversion.

    1. >Sure but if the repo can’t be read reliably you have to have some sort of code that goes in at a lower level to read the bad records.

      Right. Now you’re writing the grotty part of the exporter again; you’re back at the Zeno tarpit, but trying to solve it with two notionally and artificially separate tools rather than one. Trust me when I tell you this is not likely to end well.

  23. @esr
    > Right. Now you’re writing the grotty part of the exporter again;

    Sure, if you are shoveling poop, you are going to stink. But the real cause of the “zeno tarpit” is combinatorial explosion, the multiplication of special cases. Better to add than multiply, 4 + 4 is less than 4 * 4.

    And there is a better reason too: cohesion of function. Better to have one program “export a nice repository” and another “fix the missing metadata in this repository” and another “fix the bad line endings in this repository” and another “correct this common defect in repositories” and pipe them all together.

    After all, this idea of little programs with singular cohesive functions is the Way of Unix, is it not?

    cat email-list | sed 's/^.*@//' | sort | uniq > known-domains

    Or something like that — it has been a while….

    1. >After all, this idea of little programs with singular cohesive functions is the Way of Unix, is it not?

      That it is. Sadly, the Way is sometimes impractical for the kinds of data structures you find in the guts of repositories. The superficial problem is that you can’t fix those structures with text-bashing tools because they aren’t text. The deeper issue is that those structures have poor locality.

      This is a distinction that might be worth a blog post. A major reason small tools operating on text streams works so well is that in textual data syntactic locality tends to correspond reasonably well to semantic locality. This means you can do a lot by looking at relatively small pieces (like line-at-a-time) using simple state machines or parsers. Well-designed data serializations tend to have this property even when they’re not textual, so you can do Unix-fu tricks on things like the binary packed protocol a uBlox GPS ships.

      Repository internals are different. A lot of the most important information – for example, the DAG structure of the history – is extremely de-localized; you have to do complicated and failure-prone operations on the entire data volume to recover it. You can’t do point transforms on the de-localized data without knowing a lot of fragile context.

      Now, if you are a really clever Unix hacker, the way you deal with this problem is by saying “Fuck it. I’m not going to deal with repository internals at all, only lossless textual serializations of them.” Voila, reposurgeon! All your Unix-fu is suddenly relevant again. You exile your serialization/deserialization logic into stream exporters and importers which have just one extremely well-defined job, just as the Way of Unix prescribes.

      But inside those importer/exporter tools…Toto, you’re not in Unix-land anymore, at least not as far as the gospel of small separable tools is concerned. OK, there’s one exception; if you can write your tools as sufficiently thin script wrappers around a version-control system’s native CLI you’re still in Unix-land. But that is an exceptional case; usually, as in cvs-fast-export or a different exporter I’m working on now, you have no option but to get down and dirty with the repository’s native, deserialized, and de-localized data structures. Zeno tarpits are a common – and sometimes utterly unavoidable – consequence.
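
      For readers who haven’t seen one, here is a sketch of what such a lossless textual serialization looks like – a few lines of Python emitting a one-commit git-fast-import stream (the file contents and committer identity are invented for illustration):

          # `data N` announces exactly N bytes of payload, so the stream
          # is self-delimiting and parses without repository context.
          blob = b"hello\n"
          msg = b"initial revision\n"
          stream = (
              b"blob\nmark :1\ndata %d\n%s\n" % (len(blob), blob)
              + b"commit refs/heads/master\nmark :2\n"
              + b"committer A U Thor <author@example.com> 1262304000 +0000\n"
              + b"data %d\n%s" % (len(msg), msg)
              + b"M 100644 :1 hello.txt\n"
          )
          print(stream.decode())  # pipe into `git fast-import` to rebuild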

  24. @Jessica

    I may be completely off base, but I think that ESR is talking about cases where the special case code, in addition to being full of crufty special cases, has to be so intertwined with the clean parts of the processing that making a separate program to do it requires the duplication of a large portion of the functionality. Hmmm, it almost sounds like the Lovecraftian flip-side of Amdahl’s Law.

    Off topic:

    I am digging into a case where someone mimicked the functionality of a code generation mini-language…using the C preprocessor. The SAN must flow!

    1. >I may be completely off base, but I think that ESR is talking about cases where the special case code, in addition to being full of crufty special cases, has to be so intertwined with the clean parts of the processing that making a separate program to do it requires the duplication of a large portion of the functionality.

      Yup, that’s exactly right. I described this problem from a slightly different angle in a direct reply to her. What you’re seeing – the duplication – is, if you look deeper, a consequence of a large mismatch between locality of representation and locality of meaning.

      >Hmmm, it almost sounds like the Lovecraftian flip-side of Amdahl’s Law.

      An astute observation!

  25. Not to get too metaphysical here, but sometimes bad files should just die. Resurrecting a corpse can sometimes produce zombies.

    This could become a screenplay in the making; Zombie Code from the Zeno Tarpits.

  26. Zombie Code from the Zeno Tarpits

    Ha!

    Jeremy Bowers: Thanks for the link. Very interesting.

  27. I had a flashback to a company called Zenographics. They made a nifty program called Mirage which could do vector and presentation graphics. Its UI was wonky and optimized for the drafting-table-sized digitizing tablets they sold back in that day. While poking around the example scripts I noticed several lines that were just “DO” and then a list of numbers. It turned out the whole thing worked on a VM, the numbers were bytecodes of a sort, the VM could be hacked, and by experimenting with DO and various values like a kid POKEing into a Commodore, I could get it to do interesting things and add commands that behaved like native commands.

    The Mirage manual had a glossary entry for Zeno (philosopher), which included a succinct restatement of his arrow paradox.

    Later Zenographics pivoted into selling printer firmware before being absorbed by Marvell (the chip manufacturer).

  28. @ESR

    Amdahl the Great Singular of Multiplicity nearly had you during the Emacs expedition; but you eluded him with your computronic magery. Now he will not rest easily within his lair, till he hath eaten of your soul!

  29. This thread is acquiring a proof of the old saw that goes, “in theory, theory and practice are the same.”

    This is written on my whiteboard whenever I don’t need the room for something else.

  30. I think Zeno tarpits are related to Zipf’s curse:

    The Curse of Zipf and Limits to Parallelization
    http://ceur-ws.org/Vol-480/paper3.pdf

    This paper explores the problem of “stragglers” in Map-Reduce: a common phenomenon where a small number of mappers or reducers takes significantly longer than the others to complete. The effects of these stragglers include unnecessarily long wall-clock running times and sub-optimal cluster utilization. In many cases, this problem cannot simply be attributed to hardware idiosyncrasies, but is rather caused by the Zipfian distribution of input or intermediate data.

    The “special cases” tend to follow Zipf’s distribution which means that no matter how much data you have collected, there will appear a new “interesting case” you never have seen before.

    1. >The “special cases” tend to follow Zipf’s distribution which means that no matter how much data you have collected, there will appear a new “interesting case” you never have seen before.

      Oh hell does that sound familiar.

  31. @Jessica Boxer

    I like the modularization approach. I was thinking something similar, but more along the lines of a maintained, readable, nice, unit-test-covered main program, with inputs preprocessed by quick-and-dirty throwaway scripts. Modularity is a good way too IMHO.

    Other, similar approaches:

    – Plugin system, again, the idea being that the main program is very readable and maintainable and tested, while the plugins are crufted together when the need arises and thrown away when no longer needed (see the sketch after this list). Here is a bit of a terminology thing: I understand modularity as each module being roughly equal in size, importance and abstraction level. Core/plugin is about strictly different levels of size, importance and abstraction. An ERP with accounting and inventory modules is modular; Eclipse/EMACS and its plugins are textbook core/plugin.

    – A PaulGrahamian approach, use something LISP-like (with Clojure it is becoming popular again, it has shed much of its “50+ years old ex-AI people” vibe) and write a DSL in which the special cases are easy to code.
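
    A minimal sketch of the core/plugin split (all names invented; the registry is the stable, tested core, the handlers are the disposable cruft):

        HANDLERS = {}

        def handles(case_tag):
            # Register a throwaway plugin for one species of gnarl.
            def register(func):
                HANDLERS[case_tag] = func
                return func
            return register

        @handles("crlf")
        def fix_crlf(record):  # disposable; delete when the gnarl goes away
            return record.replace("\r\n", "\n")

        def process(tag, record):
            # The stable, tested core: dispatch to a plugin if one exists.
            return HANDLERS.get(tag, lambda r: r)(record)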

    I understand ESR’s reasons for doing one-person projects, but I still think separating those roles amongst different people is a good idea.

    Look at game development. There is typically a bunch of math nerds making an engine in C++ and then it is scripted in a scripting language by people who are less expert in math/programming but more domain experts, i.e. they know what makes a good game. Even if you can use tools like Python where the core and the plugins can use the same technology, even if you have people who are so good that they can do both, I think it is useful if different people do it, because jumping between layers of abstraction is cognitively costly and thus error-prone; better if each layer of abstraction is the domain of a different person.

    I have another reason for disliking one-person projects. Code is written for humans to read, along with documentation and all. One way to ensure this is to have a co-programmer read it from the very beginning. Otherwise it can be that you think it is easy enough to understand but it is not. Your mind automatically fills the gaps. It is hard to simulate in your mind what other minds would read. This is also why, when you write a novel, you file it away for a year and forget it, so that when you read it again you spot the gaps.

    Of course it is possible that people with 30+ years of experience no longer need such a crutch, but let’s put it this way, the rest of us should probably do this.

    1. >Of course it is possible that people with 30+ years of experience no longer need such a crutch, but let’s put it this way, the rest of us should probably do this.

      30+ years experience correlates with not needing the “crutch” of having other people read your code from the beginning, but it’s not determinative. I think it’s more about how much self-discipline and good coding/documentation habits you have. In theory, relatively new programmers could have the right kind of procedural habits; in practice this almost never occurs.

  32. >transformations to clean up or canonicalize large, complex data sets

    >It goes like this. You write code that handles a large fraction (say, 80%) of the problem space in a week. Then you notice that it’s barfing on the 20% remaining edge cases.
    >…lather, rinse, repeat. If the problem space is seriously gnarly you can find yourself in a seemingly neverending cycle

    You basically just described the last several years of my life. I am very familiar with this. At some point the remaining 20% becomes small and heterogeneous enough that you end up fixing up records by hand rather than adding loads of special cases to your already-gnarled-up code. Or at least, I often do.

    I like the name Zeno’s tarpit. But, I would advocate adding the suffix ‘of doom’.

    Tom

    1. >I like the name Zeno’s tarpit. But, I would advocate adding the suffix ‘of doom’.

      Aaand Tom wins the thread.

  33. > I think it’s more about how much self-discipline

    When was the last time when you named a function or variable hell_i_hate_when_this_happens?

    One of the most hilarious moments was when a good friend of mine exclaimed in the middle of a really dirty data migration project at about 2AM “I remember having seen this case. I just don’t exactly remember which of the 14 different fuckthisshit.prg files that live on my hard disk were supposed to deal with it.” Although we weren’t even 30 then, we both grew up a bit more since that. Sometimes I wish we hadn’t, though. It was inefficient but this kind of gallows humor helps dealing with stress.

  34. @Jessica:

    Elegant code is perhaps above all cohesive in function.

    In my book, elegant (alternately: “beautiful” or “artistic”) code doesn’t just solve the problem in a cohesive fashion, it also explains the “what” and “why” — what was the original problem consideration, and why was this solution selected? (Answering “who”, “where”, and “when” should be covered by your repository, plus cohesive code automatically provides the answer to “how”.) Whether the instructions themselves or comments are used to answer these questions doesn’t matter to me — comments count as part of the source code after all — the only issue is whether the answers are clearly present or not.

    @Shenpen:

    Code is written for humans to read. Alongside with documentation and all. One way to ensure it is have a co-programmer read it from the very beginning. Otherwise it can be that you think it is easy enough to understand but it is not.

    It might just be that right now I’m rather in love with the idea of “literate coding” styles, but I would rather try and push as much documentation into the code files as possible. On the whole the fewer parts of your project which can quietly [pick one or more: go missing / be deleted / fail to backup / have inconsistent edits] the healthier your project will stay in the long term.

  35. Behold in the Zeno Tarpit of Doom the bones of so many programmers of days gone past. It’s not just an optical illusion, they really did each die howling in agony, their jaws agape, before they were reduced to these skeletons. Beware, young padawan, learn from their pain, lest I show your screaming bones to the next generation….

  36. An oversimplification of the software development life cycle is: generalize, implement, test; generalize, integrate, implement, test; generalize, integrate, implement, test… In the first iteration, generalizing the problem is the right place to catch as many of the problem space cases as possible, because there is no integration required with existing code/modules/functions/data. All of the usual performance benchmarking (Thomas McCabe et al) produces good indications of resource requirements (you can quote the project with a high level of accuracy).

    But all further iterations require the added step of an integration stage where additional resources must be expended, including inputting new generalizations into a preexisting (and expanded at each iteration) code and data base. Very few successful resource prediction tools are available that attempt to measure the size of an existing code base in tandem with the size and complexity of the additional generalizations. Thus it is extraordinarily difficult to reliably determine the expected resource requirement for each subsequent iteration. Indeed, the problem becomes exponentially more difficult at each iteration because the code and data base has grown in both size and complexity and may have also been touched by numerous hands with various levels of proficiency.

    Various SDLC techniques such as Agile attempt to address the issue from a human organizational approach. Various SDLC testing methods such as integration and unit testing apply methods at the module and code level. And Halstead and cyclomatic measurements are available for examining work after the fact. But actual real mathematical tools that define the queuing theory of SDLC are virtually nonexistent.

  37. I would argue that there is no elegant solution to real-world problems.

    Some years ago I was helping to resolve a technical issue with the person who does the billing at the ambulance service at which I volunteer. I discovered that one of the insurance companies would only accept insurance claims which were submitted as (particularly) malformed XML.

    You can’t just stomp your feet and say “we’re going to refuse to bill you until you fix your code”. The insurance company would love that! So somebody, somewhere, needed to modify the billing software to handle this case for this company. Which, undoubtedly, will require an update just as soon as the insurance company fixes the issue and starts rejecting malformed XML.

  38. So a Zeno Tar Pit (of Doom) is always found somewhere in the long tail of Pareto land where no past information of time and effort spent is ever recorded or known. It’s a kind of limbo for application developers where they can neither escape the project nor can they ever finish it – forever and ever.

  39. … but but but. Zeno’s paradox is a trick. The work is ultimately limited (the time series converges quickly). The problem in esr’s scenario is that the work isn’t limited. Zeno is the wrong metaphor.

  40. Some years ago I was helping to resolve a technical issue with the person who does the billing at the ambulance service at which I volunteer. I discovered that one of the insurance companies would only accept insurance claims which were submitted as (particularly) malformed XML.

    Bob the sales manager doesn’t want to go to the intranet Web site to get the reports. He doesn’t want to do anything. He wants them formatted as Excel spreadsheets, zipped up, and sent to his Exchange inbox every Thursday at 3:47 pm. It is a MISSION CRITICAL REQUIREMENT that this be done. If you don’t want to do it, then you must be okay with losing the contract.

  41. >I would argue that there is no elegant solution to real-world problems.

    @Garrett true, however there is an, um, “elegant” social solution, which is to delegate tasks such as the malformed XML requirement to the most junior full-time programmer at the invoicing software vendor, who will probably consider it a borderline cruel hazing ceremony or rite of passage or “but at least I have a job”, while the senior employees + the open source hackers won’t touch that disgusting thing with a long pole :-)

    If borderline disgusting problems which everybody dislikes working on did not exist, young and inexperienced people would have one hell of a time finding a job. I wish someone told this to students. Of any major really. “Your first job will be doing whatever everybody else in the company dislikes doing. Good luck.”

    That is why I figure it is a kind of waste when hacker legends work on them, but whatever. Perhaps some repos or documentations are worth rescuing no matter what. Or sometimes it is truly difficult in the cognitive sense, although usually it is only difficult in the timesink sense.

  42. @Jeff Read

    >Bob the sales manager doesn’t want to go to the intranet Web site to get the reports. He doesn’t want to do anything. He wants them formatted as Excel spreadsheets, zipped up, and sent to his Exchange inbox every Thursday at 3:47 pm. It is a MISSION CRITICAL REQUIREMENT that this be done. If you don’t want to do it, then you must be okay with losing the contract.

    This is considered something special? I designed the whole reporting infrastructure of my current job around this, except that it is mostly Sunday 03:00, when not much else happens. Because, you see, the alternative would have been that I run the reports and I send them out. Which I occasionally do Sunday 09:00 from my couch, if something fails in the automation (yet another user error I did not foresee).

    But I have been convinced since about 2011 that this is actually the best possible type of reporting architecture / paradigm, and I don’t understand why Crystal or similar tools are not primarily optimized around this, or why it sounds like a special exception to you.

    In fact I would insist on this architecture even if they disliked it. You see, if Bob forgets to run the unpaid invoices report and send out payment reminders to the customers, next week they will ask me to automate the payment reminders, because they probably teach that during the How To Make Organizational Problems Invisible With Technology 101 MBA course. And two weeks later of course comes the complaint: why does Very Important Customer receive reminders?! However if Bob received the unpaid invoices report in his e-mail, he cannot really claim he forgot to work on it. So this is a trick in dealing with organizational problems with technology: if you foresee people should work on something and you suspect they fucking won’t, push it into their inbox, because that is also how the CEO gives orders; “sorry I did not check my inbox” is something the CEO won’t accept from Bob.

  43. @Garrett
    > I discovered that one of the insurance companies would only accept insurance claims which were submitted as (particularly) malformed XML.

    Actually this is a perfect example of what I was talking about. Based on this limited information alone, my design for this would be two modules. One that generates the billing in a standard format (such as XML, yikes!) A second that transforms the correct XML into the malformed one that is required by the troglodyte.

    This means that your main code is clean, it means you don’t mix the mess in with the rest, it means that when the troglodyte fixes it (or more likely breaks it more) you don’t have to risk every other company. It also means that both are elegant in their own way. The first for obvious reasons, the second because there is a certain beauty in that kind of transformation. Of course in this case often there are even standard tools to do this, or perhaps better to say different implementation methodologies (perhaps some XML type tools or LISP or something like that) that make it easier. I think someone up thread mentioned this already.
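
    A minimal sketch of the split (element names and the particular malformation are invented for illustration):

        import xml.etree.ElementTree as ET

        def clean_claim(patient, amount):
            # Module one: well-formed claims XML, identical for every insurer.
            claim = ET.Element("claim")
            ET.SubElement(claim, "patient").text = patient
            ET.SubElement(claim, "amount").text = str(amount)
            return ET.tostring(claim, encoding="unicode")

        def malform_for_troglodyte(xml_text):
            # Module two: the one insurer's required breakage lives here only.
            return xml_text.replace("<claim>", "<CLAIM>").replace("</claim>", "</CLAIM>")

        print(malform_for_troglodyte(clean_claim("J. Doe", 142)))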

    FWIW, this sort of thing is pretty common practice, where the output comes in a standard (sometimes intermediate) language and the final output is transformed according to destination-specific rules. The most obvious example of course being GCC. But it is common in, for example, payroll systems or ERP systems where there is a standard format that some bilkers deviate from.

    1. >Actually this is a perfect example of what I was talking about. Based on this limited information alone, my design for this would be two modules. One that generates the billing in a standard format (such as XML, yikes!) A second that transforms the correct XML into the malformed one that is required by the troglodyte.

      And that would probably work, because XML documents tend to have a reasonable map from syntactic locality to semantic locality, though not as tight as primitive line-at-a-time format often do.

  44. Bilkers should be billers, or maybe I was right the first time.
    Surely there must be a term for a typo that is funnier than the original?

  45. Are you helping or hurting by helping?

    In other words, just because we have expert programmers who can wade into the tarpit and solve asinine problems ad infinitum, are you perpetuating the incompetence by endlessly solving problems that should otherwise suffer a Darwinian fate?

    1. >In other words, just because we have expert programmers who can wade into the tarpit and solve asinine problems ad infinitum, are you perpetuating the incompetence by endlessly solving problems that should otherwise suffer a Darwinian fate?

      There are some cases where the old messy data is quite valuable and you want to be able to transclude it automatically into newer, cleaner representations. Repositories and the historical man-page corpus are both good examples of this.

  46. Regarding Bob the Manager, above; Bob is the consumer of the report, and for most purposes the customer as well. The customer may not always be right, but it’s a good starting point.
    And he wants it in a specific format because that’s what HIS process is optimized for.
    Now, maybe he’s not an important enough customer that HIS needs must be catered to, but, maybe it is. If HIS salary is higher than the report generator’s, the mutual employer

  47. Somehow this reminds me of some recent arguments that seem to hang on what “do one thing” means.
    I’ve always taken it as “provide one transformation”/”do one method”, but I’ve read arguments that seem to presume that it means “accomplish one purpose” (end, telos) or “solve one problem area”.

    Is one of those readings incorrect?

    1. >I’ve always taken it as “provide one transformation”/”do one method”, but I’ve read arguments that seem to presume that it means “accomplish one purpose” (end, telos) or “solve one problem area”.

      My opinion is that both are approximations of a deeper rule which can also be approached through “carve the problem at its joints”. That is, find the places in your data flow where you can assert strong invariants and write simple descriptions, those should define the code boundaries in your toolchain.

      A tool like cvs-fast-export is a case in point. What it does with CVS internals is complex enough to be madness-inducing and definitely a Zeno tarpit, but it earns the right to be one chunk because what it does is easy to describe exactly in simple terms: “Take a CVS history, give me a lossless serialization in git-fast-import format”.

  48. Off topic @ESR:

    Is it possible for someone to become a hacker without internalizing an intuitive understanding of the difference between map and territory (even if they can’t articulate it well enough to apply it generally)? I don’t see how that would be possible, programming mixes meta-levels too frequently even when doing simple stuff.

    1. >I don’t see how that would be possible, programming mixes meta-levels too frequently even when doing simple stuff.

      I think you’re right. Pretty good consciousness of abstracting is required, though it doesn’t necessarily follow that it’s carried through to natural language.

  49. >“carve the problem at its joints”

    I love this. This is also a good approach outside programming, as a general problem-solving or research approach. Every team or group effort depends on having people who, when faced with the problem of having to eat an elephant, can not only remind people that it is one bite at a time, but also carve it up at its natural joints instead of just taking random chunks out of it, and hand each part out to a different person. This is, IMHO, the core of what I consider management or leadership.

  50. > programming mixes meta-levels too frequently even when doing simple stuff

    I think we should be trying to avoid that as far as possible. It breaks the flow and introduces errors. Skating on one specific abstraction layer for hours or days on end is smooth and productive. Or is it just my experience? But my game programming example, you either build the engine or script the engine but not both, still seems widely accepted and successful.

    The most flow-breaking thing I tend to dislike: having to introduce vertical changes, like realizing that a new feature in my use of my framework also requires extending the framework.

  51. esr> A tool like cvs-fast-export is a case in point. What it does with CVS internals is complex enough to be madness-inducing and definitely a Zeno tarpit, but it earns the right to be one chunk because what it does is easy to describe exactly in simple terms: “Take a CVS history, give me a lossless serialization in git-fast-import format”.

    One might ask the question ‘why’? Git may be the current flavour of the month, but it has its own tarpit of strange choices which makes lossless interaction with say hg an impossibility. CVS did things that are still very natural today, but git does not provide a ‘lossless’ replacement for it. My own method of working with legacy data is to maintain a mirror of the legacy records, simply so that when an edge case comes up I can view the original and manually adjust the result. Better use of my time than trying to add extra edge case rules, and if I detect a pattern that justifies a new rule it can be added. But this is my problem with the goalposts never staying in the same place: git ‘improves’ something, ‘hg-git’ plays catchup, and they both have to re-write lots of code when the underlying system changes … was CVS really that bad ;)

    1. >Git may be the current flavour of the month, but it has its own tarpit of strange choices which makes lossless interaction with say hg an impossibility.

      Which is a major reason why reposurgeon and my exporters actually target an interchange format other than git itself. Even when git is obsolete they will still be useful. Now that it’s been demonstrated, the concept of a lossless interchange format will not be abandoned, it’s just way too powerful a lever.

      Thus: post-git replacement VCSs will certainly have their own upgrade or replacement for the interchange format, which I or a successor will teach reposurgeon to speak. Instant translation!

      >was CVS really that bad ;)

      Yes. Yes, it was.

  52. >was CVS really that bad ;)
    Yes. Yes, it was.

    This discussion has already touched on making code modular, and CVS has a very nice way of allowing a selection of modules to be managed depending on the target application. Old tools used to pick up the modules that had been updated and allow individual updates to be cherry-picked, or each module to be updated, or simply sync the whole lot. ‘Sub-repos’ are frowned upon by many as ‘not the way it’s done’, but just what is the way to handle modular projects using ‘modern’ tools? CVS got it right even if the handling of a module was not as flexible as today. This is perhaps somewhat dependent on the target applications, so for my own example I’m looking at replacing legacy C/C++ applications with modules that bolt into a PHP-based core framework. In my book even the PHP core code could be better managed as a series of separate extensions rather than the current mess in git format. In CVS practice, one just lists the public and private modules in the target project and everything just works code-wise.

  53. >was CVS really that bad ;)

    Linus here gives a really interesting answer:

    https://marc.info/?l=git&m=113072612805233&w=2

    “The problem with a centralized model is that there’s one point of contact: you can replicate the central database endlessly, but you can only really modify it in one place. Which means that anybody who wants to modify anything at all needs to have write access to that one repository.

    Now, you can limit write access in various ways (“user xyz can only write to these files”), but it still requires an a-priori trust network rather than a dynamic one. So every single CVS project (and SVN does zero in this regard) always ends up having politics around the question of who gets commit privileges, and what the rules for them are. So one of the worst downsides of CVS is _politics_. People, not technology.”

    In other words, Linus seems to argue that it encourages unbazaarlike behavior.

    I am starting to see why ESR is so committed to it, it is a part of the sinister plan for world bazaarification :)

    When corporations grow big and sclerotic, one way to breathe new life into them is to establish internal markets, where the IT department is a profit center which bills Bob’s sales department for the work it does for them, which makes him think twice about making extra special requests. I think Linus saw Linux kernel development grow so big that it required a kind of an internal market of code commits as well.

    1. >Linus here gives a really interesting answer:

      Linus is correct, but there were purely technological problems with CVS that made it horrible too. No real changesets and serious breakage around file renames were two of them. Plus a UI that managed the remarkable feat of being nastier than git’s. (Subversion may have failed to escape the centralized model but its UI was pretty good – it’s no accident that those of almost all later VCS UIs are obviously modeled on Subversion’s.)

  54. @Shenpen

    It isn’t just about what abstractions you are using; anything above the level of a toy program will have the question “Is this data or a pointer?”. The more complex the data structures become the more complex and fluid these distinctions become, by the time you are using hash tables, branch tables, or pointer swizzling the distinction between metalevels has been obliterated. This can be mitigated with libraries that encapsulate a few of the levels and hide that complexity, but that just allows you move to higher levels and mix them up.

  55. My best (worst) experience with the Zeno tarpit problem is related to music synthesizers, which use highly integrated hardware, software, data, and user coding. When you want to add a new feature to such devices, the Zeno tarpit problem is extraordinarily acute. It’s not so easy to serialise the modules because the system itself is a closely coupled real-time FSM. So adding new features to an existing system means adding new data that is consumed by a large number of the modules (many new global variables), and making significant changes to a large number of the modules to handle the new conditions.

    Imagine adding tremolo to an upright piano. Other than the outer cabinetry, which part of the piano would *not* need major alteration? Think of any musical instrument and how hard it would be to add a particular feature that it does *not* have. That is why there is such a diversity of musical instruments. Each type of instrument produces a specific sound. Economics drives this.

    Now look at how prolific electronic music synthesizers have become. Why are there so many different makes and models? It’s all software; surely one modern synth can reproduce all of those sounds? Thus, some data and conditions can be harder than others. Interesting that the word hard has two distinct meanings here. Hard in the sense that the data and conditions are more resistant to change: they are more concrete. And hard in the sense that the effort to change is greater. Music synthesis is the harder problem whereas a billing document process is far softer.

  56. esr> Linus is correct, but there were purely technological problems with CVS that made it horrible too.
    There were/are CVS interfaces which addressed a number of the problems, and I’m not saying that some sort of replacement was not needed, but rather that some of the alternative choices concentrated on the wrong problems? I prefer Hg purely because TortoiseHg provides a much more ‘cvs’-like working environment, and combined with BeyondCompare I can do most of what I was doing in 2000 with the available cvs toolkit at that time. I’m just restricted to manually checking every module that the cvs package structure took care of. And I run legacy CVS locally with a copy of the history where that package structure was lost when each package was individually imported into github. The current code base is in git repos but is a problem to manage these days because of the way it was broken up from CVS originally.

  57. @Shenpen:

    Skating on one specific abstraction layer for hours or days on end is smooth and productive. Or is it just my experience?

    I’m certainly not going to speak for everyone, but myself I find the problem is less about the frequency of switching between abstractions, and more about how comfortable one is at working at each of these various abstraction levels. Whether one has only ever programmed at the “higher” levels of engine-scripting abstraction or engine / application design, a forced transition to working at the other level of abstraction will be highly uncomfortable; after all, you will not be accustomed to the abstractions used at that level. This unfamiliarity by itself is sufficiently flow-breaking to make such a switch very costly … at least, until the abstractions become familiar.

    Having recently been working to teach myself “bare metal” programming techniques with a Teensy hardware project (a custom, Ergodox-derivative keyboard), my experience has been that the switch back to an application programming mode is far less laborious than the “context switch” to the bare metal … where working at the bare metal is as laborious over time as at that initial switch, but is becoming less so over time. This implies that the switch between abstractions is no more costly than any other such change of context, such as learning a new high-level language — not to imply this is ever a low cost switch, but as these other contexts are unlikely to change within a single project, changing abstraction levels takes on an (IMO unwarranted) appearance of being a special case.

  58. @Foo Quuxman

    >It isn’t just about what abstractions you are using; anything above the level of a toy program will have the question “Is this data or a pointer?”. The more complex the data structures become the more complex and fluid these distinctions become, by the time you are using hash tables, branch tables, or pointer swizzling the distinction between metalevels has been obliterated.

    Thanks for reminding me that domains still exist where not everything is object-oriented or database-based.

  59. But my game programming example, you either build the engine or script the engine but not both, still seems widely accepted and successful.

    Shenpen, obviously you haven’t heard of Touhou Project, a well-known and loved series of 2D shooters with insane difficulty levels coming out of Japan. They are developed by “Team Shanghai Alice”, but it turns out that the “team” has one member: ZUN. He does all the coding, art, and music. And he produces phenomenal games.

    The strictly layered development model arises from the fact that major games are no longer programs these days so much as productions created by studios who are more akin to Hollywood studios, managed by non-technical people who only hire programmers out of necessity. The economics of that environment dictate that it is far cheaper to license a pre-existing game engine, such as Unreal, and then put cheap to hire, second- or third-string coders who don’t know much linear algebra or calculus in charge of scripting the various game events and so forth.

    The result of this is Conway’s law: the structure of a modern game tends to resemble the organization that created it, which in the AAA market usually means a complex, bloated, sclerotic sweatshop. Game code these days, accordingly, is not nice and neat but tangled, bug-ridden messes of C++ with maybe some Lua for the menus, hacked together and rushed out the door to make a Christmas release date. It only ever works because there are tweaks in the video card’s driver code for that specific game that make it work.

  60. “The “special cases” tend to follow Zipf’s distribution which means that no matter how much data you have collected, there will appear a new “interesting case” you never have seen before.” Anyone else think there are some intuitive parallels to Godel’s incompleteness theorem there? I can’t find a good way to verbalize the analogy though.

  61. Anyone else think there are some intuitive parallels to Godel’s incompleteness theorem there? I can’t find a good way to verbalize the analogy though.

    As tempting as it is to automatically think “Godel’s Incompleteness” whenever seeing a neverending series of unsolved problems, I try to reserve that phrase strictly for what it’s intended; namely, unknowable problems.

    Meanwhile, I thought the interesting thing about Zipf distributions was that they indicated smoothly exponential frequency of occurrences in some statistical sample(?). The persistence of new cases taking up computation time was incidental.

  62. I wonder if we have reached the level of complexity in coding where chaotic outcomes are becoming like fractals with infinite cascades?

  63. @Paul Brinkley
    “Meanwhile, I thought the interesting thing about Zipf distributions was that they indicated smoothly exponential frequency of occurrences in some statistical sample(?). The persistence of new cases taking up computation time was incidental.”

    It is more complicated. Zipf’s distribution is a power law distribution (what isn’t?). It gives straight lines on a log-log plot of frequency versus rank. What makes it such a devious distribution is that it is not “normalized” (like a Normal distribution is). If you try to extrapolate the probabilities of the cases you have seen out to infinity to estimate the number of unseen “cases”, you keep getting a total probability of more than one. The easiest example is word frequencies in English.

    http://en.wikipedia.org/wiki/Zipf%27s_law

    The immediate result of such a distribution is that the number of “word forms” or “cases” is not finite (for the range of samples modeled) and the combined probability of “rare cases” is large. This high combined probability of “rare cases” means that you will keep encountering new interesting words/cases very often.

    For example, it does not matter how many periodicals you have read in your life, there is a very high probability that the next issue of the Wall Street Journal or NYT you pick up will contain a word(-form) you have never seen before. The same goes when you try to process any type of real-world textual data.
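
    A quick way to see this effect (a minimal toy sketch with an idealized 1/rank distribution; the vocabulary size, seed, and batch sizes are arbitrary assumptions):

        import random
        from itertools import accumulate

        # Toy Zipf sampler: word ranks 1..N, P(rank r) proportional to 1/r.
        random.seed(42)
        N = 1_000_000
        cum = list(accumulate(1.0 / r for r in range(1, N + 1)))

        seen = set()
        for batch in range(10):
            sample = random.choices(range(N), cum_weights=cum, k=10_000)
            new = sum(1 for w in sample if w not in seen)
            seen.update(sample)
            print(f"batch {batch}: {new} previously unseen word forms")

        # The count of unseen forms declines only slowly from batch to
        # batch: the long tail keeps supplying novelty.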

  64. @Jeff Read,

    Hm, I don’t really know enough to argue with this; due to lack of time, the only games I played in the last 5-6 years were the Assassin’s Creed series and Mount & Blade Warband with historical mods. (The latter is highly recommended, and also works on Ubuntu. Any history buff will be astounded by the Brytenwalda or Gekokujo mods.)

    But what amazed me in the AC series is the kind of behavior that I would not expect from a chaotic sweatshop. For example, they got walking right. In a LOT of older 3D games, e.g. Oblivion, the speed of walking did not really match the speed of leg movements, and thus you saw an immersion-breaking “slipping”, as if walking on ice. Getting it right sounds like some decent testing to me. Same with the famous building-climbing of the series; it would be all too easy to screw that up and have bugs where you move up without the climbing animation actually playing, or grab something where there is nothing. It seems they really tested it well, and that is pretty amazing if you consider that for AC2 they modeled much of Renaissance Florence, San Gimignano and half of Venice. A lot of buildings to test. So it is surprisingly well done work, esp. the AC2-Brotherhood-Revelations subseries.

  65. Eric, Scott Alexander has an interesting blog thread running over at SlateStarCodex on AI issues. It got me thinking about this post in that context. If some programmer tried to develop an AI to wade into the Zeno tarpit in order to attack a Zipf distribution of edge cases via brute force, might that have unpredictable consequences (e.g. continuously re-writing itself after every failure event for ever-diminishing marginal returns)?

  66. >>was CVS really that bad ;)

    >Yes. Yes, it was.

    But it was good enough to be usable, in a really useful role. Like a mediocre but good enough compiler or editor, a mediocre but good enough source code control system is much better than none at all. So yes, it was bad, but not as bad as the shops that didn’t use version control even when CVS was free.

  67. I’m not entirely certain that the git/bazaar model works as well in all environments as is otherwise indicated.

    At my employer, the main product I work on is akin to a full OS release in size. We use a centralized model (Perforce), though git is used internally for some workflows as well. Where I see issues with the git model is when you start interacting with bug tracking. If somebody encounters a bug with a certain version of the software, how do you tag that appropriately in the bug tracking system? If you have multiple people reporting a similar issue in multiple trees, determining whether you have a fix for the problem can be difficult.
    As much as Perforce gives me hives, if somebody files a bug from a build at change #1000, and I already fixed the issue (or, probably fixed the issue) at change #1005, I can easily mark one bug as a duplicate of another.
    If I don’t have access to the tree that somebody is using (because it is offline), I don’t have any way of knowing if there’s already a solution for the problem.
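
    For what it’s worth, git itself can answer the “did build X already contain fix Y?” question, provided the bug report records the commit hash the build was made from. A sketch (the helper name here is mine; `git merge-base --is-ancestor` is a real subcommand that exits 0 when the first commit is an ancestor of the second):

        import subprocess

        def build_contains_fix(fix_commit, build_commit, repo="."):
            """True if build_commit's history already includes fix_commit."""
            result = subprocess.run(
                ["git", "-C", repo, "merge-base", "--is-ancestor",
                 fix_commit, build_commit],
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
            return result.returncode == 0

    What this cannot do, as noted above, is see into trees that were never pushed anywhere you can reach.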

    One benefit that the git model does provide is integral code review. Since you have to take changes individually, you pretty much have to look at them. This can increase the quality of the code automatically.

    The problem Linus mentions, as I see it, falls into two categories:
    1) Griefers who check in crap to be annoying
    2) Somebody who checks in a crap change.

    #1 can be addressed automatically by, perhaps, adding some kind of automatic layer which would validate that there’s a traceable human being on the other end of the line. For example, a phone text-back message.

    #2 Changes which are crap need to be backed out, regardless of whether the individual change was bad or the person is incompetent. The solution to this is to make it easy to revert individual changes.

    Then you can give everybody commit access. Voila. No “politics”. Of course, determining what is a “good” change is as much an emotional, and therefore political, decision as a technical decision. But I find software developers don’t really want to consider that.

  68. Shenpen, the first AC game was notorious for its bugginess, which caused it to suffer in the reviews. Successors have improved the situation somewhat, with a few (such as Unity) backsliding into bugginess. We just had a guy from NVIDIA come out and say that NVIDIA’s drivers contain performance hacks and other hacks to get certain big-name titles simply working, because they flagrantly violate the OGL or D3D state models and were only tested with some of last year’s video cards.

    And getting walking speed right is comparatively easy. You know a studio has its shit together when the character’s walk uses IK to take terrain into account (as in The Legend of Zelda: The Wind Waker) or when the character’s walk has realistic interpolated transitions for when the character abruptly changes direction (Uncharted games). Uncharted is itself an unusual case because the developers are old Lisp-heads. Much of the higher level code is written in Scheme, while the engine code itself is written in C++. This gives the developers an easy way to script things like those aforementioned interpolated animations, and by taking a subset of the Scheme scripting language they create an easy-to-learn DSL that they can give to the art department to let them place and configure game assets, etc.

    As for whether a well-tested game could be developed by a sweatshop — what do you think is a major cause of sweatshop conditions? Playtesting video games is exhausting, grueling work that’s often farmed out for very low pay to eager 20-somethings looking to get a foot into “the industry”. This Penny Arcade comic exaggerates less than you might think:

    http://www.penny-arcade.com/comic/2010/01/25

  69. @ESR @all

    I’d like to play a little game here, a little test. I’d like to check how easily or with how much difficulty open source hackers solve problems of the type that enterprisey folks tend to face. So consider the following a little brain-teaser game:

    You got a company in the US that buys and sells widgets, and all the transactions, purchases and sales, are kept in one table. It has fields like the article number, the date of the transaction, the customer, or the sales price if it is a sales transaction, but two fields are of the biggest importance for now: quantity and value. When you buy something you get a positive quantity, as it is coming into your inventory, and when you sell it you get a negative quantity, as it is going out. The sum – a simple SQL sum per article number and/or per location – of quantity gives you your current inventory at hand. And it is the only way to tell how much you have, for storing such a figure in its own field is considered a sin, as it could get out of sync with the sum, with the transactions. So far it is easy.

    You have another field, value, in $. If it is a purchase transaction, it is the purchase price. If it is a sales transaction, it is the $ purchase price of the purchase transaction it ostensibly came from. So, for example, you have a value $10 purchase transaction, then a value -$10 sales transaction with sales price $12: your profit is $2 and your inventory value is $0, as now you have nothing.

    Now comes the fun part. You have a subsidiary in the UK, and you want these types of transactions from them as well, as you want to know their inventory quantity, value, profitability and all that. So every night they export their transactions to a CSV file, SFTP it over the Atlantic, and it gets imported into your database. These guys convert all the GBP values that they have in their database into USD at the current day’s exchange rate. This way you can easily keep it all in one database and make reports over the company group.

    However, one day you find out the following. They purchased 10 widgets, value converted is $50, then sold 5 and 5 widgets, sales price irrelevant, and the value (which should come from the purchase) is not $25 each but $23 each. So now your report shows 0 inventory quantity and $4 inventory value, and the managers are all in panic because You Broke Logic Itself. You look into it and find it is simply because the exchange rate changed between the purchase and sales transactions.

    What’s next? 1. How surprised are you? Any chance you could have foreseen this? 2. Suppose there is no cooperation on the other side of the link, because the upload is made by a COBOL program someone wrote 25 years ago that nobody can maintain and they do not dare to touch. You have to solve it on your end yourself. How hard is it? (This is a simplified version of a real problem that also included multiple locations and other ways of inventory increase than purchases.)
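
    To make the arithmetic concrete (a minimal sketch; the exchange rates and the GBP figures are hypothetical, chosen to reproduce the numbers above):

        # The UK side converts every transaction at the exchange rate of
        # the day it is exported, so the purchase leg and the sale legs
        # of the same stock no longer cancel.
        RATE_PURCHASE_DAY = 1.25   # assumed USD per GBP
        RATE_SALES_DAY = 1.15      # assumed USD per GBP, some days later

        transactions = [
            (+10, 40.0, RATE_PURCHASE_DAY),  # buy 10 widgets: GBP 40 -> $50
            (-5, -20.0, RATE_SALES_DAY),     # sell 5: cost leg GBP 20 -> $23
            (-5, -20.0, RATE_SALES_DAY),     # sell 5: cost leg GBP 20 -> $23
        ]

        qty = sum(q for q, v, r in transactions)
        value_usd = sum(v * r for q, v, r in transactions)
        print(qty, round(value_usd, 2))      # -> 0 4.0: no stock, $4 "value"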

    1. >I’d like to play a little game here, a little test.

      I tried to respond to this twice while on the road last Friday, but gave up after the arrangement of my laptop keyboard tricked me into deleting the text twice.

      No, I wasn’t surprised. I know less about the day-to-day financial mechanics of running a business than I should, but this puzzle is recognizably isomorphic to situations that come up whenever you have to do data reduction from sensors that are both noisy and sampled. Alarm bells rang in my mind twice: once when you mentioned multiplying by exchange rates, which are a noisy random variable, once when you got to sampling the exchange rate at much longer intervals than its normal almost-stable range.

      If you have, or can develop, a mental model of the kinds of induced errors these situations produce, solving the puzzle is not difficult. I don’t think most hackers would be fazed for long.

  70. @Shenpen, the obvious answer is to book all variation in the exchange rate to profit or loss on the business, and to keep the inventory valued at the original purchase price.

    I am not sure of the accounting practices though which deal with multiple currencies, but I think this should work.

  71. Also when converting between currencies, keep in mind that it may lead to the absurdity that you actually made a profit on the transaction in terms of the originating currency but, because of the variation in exchange rate, lost money in the target currency.

    Such are the stupidities of corporate reporting though. The ideal way would be to deal with currencies separately in reports and then overall see the financial-year end profit and loss at prevailing exchange rates.

    I am a lawyer, not an accountant, but blind short-term currency conversions may lead to accounting and audit irregularities in the long run, and possibly to misleading profit/loss figures.

  72. @Shenpen, what I am harping at is that this is a business level issue, which should be solved by businessmen and auditors and then the IT Team should be instructed how to deal with the business logic to solve the multiple currency issue.

    Blindly applying a technological solution to the issue may actually lead to problems.

  73. >what I am harping at is that this is a business level issue, which should be solved by businessmen and auditors and then the IT Team

    Yes, but in the example, _you_ are all of that. That is the whole point. Only very large businesses can afford to hire multiple levels of people for things like this, and in order to land decently paid jobs without much difficulty, one needs technical and domain knowledge both; it is very hard today to sell domain knowledge only (“I specify it and someone else will program it”) or technical knowledge only (“I need a spec!”). Usually only the combination of both can be sold, i.e. hired, for a decent salary, which over here means over EUR 50K a year.

    So in our example, you are hired as the accounting software expert and do everything from these “business” decisions to programming.

    Note that from the CEO’s angle there is no business decision here. He already made it: “Deliver me correct data!” and from his angle the rest is technical. Your ideas are all “technical” from this viewpoint, accounting-technical, details the CEO will not bother with so it is delegated to you.

    BTW there is actually a programming solution to it, without changing the other end. But let’s not spoil the game as of yet; maybe in 1-2 days.

  74. > 1. How surprised you are? Any chance you could have foreseen this?
    > 2. You have to solve it on your end yourself. How hard is it?

    1- I’m not surprised at all.
    The moment I read “convert all the GBP values into USD at the current day’s exchange rate”, alarm bells went off.

    2- Doesn’t sound too hard. It shouldn’t be too difficult to undo the GBP -> USD conversions and redo them in a way that makes the whole thing work again, e.g. by converting sales price and purchase price at the same conversion rate (see the sketch at the end of this comment).
    Wouldn’t that fix it ?
    But then again, I’m not a programmer/hacker so maybe that doesn’t count.

    I agree with hari that this isn’t the way to go about this. It gives the CEO an indication of stock and profits in a currency he’s familiar with and maybe that’s all he wants, but AFAIK accounting in multiple currencies is not done by converting everything to 1 currency and throwing everything on the same pile.
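
    Here is roughly what that re-conversion looks like (a sketch using the same hypothetical figures as above; the cost legs of each sale are converted at the rate of the purchase they drew from):

        PURCHASE_RATE = 1.25                 # assumed USD/GBP on purchase day

        legs_gbp = [+40.0, -20.0, -20.0]     # purchase, then two sale cost legs
        value_usd = sum(v * PURCHASE_RATE for v in legs_gbp)
        print(round(value_usd, 2))           # -> 0.0: quantity and value agree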

  75. Wouldn’t you need to record every transaction in the currency it was made in, as well as snapshot the exchange rate at time of settlement?

  76. Jay Maynard on 2015-05-23 at 21:04:49 said:
    > Wouldn’t you need to record every transaction in the currency it was made in, as well as
    > snapshot the exchange rate at time of settlement?

    For proper accounting, that’s pretty much exactly what you would have to do. You’d need to determine both the amount of the sale and the cost of goods sold at the time of the transaction, and record both. COGS could not be reasonably determined after the fact.

    This is probably why so many international companies use various subsidiaries – to allow separate books to keep track of transactions in various currencies.

  77. Hmm – this thread started out with some interesting directions, but bit rot and entropy win again.

    Why are we discussing CEO pay and financial settlements???

  78. @conrad

    Sorry, my bad. The discussion began on difficult and timesinky problems, and I just wanted to show what typical difficult and timesinky problems look like in a programming field open source hackers are usually not very familiar with, as a comparison. Perhaps there is a lesson to be drawn from here. Such as: difficulty is constant; lower technical difficulty means higher, let’s say, design or domain difficulty, because ultimately everybody will gravitate into positions where the total sum of difficulties is about as much as he/she can bear.

  79. This discussion is premised on the idea that there is a well-defined final target that a project is aiming to achieve? Another way of addressing the problem IS to change the goal posts? I’m just battling with a new Android phone, and one would expect that, 2 years on, the problems on earlier devices would perhaps be fixed by now? It still has the hardware problem where the device ‘locks up’ when you plug the power in, and you have to switch it off and on again to regain control. But more problematic is the fact that many of the core functions on the new one are simply not able to handle landscape working … something that the older phone handles flawlessly! My phone is also my sat nav and so sits in landscape when mounted in the car, but it’s not actually usable safely as a phone in that mode. So one might wonder just whose decision ‘simplified’ the design goal in order to make the job easier … when the original software did not even have a problem!

  80. Would you say that if an edge case requires code that is much too complex relative to the likelihood of that case ever appearing, it is better to document the case and warn the user to stay out of harm’s way?

  81. An earlier and less recursive version of this is the Rule of 90s: the first 90% of the program takes up 90% of the schedule, and the remaining 10% takes up the other 90% of the schedule.

    Part of the underlying problem, as noted above, is the natural tendency to go for the easy portions first and put off the hardest stuff until later. I wrote an article about this back in 2008, then reposted it on one of the websites a few years ago (“Do not defer the difficult in IT projects”). I closed the article with this analogy:

    I have a mental image that I associate with this particular challenge. Imagine mixing flour, sugar, and spices together in a bowl — as if you’re making cookies — and then throwing in half a dozen walnuts, still in their shells. Now put the mixture into a sieve and sift it all together.

    Everything goes through just fine — except for those walnuts. It doesn’t matter how long or hard you work the sieve, the walnuts will just roll around and not go through. You have to take them out, crack them open, remove the nutmeat and grind it to a fine powder before you can get it through the sieve. Nothing else will do.

  82. @ Shenpen

    1. Not surprised at all – exchange rates will change between transactions.

    2. Very easy, but you do need the exchange rates used for each day.

    The value of Stock-On-Hand in any location/region/whatever should not be calculated from purchase/sales data directly; it should be calculated as:

    Quantity_On_Hand x Item_Value x Exchange_Rate_Today

    You only have to change the script(s) that use/calculate Value-of-Quantity-On-Hand. If this value is stored in the database (same sin – different value), it would have to be overwritten, of course.

  83. But just look at the advantages: the Achilles and Tortoise paradigm promotes community (look at all the comments and interaction here). It keeps people in work (extra hours expended, potentially to infinity), and stops A.I. ruling the planet because we can never get it to reach perfection.

    Thanks be that nature adheres to the same – else it would only provide exactly as many seeds per plant as are needed to re-grow one more plant and perhaps feed the odd butterfly. Then what would we do for bread?

    I suppose a poetic vein is extraneous here, but what the hey.

  84. One important thing to consider is that knowing what a “Zeno Tarpit” is does not necessarily give you the wisdom to detect them without false positives.

    There are also pseudo-tarpits — cases where the 80% solution is greatly simpler than the 99% solution, but the 100% solution is feasible. A programmer who wrongly takes the initial difficulty as proof he is in a tarpit will then do his users a disservice by shipping something that was avoidably faulty.

    I’m surprised the “Worse is Better” essay hasn’t been mentioned here yet. “Worse is Better” can be seen as a complaint that the C/Unix culture treats all problems as if they were tarpits.

  85. @ Shenpen

    I said that Value_of_Stock_On_Hand should be calculated as:

    Quantity_On_Hand x Item_Value x Exchange_Rate_Today

    I just realized that this is incorrect for the problem you specified – finding this value in US$. It should be calculated as:

    Quantity_On_Hand x Item_Value_USD
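
    In code, the corrected calculation might look like this (a sketch over Shenpen’s hypothetical table; Item_Value_USD is derived from the purchase legs, so no fresh currency conversion enters the picture):

        rows = [(+10, 50.0), (-5, -23.0), (-5, -23.0)]   # (qty, value_usd)

        qty_on_hand = sum(q for q, v in rows)
        purchase_qty = sum(q for q, v in rows if q > 0)
        purchase_value = sum(v for q, v in rows if q > 0)
        item_value_usd = purchase_value / purchase_qty   # $5 per widget

        print(qty_on_hand * item_value_usd)              # -> 0.0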

  86. Huh. I’m looking at this and thinking of the LibreOffice work on importing and exporting MS Office formats. There’s a pile of fixes every day and has been for years, but the actual task is a Zeno Tarpit, as they iteratively reconstruct the glitches of MS Office well enough that users will be happy. A stupidly messy job, though they’ve got a pretty good handle on how to make this mess work such that it can be done at all in a maintainable fashion (tl;dr unit tests out the wazoo).

  87. If you have or can develop a mental model of the kinds of induced errors these situations produce, solving the puzzle is not difficult. I don’t think most hackers would be fazed for long.

    The overall point that Shenpen was trying to make is that in enterprise software development — the vast bulk of software development — you can’t punt on the complexities nor smooth them over into a nice neat logical normal form without losing valuable information. Your software is going to be riddled with irreducible messiness because you aren’t being paid to produce an 80% solution, you are being paid to produce a SOLUTION. And the user experience HAS to be smooth. Under that last constraint, Unix style composability goes straight out the window because the interface and the software design itself is full of things that make no sense from a mathematical or logical stance, nor from an engineering stance that favors virtues like composability or even maintainability — but they fit the worldview of the customer (who is ALWAYS RIGHT). I call such features “percent keys” after I read this article on calculator percent keys, which do nothing to encapsulate a useful or consistent concept of percentages but provide an easy way of calculating taxes, tips, and discounts.

    As an enterprise developer your code is going to be full of “percent keys”. Because the most important software development isn’t for hackers; it’s for the people doing data entry in the world’s cube farms, warehouses, and factory floors.

    1. >The overall point that Shenpen was trying to make is that in enterprise software development — the vast bulk of software development — you can’t punt on the complexities nor smooth them over into a nice neat logical normal form without losing valuable information.

      And my point is that it is not only “enterprise” programming that is like this. So are the repository and document-markup conversions I do all the time.

      > Under that last constraint, Unix style composability goes straight out the window because the interface and the software design itself is full of things that make no sense from a mathematical or logical stance, nor from an engineering stance that favors virtues like composability or even maintainability

      This claim is so full of false assumptions it’s bursting. Unix-style composability and flexibility is more important in a messy environment, not less – it’s a way of reducing the overhead of your experiments so you can converge on a solution more rapidly.

  88. The “Achilles paradox” is NOT a paradox for today’s mathematicians (it USED TO be, for the Ancient Greeks). Today’s mathematicians know it is an infinite series that sums to a finite constant.
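
    In modern notation (a standard textbook gloss, not part of the original comment): if the tortoise has a head start $d$ and Achilles runs $k > 1$ times as fast, the distances Achilles must cover form a geometric series with a finite sum,

    $$d + \frac{d}{k} + \frac{d}{k^2} + \cdots \;=\; \sum_{n=0}^{\infty} \frac{d}{k^n} \;=\; \frac{dk}{k-1},$$

    so he overtakes the tortoise after running a perfectly finite distance.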

    And your “Zeno Tarpit” problem is not a problem if you bother to collect as many input samples as possible, and actually design your app instead of starting to code as soon as possible. Similarly, for production apps, start by collecting use cases and design your app. Everything else is off-standard (for converters) or off-scope (for production). But this has to be defined before you start coding.

    Of course, this goes against the “just hack it” approach you propose. It’s the just-hack-it approach that has given us the GIMP (no CMYK support in a content production program, nobody bothered to gather use cases, and once the code is written, good luck changing it without a major redesign).

    1. >And your “Zeno Tarpit” problem is not a problem if you bother to collect as much input samples as possible, and actually design your app instead of starting coding as soon as possible.

      There speaks the voice of deep naivete.

      You can have an essentially complete dataset at the start of your project and still land in a Zeno tarpit. This was the case, for example, for doclifter. While I didn’t have a set of all Unix manual pages ever to test against, essentially all the strange pathological cases I ever ran into in the succeeding 5 years were already present the first time I tested against the manpages tree of a full Linux distribution.

      What makes for a tarpit is when your input data format is unconstrained and you get a Zipf-like distribution with mostly well-formed ones but a long right tail of weirder and weirder outlier cases. I’m certain interpreting Microsoft Office documents has the same statistics.

      The distinction between designing your application and “just hacking” vanishes in that right tail. You can write clean, “designed” code to cover the non-weird cases, but it won’t stay clean as you chew your way down the tail because the logic of the code has to follow the increasing ill-formedness of the data.

  89. > And your “Zeno Tarpit” problem is not a problem if you bother to collect as much input samples as possible, and actually design your app instead of starting coding as soon as possible.

    No, per LibreOffice on MS Office formats. Per the famous Ben Goldacre saying: “I think you’ll find it’s a bit more complicated than that.”

  90. @kurkosdr
    > And your “Zeno Tarpit” problem is not a problem if you bother to collect as much input samples as possible, and actually design your app instead of starting coding as soon as possible.

    You sound like something out of the 1950s. Your response makes me think of clean room labs, IBM programmers in white coats and big buzzing machines in the background. Which is to say the software design methodology you propose here is about fifty years out of date.

    This approach, where all or most of the requirements are known up front, just didn’t work in practice except in special narrow cases. The progress in software development since then, starting probably with object-oriented programming, though arguably going back to structured programming, has been to design software that is robust to changing requirements. This reaches its culmination in the group of methodologies today under the banner of agile. The agility is in reference to quick adaptability to changing and morphing requirements.

    > Of course, this goes against the “just hack it” approach you propose.

    Eric proposed no such thing. If his goal had been to produce a system that worked with the gnarliest repositories from the beginning, he would never have actually finished. As it is, we all have a tool that works with nearly everything.

  91. @Shenpen

    The problem in your hypothetical is that an outside entity (the SFTPed file) presumes to tell the program what the “value” of something being taken out of inventory is.

    You have some options for calculating the cost of an item: you can either use LIFO or FIFO logic to tie individual items to their purchase prices, or you can use an aggregate system that takes the current total cost of the items in inventory divided by the number of units to produce the unit cost of the items being sold. In any event, when you sell the exact number of units in inventory, they sell at exactly the total cost they represent now.
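
    The aggregate approach is simple enough to sketch (hypothetical figures matching the earlier example; this is the moving-average-cost method, one of several acceptable ones):

        class Inventory:
            def __init__(self):
                self.qty = 0
                self.total_cost = 0.0

            def purchase(self, qty, cost):
                # Purchases add both units and cost to the pool.
                self.qty += qty
                self.total_cost += cost

            def sell(self, qty):
                # Sales are costed at the current average unit cost.
                cogs = self.total_cost * qty / self.qty
                self.qty -= qty
                self.total_cost -= cogs
                return cogs      # cost of goods sold for this sale

        inv = Inventory()
        inv.purchase(10, 50.0)
        print(inv.sell(5), inv.sell(5))   # -> 25.0 25.0
        print(inv.qty, inv.total_cost)    # -> 0 0.0

    Selling the whole pool always zeroes both quantity and value, which is exactly the invariant the panicked managers wanted.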

  92. And my point is that it is not only “enterprise” programming that is like this. So are the repository and document-markup conversions I do all the time.

    What is cvs-fast-export but a tool to smooth over the messiness into a nice neat canonical form? Unfortunately once you do this, unless your canonical form encapsulates and represents ALL the nuances of the forms you’re translating from, you lose, because valuable information is lost. This is true of Git, which has a different history model than other VCSes; and it’s true of the mother of all smoothed-over canonical forms: Unicode. If you’re an English speaker, Unicode looks like a huge win; the farther you drift from that ideal the more it looks like a huge pain in the ass at best and a culturally insensitive clusterfuck at worst.

    This claim is so full of false assumptions it’s bursting. Unix-style composability and flexibility is more important in a messy environment, not less – it’s a way of reducing the overhead of your experiments so you can converge on a solution more rapidly.

    Huh. That must be why internal Windows server deployments vastly outnumber Unix including Linux in the enterprise.

    If you do a lot of experimentation involving things like grokking the undocumented wire protocol of finicky Garmin GPS mice, what you say is true. And this is why robotics, sensor, and other embedded devs love them some Linux. But most enterprise development involves very little experimentation because most enterprise problems have been solved already. Competent enterprise devs are analysts first and foremost; the hard bit comes from figuring out the business logic and applying combinations of known solutions to the business problems at hand. Experimentation can be done at home on your Raspberry Pi; the invoicing system needs to be finished on time and under budget. As such, iterating towards a solution in the enterprise space is best done with integrated RAD tools such as Access, Visual Basic, or the tools supplied with modern ERP programs, which integrate all the pieces except the business logic which is the hard part.

    I know a fellow who has designed Web sites with database back ends and entire corporate workflows from within Access. All the HTML is generated with VBA code from templates and database data. It is a very potent tool for prototyping and “converging on a solution” without the some-assembly-required approach of Unix tooling.

  93. > Huh. That must be why internal Windows server deployments vastly outnumber Unix including Linux in the enterprise.

    That’s a surprising claim. Do you have numbers? I can see plausible ways for this to be true, but I’d like to see what the numbers say.

  94. > What makes for a tarpit is when your input data format is unconstrained and you get a Zipf-like distribution with mostly well-formed ones but a long right tail of weirder and weirder outlier cases. I’m certain interpreting Microsoft Office documents has the same statistics.

    This appears to be approximately the case. I wouldn’t expect any sanity-respecting human to do so, but it’s like a continuous 80%-20%-80%-20% cycle.

    (It doesn’t help that ODF, and I strongly suspect OOXML, specifies semantics but does not specify interpretation … and interpretation is what the naïve end users care a whole lot about. So OOXML is not just a sprawling incoherent ramble, it’s a sprawling incoherent ramble that leaves out half the story. Frankly, Miklos Vajna deserves a medal for bravery in the face of brain-eating computational prions.)

  95. You can write clean, “designed” code to cover the non-weird cases, but it won’t stay clean as you chew your way down the tail because the logic of the code has to follow the increasing ill-formedness of the data.

    I would call such code “optimally clean” – it’s as clean as it could be, given the GI- it’s required to accept while still outputting something that is not -GO.

    And if you could write any code that could handle all such cases, then it’d be strong AI, whether designed or not….

  96. @David Gerard
    “So OOXML is not just a sprawling incoherent ramble, it’s a sprawling incoherent ramble that leaves out half the story.”

    OOXML is a linearized memory dump of the internal document model of MS Word. There is nothing more. Except the occasional binary blob.

  97. > OOXML is a linearized memory dump of the internal document model of MS Word. There is nothing more. Except the occasional binary blob.

    The binary .doc/.xls/.ppt are actually worse. See this post and its comments.

    (And the .doc files from Word 2007 and up are the binary blob … with fragments of XML dropped into it. Oy.)

  98. @David Gerard
    “The binary .doc/.xls/.ppt are actually worse. See this post and its comments.”

    All MS Office formats were memory dumps of the internal states of the respective binaries. DOCX and OOXML are special in that they serialized the dumps and added XML tags.

    OOXML was super special in that it actually was documented. Sadly, the documented version did not correspond to the file formats of any MS program (but maybe it worked in 2012?).
    https://blogs.fsfe.org/gerloff/2012/08/16/microsoft-to-finally-support-ooxml-odf-1-2/

  99. @Winter – oh yes, I did follow this saga at the time. There’s no such thing as OOXML in the wild, there’s just what MS Office happens to take in or put out, with wacky new behaviours in each version. It’s slightly amazing LibreOffice does as well as it does.

  100. > OOXML is a linearized memory dump of the internal document model of MS Word. There is nothing more. Except the occasional binary blob.

    The other half of the story, then, is MS Word itself, which any other interpreter must implement an equivalent to.

  101. “> OOXML is a linearized memory dump of the internal document model of MS Word. There is nothing more. Except the occasional binary blob.

    The other half of the story, then, is MS Word itself, which any other interpreter must implement an equivalent to.”

    Open/LibreOffice do a much better job than M$ at opening older files, which many customers now have trouble with. It IS a much better interpreter for these types of documents, with the added bonus of a PDF converter to maintain readable versions of legacy documents.

  102. I have in fact had to sysadmin a system which involved an embedded copy of OpenOffice.org, started from a cgi-bin script as needed, to convert .doc to PDF. This is because .doc is such an utter shower that it takes something approximately the size and complexity of MS Word to deal with it. (We thankfully talked them out of making us install embedded Word instead, by offering to charge them the cost of a Windows sysadmin to run it.) The .doc files were end users’ edited versions of defective RTF files (that wouldn’t open properly in anything else but Word) produced from XML and some XSLT. It also kept said .doc files in a Subversion repo, which was not operated using svn binaries but twiddled by the relevant Java libs. I quite delighted in deleting all traces of that system personally when it was finally decommissioned.

  103. @David Gerard
    ” I quite delighted in deleting all traces of that system personally when it was finally decommissioned.”

    There is special delight in issuing:
    shred -zu *

  104. @winter – asking the (really very good) lead dev responsible for most of the design decisions why things were the way they were was most entertaining, as he explained with a pained face that these were each about the least worst way of dealing with the circumstances as they stood. The spec was the user thinking they could get a magical flying unicorn pony by demanding it loudly and writing detailed specifications for the wing feathers. One bit (automatically merging changes between the base RTF and the users’ edited .doc files) would have required implementing strong AI.

  105. (And the moral of this story is: open source has quite a way to go before it measures up to the fit and finish of enterprise software.)

    [Diva’s Law of Software: quality is inversely proportional to sticker price.]

  106. After looking at several of the examples given in this post and thread (doclifter, DOCX file compatibility, repository export), it strikes me that all of these examples revolve around parsing some (typically poorly documented, or perhaps incomplete) data format, the type of problem which would be mathematically modeled by a context-free grammar.

    Each project therefore must (in part) determine the universality of said grammar (or more likely, the inclusivity: that the set of statements it accepts completely contains those of the generating CFG). Unfortunately, if this formal definition is not publicly available for use, the question becomes mathematically undecidable.

    If this intuition is correct, that means that Zeno Tarpit problems are literally as bad as possible: textbook examples of problems mathematicians have proven are impossible to completely solve. You can achieve whatever desired percentage of coverage, but 100% will always remain just out of reach.
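
    For the record (a standard result from formal language theory, not stated above): the situation is even worse than “no public definition” suggests. For context-free grammars $G_1, G_2$, both containment and equivalence,

    $$L(G_1) \supseteq L(G_2) \qquad\text{and}\qquad L(G_1) = L(G_2),$$

    are undecidable even when both grammars are given explicitly; only membership $w \in L(G)$ is decidable (e.g. by the CYK algorithm in $O(|w|^3)$ time). Having the formal grammar in hand narrows the problem but does not make the coverage question answerable in general.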

    1. >If this intuition is correct, that means that Zeno Tarpit problems are literally as bad as possible:

      I believe you’re right. Probably why I gravitate to them – easy problems are for lesser men. :-)

  107. >Each project therefore must (in part) determine the universality of said grammar (or more likely, the inclusivity: that the set of statements it accepts completely contains those of the generating CFG). Unfortunately, if this formal definition is not publicly available for use, the question becomes mathematically undecidable.

    The above is why Wikipedia only just has a visual editor in 2015, rather than 2005: the MediaWiki syntax was literally defined as whatever happened to emerge from a string of PCRE-ish regular expressions in PHP, resulting in something that was, e.g., provably not EBNFable. And there was a few billion words of legacy content to account for. So putting this magickal guacamole into a computer-manageable form became a Herculean effort, and is just about usable two years after its initial release. A truly spectacular Zeno tarpit. Out of tiny acorns of unfortunate decisions are huge oak trees of technical debt born.

  108. “OOXML is a linearized memory dump of the internal document model of MS Word.”

    Nobody cares if OOXML is a memory dump or whatever.

    Have you seen the hacks multimedia files contain? Let me give you a taste: new features put in separate files (mpg/vob files accompanied by ifo files), support for interlace (barf, but necessary), separate yuv-rgb conversion tables for mpeg1 and mpeg2, separate aspect ratio commands for mpeg1 and mpeg2, encoding surround sound as stereo and storing the diff (Dolby Digital, for easy downmixing to stereo). Then there is AVI, and particularly Divx AVIs, which are a massive pile of hacks. Double indexes (old and new), weird audio muxing settings (Dolby Digital requires its own settings), and mpeg4asp (aka the Divx/Xvid video stream) muxing that sometimes combines two frames together (packed bitstream, where P and B frames are merged) for compat with old decoders, but sometimes doesn’t combine them. Subtitles (XSUBS) are hacked into the video stream.

    BUT, compatibility is very good (I am surprised by how Divx AVI files almost never fail to play; only Android used to have a problem with some packed-bitstream files).

    You know why? Because, as the previous guy said, there is a consensus on how to interpret things, defined in the standard (Divx Home Theater is the name).

    I can understand OOXML not bothering to define that (it’s Microsoft). I cannot understand why ODF doesn’t bother to define that. Is it because LibreOffice cargo-cults the MS Office team, and copies good and bad things from them? What other explanation is there?

  109. Every piano is a Zeno tarpit – from top to bottom. Whenever you open the lid, you have 200-300 non-terminating, non-repeating decimals staring you in the face and taunting you! Knowing your task is ultimately imperfectable (100% “in-tune”) from the word go, you must wrangle the damn thing into some sort of concordance with the laws of music and human perception. You would think this would be a standard order (an algorithm, so to speak) that could be automated outright, but it would seem there is a high level of discretion at work in such a seemingly monotonous (ha ha!) task. Yes, there are certain theoretical rules that can assist you as you go, and yes, we have machines which can now consistently squeeze out 80% of a fine performance (most of which the world seems perfectly content with, not knowing any better).

    Viewing a piano thus (as a general-purpose computing machine running a variety of software), it becomes patently obvious after a while that what is important to the average user is almost diametrically opposite to what is important to egghead tuners (i.e. programmers). Years upon years have been wasted on perfecting the perfect temperament, yet none of that matters once a single unison goes off. Therefore, young tuners should spend 80% of their time practicing unisons (especially in the middle of the keyboard), before ever moving on to the remaining 20% of octaves, major intervals, ephemeral temperaments, or setting a standard pitch. Yet this is the exact opposite of what young tuners are tested on. No wonder piano performance has died.

    So, a piano which is 80% good on unisons, 10% good on octaves, 5% good on intervals, 3% good on temperament, and 1% good on overall pitch, is a pretty damn good machine! Lots of times, it is much worse than that. But the wizened old pros know where to hide the edge cases. ;-)

    Perfect software is a myth. If tuners took as much time to tune pianos as many programmers do to perfect code, there would be no music.

  110. @kurkosdr
    “Nobody cares if OOXML is a memory dump or whatever.”

    That has been empirically tested, and it is wrong. Interoperability with OOXML/DOCX documents would be a $100B+ market, if it were possible.

    OOXML does not describe the document, but the memory state of Word when the document is loaded. ODF describes the document. The result is that to be able to load an OOXML document, you must emulate the internal states of Word, a proprietary application covered by patents.

    Therefore, it is extremely difficult to create a DOC/DOCX/OOXML processor without the original code from MS. The mere fact that OpenOffice has succeeded in this, and even better than MS itself for the older versions of Word, is close to a miracle.

    @kurkosdr
    “I can understand OOXML not bothering to define that (it’s Microsoft). I cannot understand why ODF doesn’t bother to define that.”

    There is a 6000 page ISO specification of OOXML which should define it completely. It is insufficient, mostly because MS Word/Office does not implement OOXML.

    ODF is a fully specified document standard. I have not heard of (many) problems of interpretation of this standard. There are at least two independent implementations of ODF (probably more now).

  111. @kurkosdr:

    Nobody cares if OOXML is a memory dump or whatever.

    Maybe, just maybe, that is literally the problem: nobody (including Microsoft) cares about the OOXML format. After all, when you’re providing a memory dump as your save format, you can’t (easily) make any changes to your build environment — such as changing compilers — because that would hamper use of that memory dump; you can’t make changes to your execution environment — such as migrating from 32 to 64 bit architectures; and you can’t ever “squash” bugs and fully remove them from your program.

    There is a fabulous blog on these sorts of problems entitled “OOXML is defective by design” — something I would highly suggest you read if you truly believe that the memory dump design doesn’t matter (as implied by your “nobody cares”). In fact, let me just quote one tiny bit from the earliest post on that blog:

    “Ironically enough, you will not only have to implement this stuff (reverse engineering since it is not addressed by the ECMA 376 documentation), you will have to implement in a way that reproduces current Office flaws. No matter how correct your implementation is, you have to retrofit it to work just like Office does.”

    So yes, I would suspect your statement to be true: nobody cares about the OOXML standard, including Microsoft, and that is actually the problem.

  112. @Jeff Read
    >If you’re an English speaker, Unicode looks like a huge win; the farther you drift from that ideal the more it looks like a huge pain in the ass at best and a culturally insensitive clusterfuck at worst.

    What in the hell is “culturally insensitive” about Unicode? It allows a unique code for every character in every human language (as well as some inhuman languages like Klingon) and properly distinguishes between visually-similar-but-syntactically-different characters such as Roman P and Greek Rho.

    The only thing the least bit “insensitive” about it is the preferential position of ASCII in the lowest-numbered positions, which therefore get the shortest UTF-8 encodings. But since America invented the Internet, it’s a reasonable compromise. When I read music, I accept the fact that Italian terminology is used, because Italian musicians invented those aspects of the notation system. But maybe it’s because I’m an Ellisian.
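
    The ASCII bias is easy to quantify (a quick sketch; the sample characters are arbitrary):

        # UTF-8 gives the 128 ASCII code points one-byte encodings;
        # everything else takes two to four bytes per character.
        for ch in "A", "é", "Ж", "中", "𝕏":
            print(ch, f"U+{ord(ch):04X}", len(ch.encode("utf-8")), "bytes")
        # A   U+0041   1 bytes
        # é   U+00E9   2 bytes
        # Ж   U+0416   2 bytes
        # 中  U+4E2D   3 bytes
        # 𝕏   U+1D54F  4 bytes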

  113. What in the hell is “culturally insensitive” about Unicode? It allows a unique code for every character in every human language (as well as some inhuman languages like Klingon) and properly distinguishes between visually-similar-but-syntactically-different characters such as Roman P and Greek Rho.

    Ohohohoho, no it doesn’t! It unifies Chinese, Japanese, and Korean scripts, and that’s a problem:

    https://news.ycombinator.com/item?id=8041288

  114. @Jeff Read
    “Ohohohoho, no it doesn’t! It unifies Chinese, Japanese, and Korean scripts, and that’s a problem:”

    That is as culturally insensitive as unifying the German, French, and Spanish alphabets in the Latin1/2 codes.

    Unicode was designed because the Chinese etc. were trying to force a 32-bit (or even wider) international standard encoding that would have each Chinese character encoded 4 times. And even that they could not agree on.

  115. In this case, “culturally insensitive” means the Japanese don’t want to use it, i.e. there’s a consequence.

  116. @David Gerard
    “the Japanese don’t want to use it,”

    The alternative would have been NO standard for many years and in the end a standard nobody else would use.

    Instead, we now have a standard everybody but the Japanese uses.

  117. Nor China, who favour GB-18030, or Taiwan, who favour Big5.

    South Korea does, so at least that’s one of the three languages all that was for.

  118. The alternative would have been NO standard for many years and in the end a standard nobody else would use.

    The alternative would be the correct solution to handle data in different languages and character sets: ALWAYS include encoding information whenever you store text.

    This is such a fundamental aspect of computing in the modern world that we should regard as deficient any language which doesn’t include the concept of encoding in the very definition of a character string.

  119. And, Jeff, given that your nose is so far up Microsoft’s ass that your hair cream smells, let me guess: C# is the only language that gets it right?

  120. > the Japanese don’t want to use it

    “the Japanese” were just fine using Shift-JIS before Unicode came around. There are a few vocal individual opponents, who happen to be Japanese, whose objections are, in essence, to how Unicode handles Chinese and Korean (with Japanese system fonts it certainly can’t be doing any worse for Japanese than Shift-JIS)

    “An individual of a non-European culture can never be wrong about something” is a condescending and infantilizing attitude to take.

  121. GB-18030, for its part, is a Unicode encoding, no different in concept than UTF-8, except for the lack of a simple mathematical mapping, and the fact that it keeps GB2312 compatibility rather than UTF-8’s bit layout.
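
    That it is a full Unicode transformation format is easy to check (a sketch; “gb18030” is the codec name in Python’s standard codec registry, and the sample string is arbitrary):

        s = "Zeno \u9f99"          # "Zeno" plus the CJK character 龙
        gb = s.encode("gb18030")
        utf = s.encode("utf-8")
        # Different byte sequences, but both round-trip losslessly.
        assert gb.decode("gb18030") == utf.decode("utf-8") == s
        print(gb, utf)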

  122. @David Gerard
    “Nor China, who favour GB-18030, or Taiwan, who favour Big5.”

    That seems to be because Big5 encodes traditional characters, used in Taiwan and Hongkong, and GB the new, simplified set, used on the mainland (or so they say). So much for “cultural sensitivities”.

    Legacy code/documents and inertia are great hurdles to overcome for any standard.

  123. And, Jeff, given that your nose is so far up Microsoft’s ass that your hair cream smells, let me guess: C# is the only language that gets it right?

    C#, like Java and Windows, uses UTF-16, which is maximally awful.

    Ruby is the only language I know of that gets this bit anywhere close to “right”. In Ruby, each string carries encoding information with it; strings can be converted between encodings; and it is an error to perform certain operations, like concatenation, on strings of different encodings.

    Each encoding should induce a string type that’s disjoint from all other string types; and furthermore, all such string types are disjoint from the “bag of bytes” type (called bytevectors in recent Scheme standards). But there should be made available ways of converting between encodings where possible, and between strings of any encoding and bytevectors.
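
    Python 3, for what it’s worth, enforces a coarse two-type version of this discipline (a sketch; Ruby generalizes the idea to one logical string type per encoding):

        text = "naïve"                  # str: a sequence of code points
        raw = text.encode("latin-1")    # bytes: b'na\xefve'

        try:
            text + raw                  # mixing the two types is an error
        except TypeError as err:
            print(err)

        print(raw.decode("latin-1"))    # crossings must be explicit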

  124. Ruby sucks for other reasons though: it’s slow, the way it does “blocks” or “lambdas” is a giant percent key, and the Rails douchebags have poisoned the community. But designers of other languages could learn important lessons from how Ruby handles strings.

    1. >But designers of other languages could learn important lessons from how Ruby handles strings.

      Let the record show that at 12:03 EST on 29 May 2015 I actually agreed with Jeff Read about something. Try not to drop dead at the shock.

  125. I am a bit shocked; your prior comments on textual localization seemed to suggest your opinion as “just convert to UTF-8 and use UTF-8 everywhere possible; problem solved.” Which is only a slightly better solution than converting foreign sales revenues to U.S. dollars in Shenpen’s toy problem above.

    1. >your prior comments on textual localization seemed to suggest your opinion as “just convert to UTF-8 and use UTF-8 everywhere possible; problem solved.”

      That’s the right answer to a different question. In the near term, multiple character encodings are a fact of life and the Ruby approach is as elegant a way to cope as possible. In the longer term, everything but UTF-8 should die. And will.

    1. >You’ve been studying Ruby?

      I take at least a hard look at every language that comes in my radar. I sort of collect them.

  126. > I take at least a hard look at every language that comes in my radar. I sort of collect them.

    And you examined Lua long ago, no doubt. I say nothing about Scheme because I know you’re a Lisp guru. ;-)
    Anyway, your answer reminded me of that old comment, in which you mentioned a technical limitation in Python. That was in 2009; to the best of your knowledge, have Python’s developers addressed the problem since?

    1. >Anyway, your answer reminded me of that old comment, in which you mentioned a technical limitation in Python. That was in 2009; to the best of your knowledge, have Python’s developers addressed the problem since?

      I don’t believe so, but I have not been tracking Python development closely enough to be certain.

  127. I take at least a hard look at every language that comes in my radar. I sort of collect them.

    Worth noting (to Jorge) that many hackers do this.

    To me they are all subsets of Lisp… but some are interesting subsets!

  128. >Anyway, your answer reminded me of that old comment, in which you mentioned a technical limitation in Python. That was in 2009; to the best of your knowledge, have Python’s developers addressed the problem since?

    I don’t believe so, but I have not been tracking Python development closely enough to be certain.

    I’m not an authority either, but Python development is still extremely focused on a general programming environment, and one of its largest use cases is making scripts for system administration purposes; a sandbox makes no sense in that scenario. Nobody is really actively out to make Python a restricted/sandboxed language, and any attempt to do so will most likely cover only a subset of Python. Lua is by far the most popular choice for the embedded use case, and was actually designed to operate in that kind of restricted environment.

  129. @ Jeff Read

    > Worth noting (to Jorge) that many hackers do this.

    Thanks.

    > To me they are all subsets of Lisp… but some are interesting subsets!

    Heh. Lisp is very special, ain’t it? :-) I’ve read good things about it from our host, Paul Graham, and Steve Yegge. RMS even recommends it as a first programming language. I suspect Eric doesn’t approve of that, because…

    @ esr

    …you wrote in your Neopaganism FAQ about an ultimate reality “that we cannot or should not attempt to approach directly”; so, similarly, you may believe it’s best to leave Lisp for a late stage in one’s learning.
    As you can see, I’m prone to dot-connecting maneuvers that may be outlandish or fallacious; but perhaps I’m on to something in this specific case? :-P

    @ Mike Swanson

    Thanks. Guess that’s one of the reasons for Lua’s popularity among game developers.

    1. >but perhaps I’m on to something in this specific case? :-)

      No, not in this case :-). I think Lisp is a fine place to start.

  130. If you want to start with Lisp I recommend Structure and Interpretation of Computer Programs by Hal Abelson and Gerald Sussman; and The Little Schemer by Matthias Felleisen and Daniel Friedman as starting points. Both of these use Scheme (a simple and straightforward Lisp dialect) to introduce key programming concepts; SICP is much more technical but also much more complete.

  131. @esr –

    > No, not in this case :-). I think Lisp is a fine place to start.

    So, this is a question I’ve been wanting to ask you in public. I’m hoping the answer will assist not only me, but other hackers (of whatever age) trying to “level up”.

    Background (our host knows some of this, but I thought I would repeat it for the rest of the crowd): I taught myself BASIC and PDP-8 assembler in high school (~45 years ago), then Fortran and PDP-11 assembler in (my first try at) college. Also hacked at home on an OSI-400 microcomputer system with a 6502 chip, which we hand-coded assembler for. ( :-/ ) Fast forward a couple of years while working as an electronics technician, and I was able to semi-bluff my way into my first programming job, where I had to learn C under VAX/VMS from the K&R book and the Whitesmith’s compiler manual. (My mind exploded the instant I realized that the reason my calls to “printf()” weren’t compiling was that the library implemented the alternative “putfmt()” – because *I/O wasn’t built into the language*, unlike all the ones I had learned to that point!)

    Working as a programmer for the next nearly 30 years, I learned VMS DCL, several Unix shells, YACC & lex, awk, and Perl. In each case, I ‘soaked each one up through my skin’, without the benefit of explicit classes, and only in a few cases, by buying books or (later) on-line training.

    Once during this period, I tried to teach myself Lisp the *very* *hard* way – by implementing a non-standard interpreter from a book and working through its author’s very few examples. (C’mon, I mean, when you rename even car and cdr, you’re going to have a hard time getting people to participate in anything like the mainstream of the language / community!)

    Now I’m trying to teach myself Python. And my biggest issue (aside from finding the time to actually hack on it), is the lack of a specifically motivating project to make me do it. (I taught myself Perl when I ran into a necessity for something that was just a little too much for shell and awk in concert.)

    SO – the punch line – what kinds of project would you suggest a (relatively experienced) programmer start with *in LISP* to [a] get a decent feel for the language and its abilities, and [b] prove to her/him that it’s worth the climb of the learning curve?

    (Oh, yeah – the obvious example of Emacs macros doesn’t do it for me :-(, since I’m a ViM bigot: http://esr.ibiblio.org/?p=5211&cpage=1#comment-422115 )

  132. SO – the punch line – what kinds of project would you suggest a (relatively experienced) programmer start with *in LISP* to [a] get a decent feel for the language and its abilities, and [b] prove to her/him that it’s worth the climb of the learning curve?

    I dunno. What kind of software do you like to write? Video games? Networking tools? System administration tools? Simulation software? Web applications?

    Lisp can do all of these; if you use Common Lisp or the Racket dialect of Scheme, there are library repos available that prevent you from having to reimplement the world.

    I learned Scheme to script the GIMP. GIMP had a weak-ass scripting language that it called Scheme but that was not really anything of the sort (the situation has since improved, and GIMP now embeds a tiny, but fully R5RS-compliant, Scheme interpreter). From there I learned about Stallman’s ambitious Guile project and his efforts to drive Tcl from the realm of GNU. I tried it out and immediately decided that it would be my go-to language for systemy stuff instead of Perl. (People complain about Scheme’s parentheses and then rush into the arms of Perl, which on its best day is just barely human-parsable. Why? Iunno…)

    Just pick a reasonably sophisticated problem that’s of interest to you and resolve to use Lisp to implement it. That’s pretty much how you learned everything else; Lisp is no different in this regard.

    > (Oh, yeah – the obvious example of Emacs macros doesn’t do it for me :-(, since I’m a ViM bigot.)

    So is everybody else; it seems the only people left who take Emacs seriously are… well, old Lisp-heads.

    Contrariwise, if you use Lisp, you are really doing yourself a disservice by not adopting Emacs. Emacs is to Lisp what Visual Studio is to C#: all of the good tooling for the Lisp family of languages plugs into Emacs. With evil-mode, Emacs even does a halfway decent vim-personation while still giving you all the benefits of the tools which talk to it or are written as extensions for it.

  133. @ esr

    > No, not in this case :-). I think Lisp is a fine place to start.

    You surprise me, given what you wrote about Python in “How to Become a Hacker”.

    BTW, you misquoted my emoticon, “:-P”, as “:-)”.

    @ Jeff Read

    > If you want to start with Lisp

    Thanks, but I don’t; it was just an observation. I once tried to tackle Emacs’ built-in Elisp tutorial and quickly lost interest.

    > evil-mode

    I couldn’t find any indication that its developers are doing anything to save it from Gitorious’ impending demise. Do you know if they are?

  134. > Thanks, but I don’t; it was just an observation. I once tried to tackle Emacs’ built-in Elisp tutorial and quickly lost interest.

    That’s hardly a proper introduction. Emacs Lisp is old and crufty by Lisp standards; in particular, it was dynamically scoped by default and only recently got lexical scope — as an option. Most people today agree that lexical scope (introduced to the Lisp world by Scheme, fwiw) is a huge win (inb4 the newlisp folks pipe up with how their retrogressive toy interpreter is “just as good if not better”).
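
    To make the lexical-vs-dynamic distinction concrete, here is a minimal sketch (my own illustration, not from any Elisp tutorial; it’s in Python, which, like Scheme, is lexically scoped). Under lexical scope a function’s free variables resolve in the environment where the function was defined; under dynamic scope they resolve in whatever environment the caller happens to have:

        x = 10

        def make_reader():
            x = 42
            def read_x():
                return x      # lexical scope: this is make_reader's x
            return read_x

        reader = make_reader()
        print(reader())       # prints 42; under dynamic scope the lookup
                              # would walk the *call* stack instead and
                              # find the global x = 10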

  135. > …it seems the only people left who take Emacs seriously are… well, old Lisp-heads.

    Plus me, for whatever that’s worth.

    Plus me.

    > evil-mode

    > I couldn’t find any indication that its developers are doing anything to save it from Gitorious’ impending demise. Do you know if they are?

    If evil-mode dies with Gitorious, it can be replaced by viper-mode, which ships with Emacs.

  136. If you want to learn Lisp, start with Arc. It has all of the beauty with none of the cruft. Its tutorial is so well written I’ve read it multiple times, just for the enjoyment.
    http://www.arclanguage.org/tut.txt
    *Fanboys out*

    On the subject of why Lisp hasn’t seen wider adoption: I think it has something to do with Lisp making the first 80% much easier and the last 20% only somewhat easier. When you reach 80% with Lisp you’ve only done an interesting exercise; when you reach 80% with a non-Lisp you’ve done serious work and proven your dedication. Conversely, when you stumble upon an 80% solution in Lisp it’s likely abandoned, and the temptation to do your own 80%er instead of improving the original until it suits your needs is high. When you stumble upon an 80% solution in a non-Lisp, chances are it’s actively maintained, which makes it attractive to contribute to until it meets your needs, even if you are a Lisper and could easily do your own 80%er. So Lisp land is full of abandoned 80% solutions, whereas in non-Lisp land they slowly creep towards 100%.

  137. @ Emanuel Rylke

    > If you want to learn Lisp, start with Arc. It has all of the beauty with none of the cruft. Its tutorial is so well written I’ve read it multiple times, just for the enjoyment.
    http://www.arclanguage.org/tut.txt

    Neat! Thanks. I knew of Paul Graham’s Arc project, but wasn’t aware – or had forgotten – that the tutorial was “intended for readers with little programming experience and no Lisp experience”.

  138. And now, a Lisp koan of my own composition:

    A student travelled far to the East to sit at the feet of Master Sussman and hear his teachings on Lisp. Master Sussman had just begun talking about the Lisp-nature in low-level inner loop optimization for high-performance computing applications, when the student spoke up.

    “Master Sussman,” he asked, “the nature of Lisp is indeed very powerful. But Lisp is a high-level language, akin to Java or Python; would it not make more sense, if you were to do these sorts of optimizations, to drop to a low-level language like C or even assembler?”

    Master Sussman responded, “Once there was a man who, walking along the beach, encountered an eagle. ‘Brother eagle,’ he said, ‘how untouchably distant is the sky!’ The eagle said nothing, and flew off towards the horizon.”

    At that, the student was enlightened.

  139. Alex K. said
    >After looking at several of the examples given in this post and thread (doclifter, DOCX file compatibility, repository export), it strikes me that all of these examples revolve around parsing some (typically poorly documented, or perhaps incomplete) data format, the type of problem which would be mathematically defined as a context-free grammar.
    >
    >Each project therefore must (in part) determine the universality of said grammar (or more likely, the inclusivity: that the statements it accepts completely contains those of the generating CFG). Unfortunately, if this formal definition is not publicly available for use, the question becomes mathematically undecidable.

    I don’t know about the applicability of the maths, but this comports with my experience trying to write portable Bourne-derivative shell scripts in a project I am working on. They appear to work on one system, and then when you try another system they break, and you fix them, and then they break on the next system, and so on.

    This has implications for the question of how we develop software in the first place. Simply: should we test it correct, or prove it correct? That we should prove it correct was one of the theses of Edsger Dijkstra, which I have disagreed with – because the world’s too complicated and your assumptions can be wrong, so even a program which is proven correct might not work. There are two development methodologies. One is to read all the documentation and standards for the other systems you are interfacing with first, write a complete implementation of your program which you are sure conforms to this documentation, and prove mathematically that your program has the desired behaviour. The other is the practically-minded approach we associate with the word “hacking” – get the simplest possible program working as quickly as possible, one that maybe works about 60% of the time, and through extensive testing progressively get it to work more and more.

    When the programs or files that you are interoperating with or processing don’t have good, accurate documentation, the second approach would seem to be the best one. This is the case with man page files, in Eric’s experience.

    There is also a lesson for the design of data files and protocols: they should be as simple as possible to reduce the possibility of incorrect implementations, they should be easy to implement (for example, minimize the necessity of backtracking when parsing), and they should be completely specified by their inventor.
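
    As a concrete sketch of the “minimize backtracking” point (my own illustration, in Python; the format is invented for the example): a length-prefixed record layout lets a parser consume input strictly left to right, always knowing exactly how many bytes to read next, with no lookahead or rewind:

        import struct

        def write_record(out, payload):
            out.write(struct.pack(">I", len(payload)))  # 4-byte big-endian length
            out.write(payload)

        def read_records(stream):
            while True:
                header = stream.read(4)
                if not header:
                    return                         # clean end of stream
                (length,) = struct.unpack(">I", header)
                yield stream.read(length)          # exactly one record, no guessing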

    As an aside, I don’t think the term “Zeno tarpit” is a good one. That would be an appropriate name for an apparent problem that turned out not to be a problem at all (just as Achilles can in fact beat the tortoise in a race).

  140. I guess the Zeno tarpit problem is the inverse of another human problem, described by an expression I’ve heard: “shaving a yak”. It’s the sort of thing where you really want to upgrade a piece of software, but that requires an upgrade to the OS, which in turn requires that you install more memory, but that forces you to change the motherboard, which overloads the power supply, which can’t be upgraded unless you get a larger case, etc…

  141. @LS:
    >It’s the sort of thing where you really want to upgrade a piece of software, but that requires an upgrade to the OS, which in turn requires that you install more memory,

    Useless nitpick: Nowadays it’s more likely that the software upgrade would necessitate the RAM upgrade directly. The minimum you’re likely to see on a desktop system these days is 4 gig, and pretty much any operating system will run inside a gig or two at most. On Linux, at the very least, the biggest contributor to RAM usage on the most RAM-heavy distributions seems to be the desktop environment, which can be pared back pretty far without breaking dependencies for any application software. Ubuntu with Mate and without Compiz will run in about 256 meg. Meanwhile, a system with 32 gig of RAM can be built for around $1000.

  142. I’ve always loved the term “yak shaving” because of where it came from: a segment of The Ren and Stimpy Show that parodied children’s holiday customs (Christmas, etc.). In it, Ren and Stimpy alert the audience that “there’s only 5 more days till Yak Shaving Day”, and remind them of the holiday’s bizarre, pointless rituals: decorating the house with disposable diapers, filling Dad’s boots with coleslaw, and leaving a bowl of hot lather by the sink for the Shaven Yak to use when he comes through your bathroom drain to shave himself. The mess of stubble and used lather was his parting gift to good boys and girls.

    More closely related to the Zeno Paradox, I think, is what I call “getting ye flask”: a task which in principle should be simple and straightforward, but either a) requires a complicated series of yak-shaving steps; b) has bizarre unforeseen consequences, perhaps unless certain obscure conditions hold but also maybe unconditionally; or c) is outright impossible. The idiom comes from another cartoon, Homestar Runner, in which Strong Bad plays a text adventure called Thy Dungeonman. An early game message reads “Upon a wall ye see a FLASK.” But when he types “get ye flask”, he is confronted with the frustrating message “You can’t get ye flask”, requiring the usual text-adventure guessing game of *how* to get ye flask in a manner the game will accept. Various playable Thy Dungeonman games exist, with a flask in each: in one game, ye flask is a “load-bearing flask” that collapses the dungeon around you and results in an instant game over; in another, getting ye flask is possible (and a win condition for the game), but requires a long quest to obtain the “Flask Grabbing Glove” first.

    In a programming context, getting ye flask could be a task like opening a file. In Unix this is relatively straightforward, but in old versions of DOS you had to allocate and fill out a File Control Block which the OS would then use to track all file data. If you have a record-oriented file system, accessing a file as plain text can be fraught with unnecessary complications, as you are basically given a view of the text as a series of fixed-width records and have to figure out how to cross record boundaries as you scan. And so forth.
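
    For instance, here is a minimal sketch (mine; the record size and padding rule are hypothetical) of presenting a record-oriented file as plain text, where the reader has to manufacture the line structure itself:

        def lines_from_fixed_records(stream, record_len=80):
            # Hypothetical record-oriented file: fixed-width records,
            # right-padded with spaces. To expose it as plain text we
            # strip the padding and re-insert line breaks ourselves.
            while True:
                record = stream.read(record_len)
                if not record:
                    break
                yield record.rstrip(b" ") + b"\n"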

    I’ve found that “you can’t get ye flask” in programming is usually a case of bad API design, but may also be the result of something akin to a Zeno tarpit.

  143. @ patrioticduo

    > Each type of instrument produces a specific sound. Economics drives this. Now look at how prolific electronic music synthesizers have become. Why are there so many different makes and models? It’s all software? Surely one modern synth can reproduce all of those sounds?

    No.

    Zeno got shark-jumped in the late 80s/early 90s with invisible, bean-counting usurpers who found that copy-and-pastable music was way more profitable than any original performance. Therefore, the “Master’s Apprentice”/Roadie who would tweak the knobs and make the show happen was easily taken over by any pointy-haired suit and a 3.5” floppy.

    > Zeno got shark-jumped in the late 80s/early 90s with invisible, bean-counting usurpers who found that copy-and-pastable music was way more profitable than any original performance. Therefore, the “Master’s Apprentice”/Roadie who would tweak the knobs and make the show happen was easily taken over by any pointy-haired suit and a 3.5” floppy.

    This is the class of error “I don’t understand it so it must be simple.”

    Test your theory: if it’s so trivial, then you do it well enough to convince the audience. I bet you’ll have missed something.

  145. Speaking of Maximally Awful datasets, don’t forget non-compliant headaches.
    Consider that you may be entering into a space where there is already a major player (Microsoft Word, Cisco routers, etc.). It is possible that they do not follow the associated specifications, do not care, and do not have any incentive to care.

    For example, if you were implementing an NFSv2 server you would consult the appropriate specifications. Looking at the READDIR operation, you would see that the readdir cookie is 32 bits in size and explicitly opaque (except for the all-zeros value).

    Here’s the problem: some of your prospective customers run an operating system that will panic if it sees a readdir cookie with the most significant bit set (because it stores the cookie as a signed integer and then performs comparisons on it). Likewise, other operating systems will panic or reject the value if, cast to an unsigned integer, the new readdir cookie isn’t numerically larger than the previous one.

    Having to actively pursue these customers means changing your design up-front to meet the limitations of what the customers are doing, even when what they are doing is not spec-compliant and objectively ridiculous.
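
    In practice the server ends up generating cookies that satisfy the most fragile client rather than the spec: nonzero, strictly increasing as unsigned values, sign bit never set. A sketch of the idea (Python; the constraints come from the comment above, the code and names are mine):

        class CookieAllocator:
            MAX_COOKIE = 0x7FFFFFFF   # keep the most significant bit clear

            def __init__(self):
                self.last = 0         # zero is reserved, so start above it

            def next_cookie(self):
                if self.last >= self.MAX_COOKIE:
                    raise OverflowError("readdir cookie space exhausted")
                self.last += 1        # strictly increasing, never zero
                return self.last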

  146. @ David Gerard

    > Zeno got shark-jumped in the late 80s/early 90s with invisible, bean-counting usurpers who found that copy-and-pastable music was way more profitable than any original performance. Therefore, the “Master’s Apprentice”/Roadie who would tweak the knobs and make the show happen was easily taken over by any pointy-haired suit and a 3.5” floppy.

    This is the class of error “I don’t understand it so it must be simple.”

    Test your theory: if it’s so trivial, then you do it well enough to convince the audience. I bet you’ll have missed something.

    Okay. Tune a piano.

    Then, let me.

  147. @ David Gerard

    Oops! Sorry.

    I didn’t realize you were just playing Chopsticks at the bottom of the keyboard here.

    Bygones!

  148. This is the economic law of diminishing returns, and it shows up all over the place – not just in programming. I think, though, that “asymptotic returns” gets the idea across better: the function from inputs (work or money) to outputs is very often asymptotic. Sometimes you never get to 1.

    Recently a client wanted an estimate to convert an artist’s series of PSDs to a working website theme. The answer is 1 week to get to 90% similar, 2 weeks to get to 99% similar, 3 weeks to get to 99.9% similar – when do you want to stop?
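
    (Those numbers fit a simple model: each week closes nine-tenths of the remaining gap, so the gap after n weeks is 10^-n and the cost of reaching a target similarity s is -log10(1 - s) weeks, which grows without bound as s approaches 1. A quick check of my reading of the numbers, in Python:)

        import math

        def weeks_needed(similarity):
            # gap shrinks 10x per week => weeks = -log10(remaining gap)
            return -math.log10(1.0 - similarity)

        for s in (0.90, 0.99, 0.999, 0.9999):
            print(f"{s:.2%} similar takes about {weeks_needed(s):.0f} week(s)")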
