Program Provability and the Rule of Technical Greed

In a recent discussion on G+, a friend of mine made a conservative argument for textual over binary interchange protocols on the grounds that programs always need to be debugged, and thus readability of the protocol streams by humans trumps the minor efficiency gains from binary packing.

I agree with this argument; I’ve made it often enough myself, notably in The Art of Unix Programming. But it was something his opponent said that nudged at me. “Provable programs are the future,” he declaimed, pointing at seL4 and CompCert as recent examples of formal verification of real-world software systems. His implication was clear: we’re going to get so much better at turning specifications into provably correct implementations that debuggability will soon cease to be a strong argument for protocols that can be parsed by a Mark I Eyeball.

Oh foolish, foolish child, that wots not of the Rule of Technical Greed.

Now, to be fair, the Rule of Technical Greed is a name I just made up. But the underlying pattern is a well-established one from the earliest beginnings of computing.

In the beginning there was assembler. And programming was hard. The semantic gap between how humans think about problems and what we knew how to tell computers to do was vast; our ability to manage complexity was deficient. And in the gap software defects did flourish, multiplying in direct proportion to the size of the programs we wrote.

And the lives of programmers were hard, and the case of their end-users miserable; for, strive as the programmers might, perfection was achieved only in toy programs while in real-world systems the defect rate was nigh-intolerable. And there was much wailing and gnashing of teeth.

Then, lo, there appeared the designers and advocates of higher-level languages. And they said: “With these tools we bring you, the semantic gap will lessen, and your ability to write systems of demonstrable correctness will increase. Truly, if we apply this discipline properly to our present programming challenges, shall we achieve the Nirvana of defect rates tending asymptotically towards zero!”

Great was the rejoicing at this prospect, and swiftly was the debate resolved, despite a few curmudgeons who muttered that it would all end in tears. And compilers were adopted, and for a brief while it seemed that peace and harmony would reign.

But it was not to be. For instead of applying compilers only to the scale of software engineering that had been customary in the days of hand-coded assembler, programmers were made to use these tools to design and implement ever more complex systems. The semantic gap, though less vast than it had been, remained large; our ability to manage complexity, though improved, was not what it could be. Commercial and reputational victory oft went to those most willing to accrue technical debt. Defect rates rose once again to just shy of intolerable. And there was much wailing and gnashing of teeth.

Then, lo, there appeared the advocates of structured programming. And they said: “There is a better way. With some modification of our languages and trained discipline exerted in the use of them, we can achieve the Nirvana of defect rates tending asymptotically towards zero!”

Great was the rejoicing at this prospect, and swiftly was the debate resolved, despite a few curmudgeons who muttered that it would all end in tears. And languages which supported structured programming and its discipline came to be widely adopted, and these did indeed have a strong positive effect on defect rates. Once again it seemed that peace and harmony might prevail, sweet birdsong beneath rainbows, etc.

But it was not to be. For instead of applying structured programming only to the scale of software engineering that had been customary in the days when poorly-organized spaghetti code was the state of the art, programmers were made to use these tools to design ever more complex systems. The semantic gap, though less vast than it had been, remained large; our ability to manage complexity, though improved, was not what it could be. Commercial and reputational victory oft went to those most willing to accrue technical debt. Defect rates rose once again to just shy of intolerable. And there was much wailing and gnashing of teeth.

Then, lo, there appeared the advocates of systematic software modularity. And they said: “There is a better way. By systematic separation of concerns and information hiding, we can achieve the Nirvana of defect rates tending asymptotically towards zero!”

Great was the rejoicing at this prospect, and swiftly was the debate resolved, despite a few curmudgeons who muttered that it would all end in tears. And languages which supported modularity came to be widely adopted, and these did indeed have a strong positive effect on defect rates. Once again it seemed that peace and harmony might prevail, the lion lie down with the lamb, technical people and marketeers actually get along, etc.

But it was not to be. For instead of applying systematic modularity and information hiding only to the scale of software engineering that had been customary in the days of single huge code blobs, programmers were made to use these tools to design ever more complex modularized systems. The semantic gap, though less vast than it had been, remained large; our ability to manage complexity, though now greatly improved, was not what it could be. Commercial and reputational victory oft went to those most willing to accrue technical debt. Defect rates rose once again to just shy of intolerable. And there was much wailing and gnashing of teeth.

Are we beginning to see a pattern here? I mean, I could almost write a text macro that would generate the next couple of iterations. Every narrowing of the semantic gap, every advance in our ability to manage software complexity, every improvement in automated verification, is sold to us as a way to push down defect rates. But how each tool actually gets used is to scale up the complexity of design and implementation to the bleeding edge of tolerable defect rates.

This is what I call the Rule of Technical Greed: As our ability to manage software complexity increases, ambition expands so that defect rates and expected levels of technical debt are constant.

The application of this rule to automated verification and proofs of correctness is clear. I have little doubt these will be valuable tools in the relatively near future; I follow developments there with some interest and look forward to using them myself.

But anyone who says “This time it’ll be different!” earns a hearty horse-laugh. Been there, done that, still have the T-shirts. The semantic gap is a stubborn thing; until we become as gods and can will perfect software into existence as an extension of our thoughts, somebody’s still going to have to grovel through the protocol dumps. Design for debuggability will never be a waste of effort, because even if we believe our tools are perfect at proceeding from ideal specification to flawless implementation…how else will an actual human being actually know?

UPDATE: Having learned that “risk homeostasis” is an actual term of art in road engineering and health risk analysis, I now think this would be better tagged the “Law of Software Risk Homeostasis”.

92 comments

  1. Yeah, except no.

    Try making this argument inside Google, a company which develops and deploys software at unfathomably massive scales. You will be laughed out of the room. Google uses protocol buffers extensively for internal IPC, and its engineers are enculturated to reach for protocol buffers first. There are no debuggability concerns because the protocol buffer implementation takes care of serialization, deserialization, and inspection. The point of bringing provable code into it is that once you prove the implementation of PB correct, you needn’t worry about grovelling through protocol dumps ever again for anything you build on top of PB. So there is no need to worry about proving huge stacks of software correct when it comes to debugging binary protocol dumps; all you have to prove correct is the protocol layer.

    Even in the open source world there is barely any reason to create a new ad-hoc text-stream protocol to wire two programs together. Why? D-Bus. D-Bus allows programs to communicate via well-defined, verifiable, auditable, type-safe APIs; one of the reasons why systemd has absolutely taken over is that it leverages D-Bus to present consistent, unified cross-distro APIs for critical system functions that developers of major projects like GNOME can just hook right into and expect to work. The bundle of shell scripts, duct tape, and wire that used to perform those functions just isn’t up to performing these tasks in a reliable way. And with D-Bus becoming a kernel feature in the foreseeable future, it will gain the capability to ship massive quantities of data between processes with zero copies, making it the premier IPC mechanism on Linux.

    Text streams are a huge loss unless you use something like XML or JSON because now you have not only the added CPU effort of chewing through ASCII in order to produce something machine-meaningful, but the added programmer effort of parsing and unparsing the text stream. The C string library functions are woefully not up to the task of writing decent parsers which are not prone to buffer overruns and the like. With a well-specified format with readily available parser libraries such as XML or JSON, the situation improves; but since you’re linking in these libraries you may as well just use protocol buffers and put the CPU to better use than grovelling through ASCII.

    Binary formats may have been a huge loss in the 1970s and 1980s when they were specified ad-hoc, and you had naught but poor documentation and a debugger or memory monitor to examine the data so encapsulated. But we’re living in an era of fast computers, powerful tools, and standardized formats that have been encapsulated into open source libraries anyone can use. It’s time to leverage those advantages.

  2. tl; dr, FWIW (and all things solutio-reductionist)

    Can the “semantic gap” be equated with the Pythagorean Comma perchance?

  3. Provable correctness only gets you as far as your ability to strictly define the meaning of “correct.” Perhaps one day we will be able to prove that code X correctly meets requirements Y, but that just means the bugs will have migrated north to the requirements spec. And so it goes.

    Just the other day, I wrote a spec and implemented a serial protocol to go between embedded devices with tiny, tiny cores. I still used human readable ASCII. (And unit tests.) It’s probably slower than it could be. Don’t care. It works, and I was able to debug it with minicom on a tty.

  4. @Jeff:
    >Binary formats may have been a huge loss in the 1970s and 1980s when they were specified ad-hoc, and you had naught but poor documentation and a debugger or memory monitor to examine the data so encapsulated. But we’re living in an era of fast computers, powerful tools, and standardized formats that have been encapsulated into open source libraries anyone can use. It’s time to leverage those advantages.

    I think you have it backwards: The arguments in favor of binary formats were probably strongest in the 70’s and 80’s when computers had slow processors and kilobytes of RAM, and every cycle and bit counted. But we’re living in an era of fast computers, powerful tools…

  5. Ha! I just saw Jeff Read’s comment about Google. I was just talking about building an ASCII serial protocol from scratch, and as it happens, it was at Google. :)

    As Jeff says, we use protobuf everywhere. I certainly use it whenever I can. It’s awesome. But occasionally you find yourself on a target without a handy protobuf implementation, and rather than trying to port the library, you fall back to what works.

    Even with protobuf, when I’m storing persistent data (e.g. config files) I often save the ASCII conversion of the protobuf to disk (rather than the packed version) because I can hand-edit and inspect it, and I don’t much care about performance there. I think of protobuf as yet another XML replacement, just with a more compact serialization when you want it.

  6. I think you have it backwards: The arguments in favor of binary formats were probably strongest in the 70’s and 80’s when computers had slow processors and kilobytes of RAM, and every cycle and bit counted. But we’re living in an era of fast computers, powerful tools…

    No, not really. It’s true, every cycle and bit counted at the lower end of the scale spectrum. The point is, they also count at the upper end, when you are running highly scalable Web applications. There you want each CPU to be doing as much useful work as possible; the savings incurred by eliminating the busywork of ASCII serialization and deserialization could add up to millions on your electricity bill, if not more. At the largest scales, it may literally be impossible to provision enough power to run an app at scale within reasonable time constraints if the CPU is fucking around with grovelling through ASCII protocols.

    The only place where you get appreciable slop is in the middle, on time-shared minis and later, desktop peecees. Possibly also smartphones. Basically any time the hardware is overprovisioned relative to the expected typical task load. But even there, the gains to be had not wasting the CPU are immense: Android, with its virtual machine and GC memory management, is still laggy and jittery when compared to iOS; and any given iOS device is capable of running much more sophisticated and CPU-intensive apps than the equivalently specced Android device.

  7. >> “proceeding from ideal specification to flawless implementation”

    Let’s just call that ‘hiding the knot’ for the time being. (ibid. Pythagorean Comma)

    Sorry. You hit my wheelhouse. I shoot back!

  8. Text and xml streams have the same technical greed problem. See docx, xls, etc. And the dark corners of OSX and iOS metadata for things like drivers.

    My career has spanned the same timeframe. I did everything in C before the buzzwords were created. Structured – not formally, but I was indenting and avoiding gotos. Modular? I forget which one, but I remember 20 C files with plenty of static subfunctions and only one global, and a single globaldata.c.

    But my job is not to write programs (or design hardware – I do both). It is to battle complexity. Combed spaghetti is still spaghetti. One critical difference is that you think differently if you have only 256 bytes of RAM or 128K, or a 16MHz processor.

    Moore’s law has meant that the complexity and resource waste could also grow exponentially. But it still is a killer – of battery life if nothing else. Physics says you pay in coulombs for each instruction.

    In a climax of cheap oil we had something similar. Faulty gas guzzlers.

    As to provably correct programs, the hard part is to define “correct”. Think Haskell. “Correct” has to be defined for all the corner cases and error handling, overflows, loss of precision, integration cumulative error, wild inputs, etc. An (unsigned integer) function returning the square can be replaced with a huge lookup table; one to return gcd or lcm pushes things; beyond that is a rainbow table.

    Functions with memory or other state now have even more inputs. 7-dimensional arrays with 64 bits in each direction? And that is ONE output.

    It may be possible to prove a function does X. But then you’ve merely moved the hard part to X.

    And I will go further. Say you have a function and a definition X for the proof engine. If you can create a formal syntax for defining X (needed for the proof), it can become the next language and generate code. Ah, but then you need a prover for definition X. So you have infinite recursion.

  9. Also see “the quality plateau” from “the programmer’s stone”.

    Managing larger piles of manure is philosophically and technically the wrong thing to do.

    When I eliminate a malloc, I eliminate that possibility of a memory leak.
    One thread can’t deadlock. And two are far easier to check than dozens.
    2 variables instead of 10 are easier to trace and use less stack.
    Accessors and setters are illusory, a=b is clearer than setA(getB());
    When I eliminate levels of nesting, I can see and trace much faster and more easily.
    If things fail, you want them to fail noisily, not silently (one code checker vendor uses an example of a password field set to a null pointer – and then corrects it by if…return. – that hides it or shifts it up – the caller would have the bug).
    Source level debuggers are one tool, but a blinky LED, DAC to scope probe, and sprinkling printfs are often more effective.

  10. I often lament that we’ve forgotten how to write small, tight, fast code. Looking around today’s software ecosystem will demonstrate that to all but the most blind of programmers. What I hadn’t fully internalized was that we’re learning it again, in building massively scalable systems.

    Even so, there are large parts of the program problem space that fundamentally do not lend themselves to provable implementations from a well-defined specification. GUIs, anyone? 3D graphics? You know, the kinds of things you can’t automate testing for?

    The world of programming is not all writing a specification and launching the latest program generator to turn it into executable code. As Eric points out, we’ve been through that cycle, and system complexity always eats up the gains, and as tz points out, all we’ve done is move the complexity – and the opportunity for bugs. After all, a program in assembler is nothing more than a program specification.

  11. Reading Jeff’s posting, it strikes me that the question of binary vs. textual formats is one of optimization, and the usual rule of optimization applies:

    Only optimize after real world experience says you need to.

    I’m perfectly willing to concede that, at the scales you find in Google data centers, the overhead of serializing/deserializing to text matters.

    How many of us are lucky enough to work at Google?

    For the rest of us not pushing petabytes around, the savings of electricity and CPU and so forth vanish into the noise in comparison with the real cost of debugging. I’m not sure how many orders of magnitude the difference in cost is, but it wouldn’t surprise me greatly if that number was as high as ten.

    So the sane course is to design your system so you can drop in protobufs or the like once you have both thoroughly debugged your implementation and once you have proven you actually need the optimization. Using binary protocols for everything is as bad as any other form of premature optimization.

  12. Why aren’t filesystems textual?

    I’ve never heard a convincing answer to this question (rather, the question as to why they should be an exception and why nobody agitates against binary filesystems) from the anti-binary-formats people.

    1. >Why aren’t filesystems textual?

      Sometimes they are. Subversion implements a special-purpose filesystem as its version store textually.

      Filesystems aren’t usually textual because they’re designed in the expectation that they’ll contain a lot of large binary blobs. This means that in the typical case you lose the advantages of textuality (dumps won’t be readable anyway) so why not squeeze out some compression? This is a very different case from most interchange protocols.

      You do remind me, though, that back when Ken and Dennis were designing Unix one of the decisions they had to defend was using variable-length text lines delimited by \n rather than expressing lines in fixed-length records. The critics said the efficiency loss would be insupportable. Ken and Dennis said it would be more than paid for by the space savings. They were right.

  13. I wonder if this is a corollary of Moore’s Law? I mean, it seems that if computers still did what they did in the 60s (or, say, had advanced as fast as car technology), these innovations, insofar as we could actually code and run them, probably would drive defect rates down to near-zero, because programs big enough to overwhelm them couldn’t exist.

    1. >I wonder if this is a corollary of Moore’s Law?

      To some extent, but so what? Even if Moore’s Law peters out, many of the competitive pressures that lead to bloat will still exist. The larger programs would be slower, but they’d still be larger.

  14. Jeff Read on 2014-09-27 at 02:01:53 said:

    > No, not really. It’s true, every cycle and bit counted at the lower end of the scale spectrum. The point is, they also count at the upper end, when you are running highly scalable Web applications. There you want each CPU to be doing as much useful work as possible;

    Bad move. Eliminating ASCII-parsing might buy you 30% more useful cycles on a single CPU. But you don’t care about that, you care about “will this still scale linearly when I have a cluster of 2,000 boxen?”

    Debuggability will still be helpful there.

  15. I have to agree with Jeff Read ‘s comments. I work at another major company as a software engineer and we use binary wire protocols which are auto-generated from a specification. The simple reason is that we care about the scarcest resource we have: network bandwidth. And this is in a case where we have machines talking to each other over distances measured in inches or feet with 10Gbps and 40Gbps connections.

    On the flip side, all of our external management APIs are done using something akin to XML-RPC. That’s because the throughput we need there is much lower, and we expect our customers to be writing their own tools, so debuggability is important.

  16. >> Why aren’t filesystems textual?

    > Sometimes they are. Subversion implements a special-purpose filesystem as its version store textually.

    And we have the strange case of the Git object store (a filesystem for revisions), which in its loose form is at its core textual (with the exception of the ‘tree’ object, representing directories, which has one binary field for historical reasons)… then compressed. This IMVHO tremendously helped with extensibility: adding submodules (commit entries to tree objects), signed commits, signed merges.

    The packed form is binary, as optimization, but it is overlaid over object datastore.

    The transport protocol in Git is also textual… except for the final blob with packfile.
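
    To make the “textual, then compressed” point concrete, here is a minimal Python sketch (my own illustration, not git plumbing) that inflates a loose object file given its path and prints it:

    import sys
    import zlib

    # A loose object under .git/objects/ is zlib-deflated text of the form
    # "<type> <size>\0<body>"; commit and tag bodies are plain text, and the
    # tree object is the one binary exception mentioned above.
    path = sys.argv[1]                      # pass the path of a real object file
    with open(path, "rb") as f:
        raw = zlib.decompress(f.read())

    header, _, body = raw.partition(b"\0")  # header looks like b"commit 241"
    obj_type, size = header.split()
    print(obj_type.decode(), int(size))
    print(body.decode(errors="replace"))    # readable for commits and tags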

  17. @esr:

    > readability of the protocol streams by humans trumps the minor efficiency gains from binary packing.

    As others have mentioned, this is not always true. IMO, when it is true, it’s often true with a vengeance, and I have cursed people who generate XML with no whitespace for small files in systems that were not bandwidth- or storage- limited.

    But the efficiency gains often aren’t minor. That is, within the context. Yes, USB 5.27 and Ethernet 29.42 will be out in a few years, and bandwidth won’t matter, but there are always cases where bandwidth matters. Would you suggest that netflix and youtube switch to some form of text encoding for their A/V streams?

    @Jon Brase:

    > I think you have it backwards … we’re living in an era of fast computers, powerful tools…

    One of esr’s very valid points is that whenever we get better, faster tools, we do more stuff. But the fast tools with large memories can be applied to debugging, as well. If I can capture an entire 20 GB stream at speed and parse through it with Python in a matter of a few seconds, what does it matter that it was not in text originally?

    (Especially if the alternative, that it was in text originally, would mean that it was an 80 GB stream, and I would not have been able to capture and store it at speed?)
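
    As a minimal sketch of what I mean, assuming a made-up capture format of fixed 16-byte records (u64 timestamp, u32 channel, f32 value, little-endian), scanning a multi-gigabyte dump is only a few lines of Python:

    import struct
    import sys

    RECORD = struct.Struct("<QIf")      # 8 + 4 + 4 = 16 bytes per record

    def scan(path, channel_of_interest=7):
        """Count records on one channel whose value exceeds a threshold."""
        hits = 0
        with open(path, "rb") as f:
            while True:
                chunk = f.read(RECORD.size * 65536)     # read in big batches
                if not chunk:
                    break
                # drop a trailing partial record, if the capture was truncated
                chunk = chunk[: len(chunk) - len(chunk) % RECORD.size]
                for ts, chan, value in RECORD.iter_unpack(chunk):
                    if chan == channel_of_interest and value > 1.0:
                        hits += 1
        return hits

    if __name__ == "__main__":
        print(scan(sys.argv[1]))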

    So, I think in every era there will be times when it is best to use text, and times when it is best not to. The sweet spot moves, but the general principle doesn’t change.

    In general, I would say that the sweet spot for text usually involves situations where the data does not overwhelm the bandwidth, or the memory and processing capabilities of either side, and the data is mostly heterogeneous.

    @Jakub:

    > The transport protocol in Git is also textual…

    Well, sort of. What does it sit on top of?

  18. Debuggability will still be helpful there.

    Debuggability is a red herring. In a context where you have highly efficient standard binary protocols with well-tested open source packing/unpacking implementations, the debuggability advantage of text is very small, if it exists at all.

    The future is binary. Once again the Amigans were right…

    1. >Is the Rule of Technical Greed different from “Jevons paradox”?

      They’re identical, though (oddly, in view of my interests) I did not know of Jevons’s Paradox before this question came up.

      Think of lines of code as a resource that has to be mined by programmers. Every increase in mining efficiency decreases the cost of producing code, increasing (this is the key point) its substitutability for more expensive resources. This increases demand for code in aggregate. It also creates an incentive to attempt more complex and difficult resource substitutions. Code has become coal.

      If the nature of the substitution is not obvious, consider for example, that using GPSes and computers to do routing for a truck fleet economizes use of fuel, driver hours, and wear on the rolling stock.

  19. Provable correctness only gets you as far as your ability to strictly define the meaning of “correct.”

    If you can’t strictly define the meaning of correct behavior for any given program, you don’t understand the problem well enough to write the program in the first place.

    If you can strictly define the meaning of correct behavior, you can write a program that’s provably bug-free.

  20. @Patrick Maupin

    >> The transport protocol in Git is also textual…
    >>
    > Well, sort of. What does it sit on top of?

    AFAIK it sits on top of pipes (for file:// URLs), or sockets (for git:// URLs), or SSH (for ssh:// URLs and equivalent), or HTTP(S) (for “smart” http:// and https:// URLs… taking into account that it is a stateless protocol).

  21. I often lament that we’ve forgotten how to write small, tight, fast code. Looking around today’s software ecosystem will demonstrate that to all but the most blind of programmers.

    It wouldn’t be so bad if the code sacrificed a bit of expressiveness and speed for correctness and/or readability and maintainability. Stuff I’m reading (mostly Perl :( ) doesn’t.

    How many of us are lucky enough to work at Google?

    Luck?

  22. Yes, USB 5.27 and Ethernet 29.42 will be out in a few years, and bandwidth won’t matter

    A major cable company is testing delivering video in Samsung’s UltraHD 4k format. IIRC one movie is about 350GiB.

    That’s going to blow through a whole lot of caching.

  23. If you can’t strictly define the meaning of correct behavior for any given program, you don’t understand the problem well enough to write the program in the first place.

    Sometimes you have to start writing it anyway, because it *needs* to be done, and if you waited until you had an exhaustive understanding you’d never start, because it’s always changing.

    Yes, at the very top of the performance envelope, where you can afford the best developers and you get to define the whole system, you’re right.

    And down at the bottom, where you’re trying to squeeze every bit out of a system designed against power/heat/size/whatever constraints, you’re right (an example is putting a satellite in orbit; because of lots of things, high-speed processors and big chunks of memory aren’t possible).

    Most of us live in the middle of several of those curves, and debugging *is* relevant and routine.

  24. It’s the same story all over – features win out over reliability. In electronics, you could only put so many vacuum tubes in a device before the time between failures got too short. Transistors were more reliable, so they developed more complicated devices, using more transistors, that did more, with the same reliability that people were used to after using the previous generation of gizmos. Move along, nothing to see here.

  25. If you can’t strictly define the meaning of correct behavior for any given program, you don’t understand the problem well enough to write the program in the first place.

    Yeah, sure, fine. But you left the punchline out of my quote: “[T]he bugs will have migrated north to the requirements spec.” And I don’t have the hubris to ever think I fully understand a program after it exceeds a trivial complexity.

    That said, I saw some other posters who maybe don’t grok what you’re saying about protobuf. In some ways, protobuf is the best of both worlds. I can build a system with ASCII-formatted protobufs at every layer, debug it, and then flip a switch to get an efficient serialization and transport. Even then, I can insert a shim at any layer to pull out and ASCII-ify the protocol if need be, or keep them ASCII where desired.

    Better still, when IPC builds on protobufs everywhere, there are some obvious optimizations to be made. You can select a serialization appropriate to the transport, or even no serialization at all.
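
    Here is a minimal sketch of the flip, using the stock Python protobuf library and its well-known Struct type so it runs without compiling a .proto (a real system would substitute its own generated message classes):

    from google.protobuf import text_format
    from google.protobuf.struct_pb2 import Struct

    def dump(msg, ascii_mode):
        """Serialize a message as human-readable text or as packed bytes."""
        if ascii_mode:
            return text_format.MessageToString(msg).encode("utf-8")
        return msg.SerializeToString()

    def load(data, ascii_mode):
        """Parse bytes produced by dump() back into a Struct."""
        msg = Struct()
        if ascii_mode:
            text_format.Parse(data.decode("utf-8"), msg)
        else:
            msg.ParseFromString(data)
        return msg

    if __name__ == "__main__":
        msg = Struct()
        msg.update({"sensor": "gps", "fix": True, "sats": 7})
        for ascii_mode in (True, False):
            wire = dump(msg, ascii_mode)
            print(len(wire), "bytes:", wire[:60])
            assert load(wire, ascii_mode) == msg

    The debug shim I mentioned is essentially dump(msg, True) wedged in at whatever layer you want eyes on.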

    1. >In some ways, protobuf is the best of both worlds. I can build a system with ASCII-formatted protobufs at every layer, debug it, and then flip a switch to get an efficient serialization and transport.

      Hmmmmm. Interesting.

      There is a potential feature set that would remove much of my objection to protobufs, if:

      (1) The protobufs handler layer on any peer adapts without having to be reconfigured when you switch your program from shipping packed bytes to ASCII, or vice-versa.

      (2) There are control messages you can send to any protobufs stack that say “talk to me in {ASCII, optimized binary}”.

      If both these things are true, then protobufs does not have evil opacity – when I need to debug I can just flip both into textual mode. With some loss of efficiency but great gain in debuggability.

  26. Text vs binary is not necessarily a black & white distinction.

    Text can be inscrutable (as XML often is) and binary can be clean and beautiful (PNG). (And both XML and PNG blur the lines a little by containing both text and binary codings.)

    If text vs binary was always a black & white question, then sure, Eric picked the better choice of the two. But the ultimate goal is to make well-designed protocols that fit their use cases appropriately.

    1. >binary can be clean and beautiful (PNG).

      PNG is exceptionally well designed for a binary protocol. It’s so beautiful that after I read the standard I joined the libpng dev group just to play with it. Ended up writing the library support for several chunk types.

      I think PNG has justification for being binary; it deals in data volumes per unit of work where compression actually matters. Nevertheless, one of my projects is SNG, which losslessly maps PNGs to a purely textual representation and back. Makes some kinds of chunk surgery much easier.

      (That was fun to write.)

  27. They seem related but distinct. They both involve nonintuitive changes to the optimum as a result of the change in availability of a fungible quantity.

  28. @Jakub:

    > AFAIK it sits on top of pipes… sockets…

    The point behind that rhetorical question was that, unlike turtles, it isn’t ASCII all the way down. esr said protocols should be textual, you usefully helped by describing one that is, and I claim that is only possible because of what is underneath — which isn’t.

    @William O. B’Livion

    >> bandwidth won’t matter
    > [When] one movie is about 350GiB…
    > That’s going to blow through a whole lot of caching.

    Exactly right and making my point for me. One thing esr got right in the post is that we’re insatiable, and always building things right up to or even past the current limits.

    But when you’re at the limits, debugging capability is always going to be sacrificed — no ifs, ands, or buts. Oh, sure, maybe you have a debug build, or simulation or emulation models that lets you make inferences about the real thing, but the deliverable — the thing that none of your supposed competitors can touch — is only untouchable because there’s no room for improvement.

    But most people and companies aren’t anywhere as near to the limits as they think they are, so they should heed esr’s warning and work at making things debuggable until they can’t afford to any more. If they do that right, then they actually stand a better chance of eventually creating that untouchable deliverable.

  29. “If you can’t strictly define the meaning of correct behavior for any given program, you don’t understand the problem well enough to write the program in the first place.”

    Jeff, I’ll give you the same challenge I gave the guy arguing your side on G+: Describe the correct behavior of a client for the Second Life virtual world in less time and less verbosely than if you’d simply sat down to write the C++ in the first place.

    “It wouldn’t be so bad if the code sacrificed a bit of expressiveness and speed for correctness and/or readability and maintainability.”

    This is the principle I follow religiously, William. Write for clarity and debuggability, optimize only when proved to be necessary by real world experience.

  30. @Casey Barker:

    In some ways, protobuf is the best of both worlds. I can build a system with ASCII-formatted protobufs at every layer, debug it, and then flip a switch to get an efficient serialization and transport.

    This is essentially letting your simulation model / prototype use exactly the same code as your final product.

    It’s a well-known truism that if you build your prototype and it works well enough, you just leave it alone. But if you are building something to google-scale, the amount of engineering effort to optimize your prototype will often be paid back multiple times over.

    Especially once you have solved the same problem multiple times and you take your lessons learned and build a reusable tool like protobufs.

    And also especially if you have the capability to gather useful metrics that can tell you when and where to optimize next.

  31. @Jay Maynard:

    Jeff, I’ll give you the same challenge I gave the guy arguing your side on G+: Describe the correct behavior of a client for the Second Life virtual world in less time and less verbosely than if you’d simply sat down to write the C++ in the first place.

    For a lot of problem domains, writing the code isn’t what takes the time. Writing the test cases can and should take a lot more time for some of those domains. Now you get into a chicken and egg problem — if the correct behavior is specified by the program, exactly how do you test it? (Hint: you don’t need to. It’s already perfect, by definition.)

    And yes, writing specifications can take more time than writing the code and the tests together. But those are understandable by managers, and transmutable by tech writers into end user documentation.

  32. @Patrick Maupin:

    >> AFAIK it sits on top of pipes… sockets…
    >>
    > The point behind that rhetorical question was that, unlike turtles, it isn’t ASCII all the way down. esr said protocols should be textual, you usefully helped by describing one that is, and I claim that is only possible because of what is underneath — which isn’t.

    Actually pipes, sockets and http *is* ASCII all the way down to the bytes.

  33. @Jakub Narebski:

    > Actually pipes, sockets and http *is* ASCII all the way down to the bytes.

    You can’t say it’s ASCII all the way down with a straight face — the “bytes” happen before your data actually gets anywhere, e.g. TCP or UDP.

  34. To me, the distinction isn’t “text vs binary” (it’s all binary, if you go down far enough) The schism is “can I inspect(*) this with a wide range of tools, or am I limited to some vendor’s proprietary drek, or is it something in between?”

    (*) and by inspect, I mean the data in question, not view some vague translated version of it.

  35. Actually pipes, sockets and http *is* ASCII all the way down to the bytes.

    HTTP/2.0 – which was largely designed by Google; it’s rebranded SPDY, basically — is deliberately a binary protocol, for many of the reasons I expressed above.

  36. @Patrick Maupin:

    >> Actually pipes, sockets and http *is* ASCII all the way down to the bytes.
    >>
    > You can’t say it’s ASCII all the way down with a straight face — the “bytes” happen before your data actually gets anywhere, e.g. TCP or UDP.

    But you can debug it with telnet and/or netcat (perhaps even with tcpdump)…

    Nb. debugging of ssh transport requires using ssh client.

  37. >>If you can’t strictly define the meaning of correct behavior for any given program, you don’t understand the problem well enough to write the program in the first place.

    >>If you can strictly define the meaning of correct behavior, you can write a program that’s provably bug-free.

    Well — I live in the world of manufacturing — vinyl windows. I don’t pretend to even try to write provably bug-free code. I write code that is pretty much good enough to build windows today.

    Mostly what I bring to the table is speed and flexibility. Most changes can be done in a few hours, at the most in a day or two. And most bug fixes get done in minutes, not days or weeks. We don’t need perfect code, and we can’t afford the manpower and time to test exhaustively. We need to build windows now. And get this change of process or dimensions into the system tomorrow, or at the most, next week.

    Not perfect, I just shoot for good enough.

    Jim

  38. Keep in mind that ascii is just a human readable subset of binary. At the computer and disk level, it’s all binary anyway.

    Ascii message protocols are simply a binary format that is easy for humans to read and if necessary, modify. So far as the computer is concerned, it’s still data in binary that has to be parsed to do the next thing.

    That said, easy to read and understand code, and data that can be understood when trying to debug is the way to go in almost all cases. Very, very few pieces of code are written in a world where efficiency and speed outweigh maintainability and modifiability.

    Non zero admittedly, but sparse.

    A sort of quote from years back — “Take care to write readable code. Someday, someone, somewhere will need to fix or modify it. And a pretty good chance it will be *you*.”

    Jim

  39. > Actually pipes, sockets and http *is* ASCII all the way down to the bytes.

    It is not really reasonable to say “all the way down” without actually mentioning TCP and IP. It’s ASCII until it’s not, just like my point about (conventional) filesystems.

  40. “But you can debug it with telnet and/or netcat (perhaps even with tcpdump)…

    Nb. debugging of ssh transport requires using ssh client.”

    My point was that these (and using the filesystem API to debug or work with a directory full of text files) are all the same kind of thing: using mature tools that work at the binary level to show you text.

  41. @Jakub:

    > But you can debug it with telnet and/or netcat (perhaps even with tcpdump)…

    Absolutely! But esr’s initial thesis was “readability of the protocol streams by humans trumps the minor efficiency gains from binary packing.”

    As I and many others here (including you with this very statement) have pointed out, in many cases, tools can make it reasonably easy to deal with binary packing.

    Life’s a trade-off, though, and building tools costs time. Try not to make a protocol that requires binary packing, or if you’re sure you need it, try not to reinvent the wheel, and use something like TCP or UDP or protobufs or anything else that already has all the tools you need.

    1. >Absolutely! But esr’s initial thesis was “readability of the protocol streams by humans trumps the minor efficiency gains from binary packing.”

      TCP/IP is different from protobufs and other bad ideas because it doesn’t impose any non-textual structure on the layer above it. Thus, you can write netcat and any textual protocol over TCP/IP becomes readable as far down as it matters.

      This generalizes. Any binary interchange protocol foo used to implement an 8-bit stream that doesn’t constrain the encoding above its level might as well be textual if the people using it do the right thing and don’t add binariness to their application protocols – if you have foocat.
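
      And foocat is a nearly trivial thing to write. A minimal sketch over plain TCP (netcat already does this; the point is just how small the job is, and this one is Unix-only because it selects on stdin):

      import select
      import socket
      import sys

      # Bridge stdin/stdout to a TCP socket so any textual protocol running
      # over it can be poked at by eyeball and keyboard.
      def foocat(host, port):
          sock = socket.create_connection((host, port))
          while True:
              readable, _, _ = select.select([sock, sys.stdin], [], [])
              for s in readable:
                  if s is sock:
                      data = sock.recv(4096)
                      if not data:
                          return                  # peer closed the connection
                      sys.stdout.write(data.decode(errors="replace"))
                      sys.stdout.flush()
                  else:
                      line = sys.stdin.readline()
                      if not line:
                          return                  # EOF on our side
                      sock.sendall(line.encode())

      if __name__ == "__main__":
          foocat(sys.argv[1], int(sys.argv[2]))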

  42. @William O. B’Livion:

    @Jeff Read:

    If you can’t strictly define the meaning of correct behavior for any given program, you don’t understand the problem well enough to write the program in the first place.

    Sometimes you have to start writing it anyway, because it *needs* to be done, and if you waited until you had an exhaustive understanding you’d never start because it’s alway changing.

    This. Although, of course, a lot of programming projects fail miserably, because the lack of initial understanding combined with a subsequent lack of time and/or financial resources means that there won’t be enough iterations to get it to work.

    Most of us live in the middle of several of those curves, and debugging *is* relevant and routine.

    Debugging is always relevant. I prefer when it’s not routine, or is at least better viewed as incremental development. I work in a group where we build things at the extremes — we design the analog and digital of a chip, the firmware that runs on the chip, APIs for others to communicate with the chip, etc.

    Debugging on the chip and firmware is hard precisely because of the limitations you discussed, but of course it’s still relevant.

    OTOH, we also build a lot of tools (for debugging, for testing, for customers, etc.) in unconstrained environments, and debugging is a comparative cakewalk there, because you can use techniques like wrapping everything in text if that’s what seems reasonable.

  43. PNG is exceptionally well designed for a binary protocol. It’s so beautiful that after I read the standard I joined the libnpng dev group just to play with it. Ended up writing the library support for several chunk types.

    Like many great binary formats, PNG is a pretty deliberate ripoff of the Amiga IFF format. There’s a reason why it’s so beautiful and well designed — it is directly descended from a tradition of such superb design.

  44. By the way, IFF was designed by Electronic Arts, in an era when Electronic Arts was run by hackers. Were it designed by the EA of today, half the spec would be given over to DRM chunks and you’d need to authenticate with Origin to read a damn bitmap file…

  45. There is a potential feature set that would remove much of my objection to protobufs, If

    Neither of your requests is available in the open source protobuf library, per se. Protobuf provides separate text vs. binary pack/parse calls, and I didn’t see any multiplexing calls in a quick glance at the dev guide.

    That said, I imagine both #1 and #2 are probably best implemented as a feature of the IPC layer. #1 would likely use a thin wrapper (or existing IPC wrapper) to distinguish between payload types. #2 would just be an out-of-band service call.

    AFAIK, Google hasn’t yet published an IPC library to go along with protobuf, but it wouldn’t be hard to add these features to an existing mechanism.
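
    To sketch what I mean by a thin wrapper (purely hypothetical framing, not anything Google ships): prefix each payload with a length and a one-byte mode flag, and treat an empty frame as the out-of-band “switch modes” request.

    import struct

    HEADER = struct.Struct(">IB")           # 4-byte length, 1-byte mode flag

    def write_frame(stream, payload, mode):
        """mode is 'T' for text-format payloads, 'B' for packed binary."""
        stream.write(HEADER.pack(len(payload), ord(mode)))
        stream.write(payload)

    def request_mode(stream, mode):
        write_frame(stream, b"", mode)      # zero-length frame = control message

    def read_frame(stream):
        header = stream.read(HEADER.size)
        if len(header) < HEADER.size:
            return None, None               # EOF
        length, mode = HEADER.unpack(header)
        return stream.read(length), chr(mode)

    The receiving side looks at the mode byte and hands the payload to the text or binary protobuf parser accordingly, so nothing upstream needs reconfiguring when a peer flips modes (esr’s #1), and request_mode() is the control message (esr’s #2).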

  46. Oops,I forgot the quote in my previous comment.

    >Is the Rule of Technical Greed different from “Jevons paradox”?

    They seem related but distinct. They both involve nonintuitive changes to the optimum as a result of the change in availability of a fungible resource.

  47. The real problem with protobuf as far as debuggability goes is that it’s not ‘self-describing’, i.e., if you’ve got a protobuf binary stream, you can’t decode it unless you know the _exact_ schema it was created for. ISTM that this is a significant limit, since such self-description would go a long way in recovering the flexibility of ‘textual’ form. You could deal with this quite easily with some discipline: for instance, if a debug option is set, have the first chunk in the protobuf stream be a special one with a ‘magic’ tag you don’t use elsewhere, containing a dump of the schema. Then debug tools can be written to deal with these binary streams, and you would even keep the efficiency of a machine-friendly format.
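
    The sender side of that discipline is short, assuming the standard Python protobuf library; a rough sketch (the receiving debug tool would rebuild message classes from the descriptor via descriptor_pool/message_factory, which I’ve left out):

    import struct
    from google.protobuf import descriptor_pb2

    def write_self_describing(stream, messages):
        """Write a length-prefixed schema chunk, then length-prefixed messages.

        Assumes all messages are instances of one type; the schema chunk is the
        FileDescriptorProto of the .proto file that defines that type.
        """
        fdp = descriptor_pb2.FileDescriptorProto()
        messages[0].DESCRIPTOR.file.CopyToProto(fdp)     # dump the schema
        schema = fdp.SerializeToString()
        stream.write(struct.pack(">I", len(schema)))
        stream.write(schema)
        for msg in messages:
            body = msg.SerializeToString()
            stream.write(struct.pack(">I", len(body)))
            stream.write(body)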

  48. > But you can debug [Git transport protocol] with telnet and/or netcat (perhaps even with tcpdump)…

    Or you can run a Git command with GIT_TRACE_PACKET and save the stream to a file.

  49. They’re identical, though (oddly, in view of my interests) I did not know of Jevons’s Paradox before this question came up.

    Then you’ve been hanging with the wrong crowd. It’s well-known among peak-oil theorists, for obvious reasons.

    1. >[Jevons’s Paradox is] well-known among peak-oil theorists, for obvious reasons.

      You misspelled “complete blithering idiots”. They get no cred for being able to quote Jevons when they haven’t the faintest clue about elasticity or demand substitution.

  50. >Every narrowing of the semantic gap, every advance
    >in our ability to manage software complexity, every
    >improvement in automated verification, is sold to us
    >as a way to push down defect rates. But how each
    >tool actually gets used is to scale up the complexity
    >of design and implementation to the bleeding edge
    >of tolerable defect rates.

    This is analogous to the concept of risk homeostasis, which says that people will gravitate toward a certain amount of perceived risk in their lives, and if outside measures are taken to reduce those risks, then people will modify their behavior to obtain new perceived benefits while keeping their level of perceived risk constant. This idea goes a long way, for example, toward explaining why automotive safety improvements haven’t resulted in the expected safety gains. 30 years ago if conditions were crappy you stayed in, but now that you have all-wheel drive, ABS, high-center brake lights, and electronic stability control, you’ll willingly venture out into that blizzard for casual errands.

    It sounds like programmers are doing something similar with the tools being handed to them: instead of reducing defect rates as expected, they’re jacking up complexity (a perceived benefit) while keeping defect rates at a level they’re used to. If increased complexity appears (in the programmer’s eyes) to confer more value than a reduced defect rate, then this shouldn’t come as a surprise. If it *actually* confers more value, then it isn’t a bad thing.

    1. >This is analogous to the concept of risk homeostasis

      /me googles…

      I did not know this was a thing.

      Had I known, I think I might have called it the “Law of Software Risk Homeostasis”. Or perhaps the Wilde/Jevons Law.

  51. It’s not just programmers that are constantly demanding more, more, more…blame that one as much or more on the marketroids.

  52. The Rule of Technical Greed is not just Jevons’s Paradox applied to program complexity; it’s also related to the rule of infinite demand.

    There’s a lurking caveat though. What humans demand without upper bound isn’t really program complexity, but rather problem solving; it’s just that we often need complex programs to get there. There might be a plateau at which the difficulty of such problems maxes out, perhaps for a long time, perhaps forever. Or at least until the universe’s entropy makes existence impossible. (But we’ll leave that as the Last Question for the reader.)

    If greater complexity is truly what is demanded, then it will probably lie on the path through programs specifically designed to “understand” more complex things than we can. I don’t just mean the way compilers do it – sure, fine, no human understands Photoshop at the assembly language level – I mean at some point where the program writes a program to solve a problem, and we don’t even get to see how it did it, but we can see that it solves the problem (or at least appears to).

    What if not only the universe is more complex than we can imagine, but even grossly simplified slices of it?

  53. CompCert only proves the semantics of the source code and compiled executable are identical…it does nothing to prove that your code is correct. You still need to debug it.

    Code generation from a mathematical spec moves this one layer higher…but you could still have mathematically described a pile of crap….if the system does not behave as you and your infallible 1337 math-fu expect it to, you’ll still need to debug it to unearth the problem and refine your math.

    I’ve been around long enough to witness God-knows-how-many-iterations of this human desire to go stampeding off after the next latest, greatest, hottest silver bullet….file me under “sober & jaded”

    1. >I’ve been around long enough to witness God-knows-how-many-iterations of this human desire to go stampeding off after the next latest, greatest, hottest silver bullet….file me under “sober & jaded”

      Amen. Nobody ever caught me claiming open source was a silver bullet, because I wasn’t damn-fool enough to believe it. Everything we learn about good practice becomes necessary, but not sufficient.

  54. Hehehe, risk homeostasis is the reason I never wear a bicycle helmet. On average it earns another foot of passing clearance from the motor-cagers. (No-I-haven’t-done-a-study-so-this-is-anecdotal. I think others have though.)

    The original “technical greed” formulation prompted me to think of jumbo jet marketing. Every time Boeing or Airbus roll out another bigger jet, we see all these artists’ conceptions of urbane travelers hanging out in the mile-high martini bar. After all, what else will we do with all this extra room? As it always turns out, we get another 40 rows of sardine-class seating instead.

  55. @ESR

    Well, this sucks, because it means that if you refuse to buy into the latest trend, then you are either going to write very buggy software or have to stick to a lower level of complexity than others and get outcompeted.

    OK, it is not as harsh as that – you still use C, and C is fundamentally still at the structured paradigm, and you probably don’t see either of those changing. If I were to ask you now why you still use C and not D or anything similar for gpsd, you would probably answer that the huge amount of tools, libraries, code generators and experts (you may find a statistician who can read C, but finding a statistician who can read D can be a whole lot harder) is more of an advantage than the backwardness of the language paradigm itself is a disadvantage.

    So yeah, this means programmers don’t always need to buy in into the latest trend, still…

    OTOH I think a modern Python program is in many ways less complicated than a Turbo Pascal program from 20 years ago, because you spend less time working around the various restrictions of the language itself. This surely must count for something, and frankly I think the average mp3 player app is not really that complicated.

    Today complexity seems to happen behind the scenes and not in the form of many buttons on a screen each doing something – consider something like Hipmunk. I know a lot of work was invested in it, and I know it totally does not look like something a lot of work was invested in, but behind the scenes it tries to combine gazillions of pieces of information from gazillions of sources and tries to present it in a way that the user does not even notice anything, just miraculously finds the app is reading his mind and presenting exactly those flights and hotels that he wants to choose.

    In other words, complexity today sounds a whole lot less like a lot of menu items and more like some sort of AI.

  56. @Casey

    >but that just means the bugs will have migrated north to the requirements spec

    Exactly, this is the single No. 1 point that, if every programming blogger would “get” it, a lot of debates would go away: there are theoretically infinite layers of texts, each layer being the implementation of the layer above and the specification of the layer below, and each containing bugs. Sometimes you don’t even specify at all, because you have easy problems in an easy scripting language, so the spec is just “watch me working and automate whatever you can”. Sometimes you write a set of requirements informally in an e-mail. Sometimes you go into more and more detail, but once your spec gets sufficiently detailed it will look a lot like a program written in pseudocode; what matters even more is that in this case there is probably going to be some kind of pre-spec, like a scope definition document. So the spec itself can be buggy by not properly implementing the scope definition, or even the scope definition could be buggy because it solves a low-priority problem while leaving a high-priority problem intact. Sometimes the bug is in the whole act of worrying about software at all – if I see my users arguing with customers 90% of the day and entering sales orders 10% of the day, it is a bug to even worry much about improving my order processing software at this stage!

    This, I think, is very clear. The issue is that many programming bloggers essentially see their job as turning specs into code, because writing specs is Someone Else’s Problem. Thus the discussion is all too often about what tools, methods and paradigms turn specs into code the best possible way.

    The point is, it is not clear at all that for the final goal, namely, improving human lives through making computers do all kinds of things for people and perhaps making some money while doing so, the inefficiency of turning specs into code is the bottleneck at all!

    In my field, ERP/business software, there are hardly any bugs that are not the result of some kind of human miscommunication i.e. that are more misfeatures than classical bugs.

    Thus, programmers may find clever tools and methods for reducing defect counts or generally improving turning specs to code, but it is not clear at all how much it improves the whole thing.

    Meanwhile, for every 100 blog posts enthusiastic about TDD there is at most 1 that says anything about how to specify requirements well, or how to find out which problems of your users have the top priority, etc.

  57. @ESR

    >If the nature of the substitution is not obvious, consider for example, that using GPSes and computers to do routing for a truck fleet economizes use of fuel, driver hours, and wear on the rolling stock.

    I think there is an even deeper lesson to learn here. Yes, it is true as long as you know the price and quality of “coal”, i.e. code, if you know roughly what you can expect for a given investment. But that is not the case. For some companies yes, but for many, software is still a huge question mark: they have no idea how to test or ensure the quality of software they purchase, or if they employ in-house programmers they have no idea how good they are.

    Contemplate this: the truck comes with a warranty, and the trucking profession has certain rules; drivers who regularly break them are objectively bad drivers and should be fired. Software does not come with a warranty, and nobody can really say how many bugs a week make a programmer bad enough to be considered a bad programmer and fired.

    Thus, this substitution process is all about substituting a known quantity with another quantity that for a lot of firms is unknown.

    Now imagine if, in Jevons’s time, only a few businesses had been able to measure the caloric content of coal and others not: what would have happened? Those firms would have had a huge competitive advantage.

    This suggests that the ability or inability to estimate or check or test or somehow figure out software quality and programmer quality could, today, make or break a non-technical firm as well!

    This is a bit hard to explain to CEOs to whom the IT dept is a necessary evil cost center, not a profit center.

    There is another interesting dilemma: for a non-technical firm both are hard, but which is easier, to figure out software quality or programmer quality; in other words, buy or build? I would say – programmer quality is easier, hence build. All the non-technical boss needs to do is to estimate whether the candidate is smart, curious, gets things done and is honest. This is something they should be able to do. Then they can ignore the issue of estimating software quality: whatever this fellow builds (or suggests buying) is probably good enough.

    What do you think?

    1. >What do you think?

      While what you say is plausible, I think most non-technical bosses are not very good at estimating programmer quality – and a lot of them know they aren’t.

  58. OK, it is not as harsh as that – you still use C, and C is fundamentally still at the structured-programming paradigm, and you probably don’t see either of those things happening. If I were to ask you now why you still use C, and not D or anything similar, for gpsd, you would probably answer that the huge stock of tools, libraries, code generators and experts (you may find a statistician who can read C, but finding a statistician who can read D is a whole lot harder) is more of an advantage than the backwardness of the language paradigm itself is a disadvantage.

    This explains the Corollary to Greenspun’s Tenth Rule, which is: any sufficiently advanced Common Lisp or Scheme program will be rewritten in C++, Java, or Python. Programming is a social activity, and by going with the tools that have the most support, regardless of how technically advanced they are or aren’t, you save yourself a lot of hassle.

  59. @ESR

    > I think most non-technical bosses are not very good at estimating programmer quality – and a lot of them know they aren’t.

    I seriously can’t fathom why – it is not the skill level that needs to be estimated, but rather the person: whether they have the kind of personality, character and intellectual traits that tend to predict a decent skill level in one’s profession _whatever that profession happens to be_, plus the desire and ability to learn new things fast.

  60. @Jeff Read: correct, but not that simple either. By that logic we’d still be using FORTRAN and ALGOL – except, of course, that the level of support was not that big back then to begin with, and the marginal utility of paradigmatic language improvements was big. One possibility is that the decreasing marginal utility of paradigmatic improvements, combined with the huge marginal loss of giving up these levels of established support, makes the two projected curves meet, and at that level languages ossify hard and become reluctant to change. And that point would not even be too far from now: C# got “lispy/pythonic” enough not to gain much from gradual, marginal paradigmatic improvements, maybe Java too – perhaps this is already happening.

    But there is another possibility – paradigmatic improvements inside the ecosystem, without throwing it away.

  61. > but that just means the bugs will have migrated north to the requirements spec

    Sure, but the whole point of having a _formal_ spec is that it simplifies matters by abstracting away from implementation details. You can also prove simple properties that a formal spec is supposed to have.

    Is it a silver bullet? No way, because informal, hard-to-describe requirements will eventually dominate, as you say. Does it help? Of course it does.
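    To make the “simple properties” point concrete, here is a toy sketch – my own made-up example, in plain Python rather than a real proof tool – where the spec for sorting says only what the result must look like, not how to compute it:

        # Toy illustration (made-up example, not a real verification tool):
        # the "formal spec" for sorting says nothing about *how* to sort,
        # only what a correct result must look like.
        from collections import Counter
        import random

        def meets_sort_spec(inp, out):
            """Spec: output is ordered and is a permutation of the input."""
            ordered = all(a <= b for a, b in zip(out, out[1:]))
            permutation = Counter(inp) == Counter(out)
            return ordered and permutation

        def my_sort(xs):          # any implementation you like goes here
            return sorted(xs)

        # Crude property check: sampling, not a proof, but it already catches
        # whole classes of bugs without ever mentioning implementation details.
        for _ in range(1000):
            data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
            assert meets_sort_spec(data, my_sort(data))

    A real tool proves this for all inputs instead of sampling, but the division of labour is the same: the spec pins down the what, the implementation supplies the how.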

  62. One could argue that FORTRAN and ALGOL only died because the pre-ASCII, punch-card format had become obsolete, which encouraged users to switch. Even then, many users in the most prominent niche for FORTRAN77 (numerics/computation) chose to stay with later versions of the language. And C is not that different from ALGOL as far as the basic paradigm goes.

  63. I put a nice, chunky, capital-letter lemma in this thread … and NOBODY sought to engage it!

    What a fucking world we live in now!

  64. @guest, yes, but then that formal spec – probably written in something suspiciously like a programming or mathematical/algorithmic language, a pseudocode, a formal language – requires an informal, accessible pre-spec, something the average user can understand, comment on, and use to point out if something is missing. So it is turtles all the way down, each layer being the spec to the layer below and the implementation of the layer above. Take an average Python program, travel back in time and show it to an assembly programmer: he will consider it a spec. And perhaps one day today’s “formal spec” becomes interpretable / compilable: if it is formal enough, why not? The only reason the infamous “you can program our computers in English” was and still is bullshit is that English is not formal enough.

  65. Didn’t the Jargon File have an entry that mentioned this exact problem with programming computers in a human language? That the effort of nailing down a specification of what we really want was the true bottleneck? I looked up “visual programming” and other possible terms, and couldn’t find it, but I swear I remember reading it in there.

    (I still wonder whatever happened to Jargon Friends.)

  66. Defn. 2 is close, but not quite. The text I’m thinking of explicitly made the point about why “programming in English” was so hard that it was effectively programming in $Code. Perhaps it was “English”, and the entry has since been revised; I’m too lazy to go version hunting.

  67. @shenpen:

    Take an average Python program, travel back in time and show it to an assembly programmer: he will consider it a spec.

    In point of fact, in the early 2000s I was writing DSP assembly language for a modem chip (remember those?)

    We had abysmal debugging capability on the modem, so I preferred to do most of my work in Python. When adding significant new code, I would code a prototype and a testbench in Python, and then make regression-tested revisions of the Python that looked more and more like the assembly language should, and untested revisions of assembly language that looked more and more like the Python.

    Whenever a construct in my Python didn’t cleanly translate to assembler, I would write the approximate assembler I wanted, and then modify the Python so that it mapped pretty closely to it, and then go back and modify the assembler to look more like the Python, so it was an extremely iterative process.

    Once the assembly language and Python looked sufficiently similar (and it was actually pretty spooky how similar that could be, line for line), I would put the assembler in the real modem and fire it up. First time success was amazingly high.
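    To give a flavour of what “assembly-shaped” Python means – a made-up toy, nothing like the real modem code – the style is one simple operation per line, an explicit accumulator, and multiple return values, so that each line maps onto one or two instructions in the hand translation:

        # Toy flavour of "assembly-shaped" Python (made-up, not the modem code):
        # one simple operation per line, an explicit accumulator, multiple
        # return values.

        def fir_step(sample, coeffs, delay_line):
            """One FIR filter update, written assembler-style."""
            delay_line = [sample] + delay_line[:-1]   # shift the delay line
            acc = 0
            for c, d in zip(coeffs, delay_line):      # MAC loop: acc += c * d
                prod = c * d
                acc = acc + prod
            return acc, delay_line                    # multiple return values

        # Tiny Python "testbench": the impulse response should read back
        # the coefficients one by one.
        coeffs = [0.25, 0.5, 0.25]
        delay = [0.0, 0.0, 0.0]
        for s in [1.0, 0.0, 0.0, 0.0]:
            out, delay = fir_step(s, coeffs, delay)
            print(out)    # 0.25, 0.5, 0.25, 0.0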

    Before Python, I used to do similar prototyping for assembler in C, but you can’t make C visually map nearly as closely to assembler as you can Python. Declaration cruft and the lack of multiple return values are some of the reasons.

    Now that I think about it, this may explain one of the reasons why Jessica and I differ on the importance of the lack of type-checking in Python: for much, if not most, of my career, type-checking simply hasn’t been an option anyway for the code my deliverables are written in, and Python actually gives me *real* type checking for things I care enough about to write custom classes for.

    1. >Isn’t “the Rule of Technical Greed” (or whatever you’ve decided to dub it at this point) really just a parallel to Parkinson’s Law?

      There’s a loose connection, but I think connecting it to Parkinson’s Law misses out on the interesting risk-homeostasis angle.

  68. Risk is quantifiable, isn’t it? If there is an investment option for $100 that has a 50% chance of paying back $120 and a 50% chance of paying back $90, then the expected payoff is (0.5 × $120 + 0.5 × $90) = $105; you look at opportunity costs, you time-discount it back, and then you know whether it is worth it or not.

    Now let’s keep the downside the same – still a 50% chance of a $90 payback – and increase the reward to a 50% chance of a $180 payback. Now the investment option looks a whole lot more attractive.
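    A back-of-the-envelope sketch of that comparison (the numbers above, plus a made-up 5% discount rate):

        # Back-of-the-envelope expected-value comparison (illustrative numbers
        # only; the 5% discount rate is an assumption).

        def expected_value(outcomes):
            """Expected payoff of (probability, payoff) pairs."""
            return sum(p * payoff for p, payoff in outcomes)

        cost = 100
        discount_rate = 0.05                           # assumed opportunity cost

        bets = {
            "original":      [(0.5, 120), (0.5, 90)],  # EV = 105
            "bigger upside": [(0.5, 180), (0.5, 90)],  # EV = 135
        }

        for name, bet in bets.items():
            ev = expected_value(bet)
            pv = ev / (1 + discount_rate)              # one period of discounting
            print(f"{name}: EV = {ev}, discounted = {pv:.2f}, cost = {cost}")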

    In other words, how can risk homeostasis even exist? Shouldn’t changes in expected payoffs heavily influence our willingness to take risks, both up and down?

    How to put it… there is a new paradigm but not many more exciting program ideas: risk-taking goes down. There is no new paradigm but there is a chance to write revolutionary software: risk-taking goes up. Isn’t that how it should work?

    What am I missing?

    In other words, risk is just a cost – why is it taken as a pain? (Cost: something you accept no matter how high, as long as it pays off. Pain: something you just cannot take beyond a certain level, no matter how high the payoff.)

    1. >In other words, risk is just a cost – why is it taken as a pain?

      Probably because in the environment of ancestral adaptation risks had teeth.

  69. @Shenpen
    “In other words, risk is just a cost – why is it taken as a pain?”

    What is better, a meal today or 7 meals next week?

    Without a meal today, you might not be around next week. That is the rule of risk-averse discounting.

  70. Humans, like other species, are evolved to incorporate a good deal of habit into their lives as well. Analyzing a new risk is seen as pain to the extent that it requires activation energy – a change to their habits. It’s not pain in the same sense, of course, but it still elicits a feeling of aversion.

    Some people learn to assess new risks of certain types, and for them, assessment becomes the habit, and they thrive. Anyone not used to that risk type survives by avoiding those risks if possible, by learning to assess them, or by hiring someone who will.

    Risk homeostasis is also closely related to moral hazard, something actuaries can often measure in real dollars. I’m surprised this hasn’t been brought up yet (even by me).

  71. Sorry to arrive after the battle.

    (1) Excellent post.

    (2) It also applies to other domains. When you improve a transport infrastructure, the level of traffic jams usually stays steady (but the overall traffic increases, and with it the economic activity). In the ’80s, they doubled a road bridge next to my home, the Viaduc de Gennevilliers (from 2 to 4 lanes each side, basically building a second bridge next to the first one). Ten months later, the traffic jams were exactly the same as before.

    A very useful build (for economic reasons), but not the usefulness people expected – it’s still a pain to go from Cergy to Paris by car at rush hour.

    (3) Of the very first evolved languages, only 2 are still alive today, and not by much: LISP and COBOL. LISP was designed for ultimate power, COBOL for ultimate readability, and each absolutely sucks at the other. COBOL is absolutely white (anyone able to read some English can grasp what the code does, even if details remain subtle), and LISP absolutely black (black magic in terms of potential, but impossible to understand for anyone who is not a very skilled hacker).

    Most languages designed since those 2 try to get the best of both worlds – readability and power – in various balances. LISP and COBOL are still alive in areas where their absolute imbalance is still needed. COBOL, for example, is still very useful for accounting in banks, where programming power is absolutely not needed: you just need +, – and *; sometimes, when things go wild, you might even need to divide. But you absolutely need a mediocre programmer to be able to find quickly where the hell, in those 187,000 programs (average size: 2,000 LOC), a specific operation is done, and how. For that, COBOL is unbeatable. In my 10 years in the banking domain, I suffered from its absolute lack of power only once.

    The ideal language would have COBOL’s readability and LISP’s power. That’s not for tomorrow. But some modern languages, like Python or Ruby, do far better than 50/50 on that balance.

  72. http://esr.ibiblio.org/?p=6271&cpage=1#comment-1146051
    and then
    http://esr.ibiblio.org/?p=6271&cpage=1#comment-1146155
    “I often lament that we’ve forgotten how to write small, tight, fast code. ”

    Be FORTH-like on stack-optimized architectures.

    It can’t be suboptimal until we are rating our CPUs by qubits.

    I recall seeing the whole image your icon is cropped from, and I think the sleek, catsuited actors in the TRON movies are FORTH, while the ubiquitous reality barely contained in strained spandex is every other programming paradigm.

  73. el_slapper, I think if readability were a real goal, we’d put a lot more effort into tools for extracting readable, natural-language descriptions of, say, LISP or Haskell programs. (This is a feasible goal, per se; Montague grammar, as used in formal natural-language semantics, is closely related to the type theory known from programming languages.) But AIUI, the consensus view is that even the purported readability of COBOL is quite illusory once you expand beyond relatively small programs.

    A comparison with math may be instructive: historically, math texts used to have lengthy prose descriptions that made even simple algorithms (such as those for arithmetic, or for solving simple equations) almost inscrutable. Nowadays, most mathematical writing strikes a careful balance between formal expressions and natural language; the latter is mostly used to portray the high-level, broad structure of the mathematical content.

  74. I was wrong. You should really be using Cap’n Proto.

    The author of Cap’n Proto found that protobuf introduced unacceptable overhead, stating that servers at Google could spend up to 30% of their time marshalling and unmarshalling protobuf objects.

    Cap’n Proto requires zero marshalling and unmarshalling. mmap()ing a Cap’n Proto structure Just Works on all extant CPU architectures. (Hint: Big-endian is going the way of nine-bit bytes.) If you need to serialize a data structure and send it somewhere, reach for Cap’n Proto first.
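    If the zero-copy point isn’t obvious, here is a toy illustration of the idea in plain Python – struct plus mmap, which is not Cap’n Proto’s actual API, just the general shape: a fixed-layout record can be picked straight out of a mapped file at a known offset, with no parse pass over the whole message and no intermediate object tree.

        # Toy zero-copy illustration (plain Python, *not* Cap'n Proto's API):
        # fixed-layout little-endian records read straight out of an mmap()ed
        # file at known offsets -- no whole-message decode step.
        import mmap, os, struct

        RECORD = struct.Struct("<Id")   # uint32 id, float64 value; 12 bytes

        # Writer: lay the bytes down in their wire format.
        with open("records.bin", "wb") as f:
            for i in range(3):
                f.write(RECORD.pack(i, i * 1.5))

        # Reader: map the file and index a record by offset, touching only
        # the dozen bytes actually needed.
        with open("records.bin", "rb") as f:
            buf = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
            rec_id, value = RECORD.unpack_from(buf, 1 * RECORD.size)  # record #1
            print(rec_id, value)        # -> 1 1.5
            buf.close()

        os.remove("records.bin")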
