Holding up the sky

During the last few years I’ve noticed a change in the meaning of my life – well, my life as a hacker, anyway. I had an exchange on a mailing list last night that made me think it’s not just me, that the same change has been sneaking up on a lot of us.

It’s part of the hacker ethos to (as Alan Kay put it) predict the future by inventing it – to playfully seek solutions to problems people outside our culture are not yet even thinking about. We still do that, and I think we always will.

But increasingly, as the world of pervasive networks and ubiquitous computing hackers imagined decades ago has become reality, we’re not just the innovators who thought of it first. Now we’re responsible; having created the future, we have to maintain it. And, as the sinews of civilization become ever more dependent on the Internet and software-intensive communications devices, that responsibility gets more serious every year.

This makes for a subtle change in our duties and our relationship to our work – a gradual shift from merry prankster to infrastructure gnome.

What started me thinking about this seriously is the Bufferbloat project. Those of us working on it believe we’ve identified a cluster of serious problems deep in the Internet’s implementation, and we’re working hard on diagnostic tools and mitigation methods.

Sometimes, as we work on this, it’s difficult to wrap our minds around the implications of the worst-case scenarios. We’ve identified problems that plausibly could trigger a congestion collapse of the entire Internet. The one previous time the Internet suffered a congestion collapse, in the late 1980s, almost nobody but a handful of geeks noticed. Today, a service interruption of the same relative magnitude would be a civilization-challenging disaster.

Nobody is panicking about this. We’ve got a remediation job to do, and we’re about as competent to do it as any team could be, and the odds that the Internet will random-walk into an unrecoverable crash before we can fix the vulnerabilities seem acceptably low. But damn. This isn’t an aspect of the future we were expecting, though in retrospect we probably should have. I look at us and wonder: when did we join the people who have to hold up the sky?

While an Internet congestion collapse is still a high-end extreme, it is no longer a particularly rare thing even for smaller projects to be life-critical. My own GPSD is a good example. Maybe it was all about mapping WiFi hotspots and geocaching and research applications a decade ago, but nowadays the known deployments include the IFF systems of armored fighting vehicles in wartime. Bugs in my stuff could kill people.

I don’t lose sleep over this, because I know I’m very good at what I do. If it weren’t me obsessing about our regression-test suite and our portage tests and annotating our code for static checking it would probably be someone less experienced and skilled than I am, and the odds of consequent avoidable deaths would go up. But, again, damn. This is not exactly what I was expecting thirty years ago, when I signed on to the whole hacker-ethos thing to push on the frontiers of possibility.

And it’s not just the bufferbloat guys, and it’s not just me. Think of Linux on embedded systems, diffusing its way into medical equipment. And the flight avionics of airliners. And thousands of other invisible deployments where crashes and errors can kill. Hackers didn’t go looking for the job of holding up the sky, but as ephemeralization and distributed machine intelligence become more and more critical to the way human civilization functions, and open source takes over ever-larger pieces of that infrastructure, that job is finding and settling on us.

There may be transitions like this associated with every new technology. But it’s happening faster now. Newcomen and Watt didn’t live to see the day when the world’s factories and commerce became dependent on steam engines, but I’ve lived to see my code become ubiquitous on almost everything that lights up pixels on a digital display. And I’m far from the only hacker who can say similar things.

Ubiquity, like great power, requires of us great responsibility. It changes our duties, and it changes the kind of people we have to be to meet those duties. It is no longer enough for hackers to think like explorers and artists and revolutionaries; now we have to be civil engineers as well, and identify with the people who keep the sewers unclogged and the electrical grid humming and the roads mended. Creativity was never enough by itself; it always had to be backed up with craftsmanship and care – but now, our standards of craftsmanship and care must rise to new levels because the consequences of failure are so much more grave.

But that’s OK. We’ve always had an ethos of service, an other-directed component to our idealism. I believe hackers, as a culture, can handle these new demands; the adjustment required is not a break with our traditions but a broadening and deepening of them.

And then there’s this. Back when what we did on computers was more exclusively playful exploration and those computers were less ubiquitous in everyday life, it was easy to wonder sometimes if our hacking – as much fun and as challenging as it was – would ever actually mean anything outside of research labs and universities and corporate server rooms.

Now there is no longer doubt; what we do matters. Today hackers are, in fact, among the unacknowledged maintainers of civilization. There is honor in that.

Comments

  1. Well, the dreaded day has come, and even you, esr have recognized it. We can no longer really afford to have a group of loosely organized volunteers just create things. You must become ENGINEERS. It’s long been fashionable to denounce the very idea of software engineering, with the associated rules, certification, bureaucratic requirements, oversight, degree qualification, and responsibility, but it’s all becoming necessary. Creation is fun, but maintaining what you create is work, and you must maintain what you create; who else knows the most about it?

    Right now, hacker creations are free, and so it is harder for someone who uses them to collect damages from their use, but how long will this situation last? If damage results, someone will have to pay. Is there to be a hacker insurance collective to indemnify your users when all those eyes on the open source still miss something?

    Small is beautiful, but it looks like the future lies with the software behemoths. I hate to say it, but only they will have the resources to survive, and use regulation to squeeze hackers out.

    Please don’t think I’m crowing. I’m not happy about this at all.

    1. >Small is beautiful, but it looks like the future lies with the software behemoths.

      No. Their way has been tried, and it failed. That is why we hold up an ever-increasing arc of the sky.

  2. I’d be amused to read the headlines much of the media would write if they read this. At any rate, I’m just glad it’s the folks who are most capable and most passionate that are involved in keeping these systems functioning. I shudder to think what would happen if the core of the internet were closed, proprietary and maintained by an ever-rotating cadre of dudes at some megacorp.

  3. Yep. Those of us not concerned with inventing a compensatory ‘hacker’ cult figured this out quite some time ago.

    Welcome to the party.

  4. > It’s long been fashionable to denounce the very idea of software engineering, with the associated rules, certification, bureaucratic requirements, oversight, degree qualification, and responsibility, but it’s all becoming necessary.

    To observe bureaucracy in operation, read up on the Challenger disaster.

    Contrary to what Charles Murray claims our elite is not very bright, is not the cognitive elite, is considerably dimmer than it used to be, and is getting dimmer every day, as conformity and political correctness become ever more important, and ability ever less important. Indeed it has been visibly and obviously getting dimmer since around 1910 or so, and I would argue getting dimmer since 1870. Bureaucratic requirements, oversight, and degree qualification mean that final authority slides into the hands of stupid people, as happened in the Challenger disaster.

    Secondly, today’s bureaucracy runs on consensus, which invariably means dominion by the evil and the insane. The system you refer to does not work. Observe what the IETF has become.

    Here the space shuttle manager explains to the Challenger Inquiry Chairman why he ignored Lund’s paper that the Challenger would, if launched in cold weather, go BOOM! “Failure effects summary. Actual loss. Loss of mission, vehicle and crew due to metal erosion, burn-through, and probable case burst, resulting in fire and deflagration.”

    The manager whips up some nonsense that this paper, which he read and signed off on, does not mean what it says, nor say what it means, and in any case is all wrong – but his explanation is incoherent and makes absolutely no sense. His explanation basically boils down to “This disaster scenario was too complicated for me to understand and totally went over my head, so I just blew it off”.

    Upper management was, as Feynman tells us, in denial, illustrating that consensus always winds up dominated by the evil and insane, but the inquiry reveals that they managed to remain in denial despite signing off on papers that told the truth. It is a lot easier to be evil if the people you are lying to are stupid, and a lot easier to be insane if you yourself are stupid.

    So once the consensus of the elite comes to be dominated by the evil and the insane, the elite tends to recruit for stupidity. Hence the decline of western civilization and the slowing of technological progress.

    1. What caused the Challenger disaster was pretty clearly bureaucracy yes, but the fundamental algorithm of bureaucracy is hierarchy, *not* consensus. If NASA was run by consensus, the engineers who wrote those papers pointing out the possibility of a fatal explosion would have blocked consensus on a launch, and it wouldn’t have happened until their technical criticisms were dealt with to their satisfaction.

      As Chetan Dhruve describes in the detailed Challenger case study in his book ‘Why Your Boss is Programmed to Be a Dictator’, each layer of managers in the hierarchy downplayed the seriousness of the engineers’ warnings to their own superiors (nobody likes giving their boss bad news). By the time the reports reached the decision-making elite at the top of the management pyramid, it all looked hunky-dory to go ahead with a launch. The rest is history.

  5. I think that you ought to consider whether this is another instance of the concept of the “invisible hand”.

    Hackers write software and make it available. People use it. If they want greater certainty of service than the distro provides, they pay for it – from a vendor like Red Hat, or by employing/contracting good people. Projects want their products to be used, so the interests of the users and developers are usually more or less the same.

    A developer develops something, alone or as part of a group and the result is made available. I don’t see that this makes the developer, in any sense, responsible for maintaining that software.

  6. Regarding: “Think of Linux on embedded systems, diffusing its way into medical equipment. And the flight avionics of airliners.”

    Do not such critical devices use only hard real-time operating systems? Is this no longer true? Was it ever true? For decades, I thought it was.

    Is Linux moving into critical devices due to excellence, or device makers trying to cut corners and save money, or some mix, or something else?

    Thank you. Have a productive day.

    1. >Is Linux moving into critical devices due to excellence, or device makers trying to cut corners and save money, or some mix, or something else?

      Some mix of the above. Another factor in the rising use of Linux on general-purpose chips for embedded work is that it’s simply becoming difficult to find the people who know how to write machine code for old-style 12- and 14-bit PICs. There were never very many such people, and an increasing number have been lost to attrition. Even device manufacturers who would like to do things the old-school, pre-Linux way have increasing trouble putting together teams.

      But for the changes I discuss here, why it’s happening is less important than the fact that it’s happening.

  7. I suspect software design can never really be engineering.

    An engineer can prove beyond reasonable doubt that a design will actually do what it is supposed to do. Say a bridge will carry a specific load over a specific time under a range of environmental conditions (wind, earthquakes, temperature).

    There are mathematical proofs that prevent such certainty for software. You can often prove at great cost that some restricted program will actually perform as intended on a restricted processor. The restrictions are often such that they make the whole exercise futile in real life.

    On the other hand, medical practice, horti-/agriculture, animal husbandry, and the legal professions have the same problems. There is always incomplete knowledge and limited predictability, even though lives can be at stake. And I see software projects moving in that direction of best practices, evidence-based approaches, and modularity.

    It is obvious that these applied fields could not work in an equivalent of a “closed source” environment. Hence the demand that all closed source must die in software too.

  8. This sounds like a whole culture going from adolescent playfulness to sober, sombre, serious and boring maturity. I think you are mourning the collective inner child of hackers which sooner or later has to go.

    In other words, a transition from the view of science (“no such thing as a failed experiment, we always learn something, at least that this way we can’t do it”) to the view of engineering (“certain stuff MUST work”) and with that becoming the kind of boring and nitpicky people mechanical engineers tend to be. Also it will mean more reliance on “best practices” (which is a code word for tradition) rather than experimentation, as in more and more things risks will have to be kept low. A “best practices” culture is also a culture of established experts with certifications and whatnot, not self-taught innovators, which also has some authoritarian tinge to it. No wonder you don’t like it. I personally have no problem with it, but I too wonder, if hackers go the way of the certified best-practices expert, who will invent the next future?

    1. >I think you are mourning the collective inner child of hackers which sooner or later has to go.

      I didn’t think I was mourning anything. I can put in the kind of effort GPSD requires and still work on Battle For Wesnoth. I can write joke RFCs while I work on the bufferbloat problem. The choices involved are not zero-sum.

  9. My post came right after a long one by James A. Donald and it sort of looks like my post is a response to him.

    I should have made clear that I was addressing ESR when I started:

    I think that you ought to consider whether this is another instance of the concept of the “invisible hand”.

    Hackers and other developers make software and make it available. If someone wants to use it but wants it to be more reliable than it already is, they have to pay for it one way or another.

  10. I wonder if it would be hard to mathematically prove (perhaps tool-assisted) correctness of GPSD code… assuming that the kernel and libraries are bugless ;-)

  11. “Hackers” have done a pretty lousy job of holding up the sky. Exploits that are 100% preventable with better design keep pouring in by the bucket-load. Even without better design simply choosing better languages (e.g. Forth, LISP) could eliminate whole classes of errors. But hackers are only interested in the prestige that comes with proliferating ever more complex software that sort-of-kind-of-works-if-you-treat-it-with-kid-gloves. Every release of Firefox has a bunch of critical holes in it. Even gpsd had a code execution bug directly traceable to the design of C.

    1. >“Hackers” have done a pretty lousy job of holding up the sky.

      Um, so who does a better job? You omitted to mention that other browsers have far worse records than Firefox even when you normalize for userbase size. And in general the hacker culture has a pretty enviable record – how frequent are bugs in the Internet core? Our stuff is as ubiquitous as it is now because other technical cultures failed the reliability and sustainability test.

      >Even gpsd had a code execution bug directly traceable to the design of C.

      You’re probably thinking of that format-string vulnerability in ’05. Sadly, languages like LISP and Python weren’t practical then for software targeting embedded deployments, and still aren’t. When that changes, you can expect hackers to be the people aggressively pushing the language transition.
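
      For readers unfamiliar with the bug class, here is a minimal sketch of a C format-string vulnerability and the standard fix. This is illustrative only – a made-up log_client() helper, not the actual 2005 GPSD code:

        #include <stdio.h>

        /* The format-string bug class: untrusted text handed to printf()
         * as the format string.  Hypothetical example. */
        static void log_client(const char *msg)
        {
            printf(msg);        /* BAD: %s/%n directives in msg get interpreted */
            printf("%s", msg);  /* GOOD: the untrusted text is treated as data  */
        }

        int main(void)
        {
            log_client("client said hello\n");  /* harmless input: both lines just print it */
            /* An attacker-controlled msg such as "%x%x%n" can crash the process,
             * or worse, through the BAD line above. */
            return 0;
        }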

  12. @Roger Philips
    “> There are mathematical proofs that prevent such certainty for software.
    Like what?”

    The Halting Problem, e.g., work starting with Turing. Then there is the work on NP complete problems, which basically says that if you enter “interesting” problems, there is no guarantee you will be able to provide a solution within the allotted time. Btw, anything containing global optimization or open search could run into some NP bounds.

    Together, what they say is that when starting the proof of correctness of a Turing Complete application, you cannot guarantee that you will be able to finish the proof within the allotted time. And when sampling the space of possible inputs, you cannot get a statistical distribution over the run-time performance (due to edge cases).

    The devilish problems lie in interrupts and timing (deadlock). If you disallow interrupts and can prevent shared resources, often proofs of correctness can be done. But that leaves out most of the useful software.

    1. >The Halting Problem, e.g., work starting with Turing.

      Roger is a computer scientist; he knows all about the Halting Problem etcetera. Doubtless he’s about to spring some sort of cute rhetorical trap on you.

      /me gets popcorn.

  13. @esr
    “>The Halting Problem, e.g., work starting with Turing.
    Roger is a computer scientist; he knows all about the Halting Problem etcetera.”

    I remember having “sparred” with him earlier.

    I must admit my own information is becoming (seriously?) dated. But if he can tell me it is currently possible to do proofs of correctness on multithreaded Turing Complete applications, that would be great. I am particularly interested in interactive applications (eg, computability logic, Japaridze).

    Disclosure: I make no secret of my strategy of blurting out half-informed opinions in the company of experts. When dosed correctly, this leads to them explaining the errors of my ways and me learning a lot.

  14. “Had to be me, someone else might have gotten it wrong.” — Mordin Solus, _Mass_Effect_3_

  15. Proven correct operating systems … remind me again how that went with the old “verified” Unix kernel?

  16. “My own GPSD is a good example. Maybe it was all about mapping WiFi hotspots and geocaching and research applications a decade ago, but nowadays the known deployments include the IFF systems of armored fighting vehicles in wartime. Bugs in my stuff could kill people.”

    A malfunctioning IFF can equally well save people, namely foes identified as friends.

    It’s not a strong argument to say that malfunctioning stuff can kill people, if it was intended to kill people in the first place.

  17. >Bureaucratic requirements, oversight, and degree qualification mean that final authority slides into the hands of stupid people, as happened in the Challenger disaster.

    To an extent, rules cause stupidity, since rule-following short-circuits thinking about the problem. And rule-following is easier than thinking. Why do you think religion and government-worship are so prevalent?

  18. Bufferbloat is an interesting problem to solve, but the way it’s being described sounds a lot like “imminent death of the ‘net predicted!” — and we all know where those tend to go.

    As a member of the hacker community I’m a bit put off by this post. I think it’s unreasonably self-congratulatory.

  19. Anyone care to take a crack at the relationship between unanswerable questions about the behavior of computer programs and the possibility of Friendly AI?

    I’m interested in why the difficulty of maintenance is surprising– I have a theory.

    It’s that computers seem as though they should work.

    Science fiction writers are a reasonably bright bunch, but not only did almost no one predict the ubiquity of computers or the social use of them, no one came even close to predicting that there would be shelf after shelf of books in bookstores because the general public included a market for knowing how to wrangle their home computers.

    I tentatively suggest that computers do very simple things pretty reliably, but intuition is weak both about how hard it is to do complex things, and about how many things are complex.

  20. >Small is beautiful, but it looks like the future lies with the software behemoths.

    >No. Their way has been tried, and it failed. That is why we hold up an ever-increasing arc of the sky.

    Uh, last time I checked, Microsoft was still in existence, selling all sorts of desktop stuff. Google is still around, taking over the smartphone arena. I’m not sure about the ‘ever-increasing’ part.

  21. @LS

    More and more of the underpinnings of things are open source.

    I was there during the browser wars, which were really web server wars. Companies including Netscape gave their browser software away hoping to increase the market for their server software. The browsers, you see, had all these nifty proprietary extensions that worked best with the same brand server software.

    While Microsoft and Netscape were fighting over this, Apache stole a march on them and made the browser wars irrelevant.

    This process continues. It’s easier and cheaper to use open source software at the low levels than to re-invent the wheel.

  22. You have very good points. In my previous job, I was working for a company that provided 911 services to VoIP phone companies, and we were using Linux, Java, and a whole bunch of open source in our work. Now, I’ve worked on important projects before, but this was the first time I’d ever worked on something that literally was what Charles Stross called “SFPD” – “System Fails, People Die.” It was a little humbling.

    Now I’m working for a company that doesn’t work at quite that critical a level, but I won’t forget the mentality any time soon.

  23. >Um, so who does a better job?

    Ummm, these guys?

    If hackers want to hold up the sky, they will have to start adopting things they don’t like, like slow, methodical processes and bondage-and-discipline languages like Ada that provide strict guarantees of language semantics and don’t do stupid shit like make “smashing the stack for fun and profit” pert-near trivial.
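
    As a concrete illustration of the bug class that Phrack title refers to – and the kind of unchecked write that a bounds-checked language such as Ada is designed to catch at run time – here is a minimal C sketch. The function names are made up for illustration, not code from any real project:

      #include <stdio.h>
      #include <string.h>

      /* Classic stack-smashing pattern: a fixed-size buffer filled from
       * input of unchecked length. */
      static void greet_unsafe(const char *name)
      {
          char buf[16];
          strcpy(buf, name);            /* BAD: no bound; a long name overwrites the
                                           stack, including the return address */
          printf("hello, %s\n", buf);
      }

      static void greet_safe(const char *name)
      {
          char buf[16];
          snprintf(buf, sizeof buf, "%s", name);   /* GOOD: output truncated to fit */
          printf("hello, %s\n", buf);
      }

      int main(void)
      {
          greet_unsafe("ken");   /* short input: both versions behave identically */
          greet_safe("a name much longer than sixteen bytes");   /* truncated, not smashed */
          return 0;
      }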

    When lives and the machinery of civilization are on the line, getting strict is the only way to go.

    Software will have to become an engineering discipline.

  24. @LS
    > Uh, last time I checked, Microsoft was still in existence, selling all sorts of desktop stuff.

    Ya, cause MS has such a stellar track record of producing stable, secure software.

  25. @Jeff Read
    “Software will have to become an engineering discipline.”

    Can it?

    Can you really draw up specifications and then prove beyond reasonable doubt that your executable will correctly perform within these specifications?

    See my comments above.

  26. @Shenpen
    > if hackers go the way of the certified best-practices expert, who will invent the next future?

    To misquote: I don’t know who it will be, but they will be called hackers.

    1. >To misquote: I don’t know who it will be [who invents the next future], but they will be called hackers.

      This may have been meant as a joke, but I think it’s very true. What is more relevant than what they’re called is that they will still laugh at RFC1149 – the posture of mind, if not the name, will persist. Because without it they won’t be the kind of people who are capable of inventing the next future.

  27. @Jeff Read

    Hm. That article seems to miss the obvious difference between the shuttle team and all the other software teams it takes such pleasure in deriding: expectations, especially from management, and actual responsibility. When your manager is demanding impossible deadlines, quality will slip. When devs don’t have the corporate political clout to put a hold on things when problems arise, quality will slip.

    Given the shuttle software team has to give a full sign-off just like every other team, that means that they also have the ability to throw up a red flag and have it actually mean something. Management actually LISTENS.

    How many times have devs in the world raised flags that the suits ignored for time reasons, or because they didn’t understand?

    Isn’t that what happened with Challenger? (except not with software)

  28. >Science fiction writers are a reasonably bright bunch, but not only did almost no one predict the ubiquity of computers or the social use of them, no one came even close to predicting that there would be shelf after shelf of books in bookstores because the general public included a market for knowing how to wrangle their home computers.

    “A Logic Named Joe”?

    Much of the works of Asimov?

    The Moon is a Harsh Mistress? (Though that dealt more with the ubiquity and social use of one big computer, which you might well consider the internet to be)

    To a greater or lesser extent, all of these dealt with the bizarre failure modes of computing machines, or ways in which they could be made to function contrary to their primary users’ expectations.

    And of course let’s not forget the king of all the computing prognosticators, William Gibson, whose prediction of the social upheaval that computing would bring was so spot-on that he now writes books set in the present day, with little change in style.

  29. Brings to mind a quote by Voltaire:
    With great power comes great responsibility.

  30. The article on writing software for the shuttle sure does not match my memories. Of course, I only peripherally saw that effort in the early ’80’s when it was all assembler. And consisted literally of card decks, with more card decks of patches.

    The shuttle had six computers, basically space rated 360’s, that were set up such that four or five ran the main software and the remaining computers ran a separately written and compiled backup flight software. The backup flight software failed completely for the first couple of missions. And the main software was far from bug free.

  31. “While Microsoft and Netscape were fighting over this Apache stole a march on them and made the browser wars irrelevant.”

    The browser wars were never irrelevant. Bill Gates finally got the message that the ‘net was the future, and he couldn’t afford to let Windows users think that there were alternative companies that could produce major pieces of software for them – hence the all-out effort to squash Netscape. It succeeded.

    You have to bear in mind that there are far more clients running IE out there, than there are servers running Apache.

  32. SFPD for open-source medical device software is handled the same way it always has been for medical devices. A corporation is set up, and officers of that corporation hire technologists to assemble products out of a mix of commercially available components and contract manufacturing and jump through all the necessary hoops to get the final product certified. There are no elegant correctness proofs and no perfect-software-by-composition tools–just a lot of best-effort testing, a sprinkling of best practices, careful limitation of liability and a giant insurance policy. It’s the guy holding the certificate, selling product, and making claims of merchantability and fitness for a particular purpose who is going to get sued, and he gets to choose (and take responsibility for) who works for him, where they get their upstream software from, and how it behaves in practice. None of that process that I’ve ever seen changes significantly if the upstream software is open or closed source.

    Product certification can be bad for software quality. The certification process tends to provide incentives for things like locked bootloaders and buggy component software. If a recertification costs six figures and takes several months, you don’t make changes until you can prove that not making some change will hurt someone, and then you batch up the life-saving bug fix with any outstanding user-visible issues–if you fix those at all. If a modified device in the field can get you sued, you lock the boot loader, use cryptographically authenticated batteries, and generally make modifications to the device without prior authorization hard.

  33. “Um, so who does a better job? ”

    This cuts to the core of the Leftist critique of the institutions of Western Civilization. (This includes the voluntary associations of the civil society, such as open-source dev teams, that aren’t blessed by Soros Almighty.) For all have sinned and fallen short of some unrealistic standard, so must we elect $SAVIOR[$n] to Do It Right This Time.

    That $SAVIOR[0..n-1] also fell short of that standard is never to be mentioned.

  34. @Eric:
    “I can put in the kind of effort GPSD requires and still work on Battle For Wesnoth. I can write joke RFCs while I work on the bufferbloat problem. The choices involved are not zero-sum.”

    But every hour you spend on GPSD is an hour not spent on Battle For Wesnoth, and vice versa. The tradeoffs are indeed zero-sum. And these tradeoffs are even greater for those who have a 9 – 5 job (or longer, as many STEM jobs do) that doesn’t involve working on either. Personally, I’m trying to initiate several different personal geek projects, and the lack of time and energy has slowed them all down to a crawl.

  35. @Winter:

    The undecidability of the halting problem is not among the practical obstacles to proving the correctness of programs that solve real-world problems, just as Gödel’s incompleteness theorems are not a practical obstacle to doing ordinary mathematics (meaning algebra, analysis, topology, statistics, etc., as opposed to “meta” mathematics like logic and set theory). For decades, we’ve known about typed lambda calculi such as the Calculus of Constructions in which type-checking is decidable and every well-typed program halts, yet are so expressive that we’re nowhere near being able to describe their proof-theoretic ordinal. It’s easy, through diagonalization, to pose problems that can’t be solved in the CoC, e.g. “write an interpreter for itself”, but it’s nigh impossible to come up with one that doesn’t invoke some meta-programmatic trickery of that sort.

    The real obstacles to writing verified code that solves non-trivial problems are practical, not theoretical. Most of them boil down to “math is hard”.

  36. @James Macdonald:
    “Contrary to what Charles Murray claims our elite is not very bright, is not the cognitive elite, is considerably dimmer than it used to be, and is getting dimmer every day, as conformity and political correctness become ever more important, and ability ever less important. Indeed it has been visibly and obviously getting dimmer since around 1910 or so, and I would argue getting dimmer since 1870.”

    But what’s the driving force behind this? I confess I’ve never understood why it’s been so difficult for me and those like me (highly intelligent, highly educated at name universities, good social skills) to get into positions of authority in the organizations where we work. “It’s not what you know, but who you know,” is an old saying, but it begs the question of why the highly competent don’t seem to know the right people to be more influential. Perhaps we just think too differently from the way that they do and our interests are too different from theirs.

    /me shrugs

  37. @Daniel Franke
    “For decades, we’ve known about typed lambda calculi such as the Calculus of Constructions in which type-checking is decidable and every well-typed program halts, yet are so expressive that we’re nowhere near being able to describe their proof-theoretic ordinal.”

    A large fraction of applications contain Turing Complete scripting interpreters. When you want programming to be engineering, you cannot just limit the functionality you want to implement.

    There are some tough questions to answer when you would like to reimplement proven correct versions of, eg, VIM, SQL, PostScript, or JavaScript.

  38. > When you want programming to be engineering, you cannot just limit the functionality you want to implement.

    Of course you can. It is common practice in every discipline of engineering to make design trade-offs in the name of reliability.

    > A large fraction of applications contain Turing Complete scripting interpreters.

    Even here, the theoretical limitations don’t have significant practical consequences. You can’t use the Calculus of Constructions to prove the correctness of an interpreter for a Turing-complete language, but you can design a tiny Turing-complete language out of just a few primitives, write an interpreter for those primitives whose correctness is self-evident from inspection, and then use the CoC to prove the correctness of your transformations from a larger language down to those primitives.
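
    As a toy illustration of the “tiny language, inspectable interpreter” half of that scheme – not the CoC machinery itself – here is the classic eight-command tape language in a few dozen lines of C. The names and the demo program are made up for illustration, not anything from the verification literature:

      #include <stdio.h>

      #define TAPE 30000

      /* Interpreter for a minimal Turing-complete language: a data tape,
       * one cell pointer, and eight single-character commands.  Short
       * enough to be checked by inspection. */
      static void run(const char *prog)
      {
          unsigned char tape[TAPE] = {0};
          unsigned char *cell = tape;

          for (const char *pc = prog; *pc; pc++) {
              switch (*pc) {
              case '>': cell++;                           break; /* move head right  */
              case '<': cell--;                           break; /* move head left   */
              case '+': (*cell)++;                        break; /* increment cell   */
              case '-': (*cell)--;                        break; /* decrement cell   */
              case '.': putchar(*cell);                   break; /* output cell byte */
              case ',': *cell = (unsigned char)getchar(); break; /* input one byte   */
              case '[':                      /* if cell == 0, jump past matching ] */
                  if (*cell == 0) {
                      int depth = 1;
                      while (depth) {
                          pc++;
                          if (*pc == '[') depth++;
                          else if (*pc == ']') depth--;
                      }
                  }
                  break;
              case ']':                      /* if cell != 0, jump back to matching [ */
                  if (*cell != 0) {
                      int depth = 1;
                      while (depth) {
                          pc--;
                          if (*pc == ']') depth++;
                          else if (*pc == '[') depth--;
                      }
                  }
                  break;
              }
          }
      }

      int main(void)
      {
          run("++++++++[>++++++++<-]>+.");   /* computes 8 * 8 + 1 and prints 'A' */
          putchar('\n');
          return 0;
      }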

  39. >The Halting Problem, e.g., work starting with Turing. Then there is the work on NP complete problems, which basically says that if you enter “interesting” problems, there is no guarantee you will be able to provide a solution within the allotted time. Btw, anything containing global optimization or open search could run into some NP bounds.

    Are you trying to say that there’s proof that there’s no way to guarantee that you will find a solution to a given instance of an NP-complete problem? This is true of any problem that takes non-constant time. If P = NP then all NP-complete problems have a solution providing answers in time that grows polynomially with problem size. So it’s just plain wrong to say that there’s mathematical proof of the difficulty of these problems. We do know they are all essentially equally difficult though.

    >Together, what they say is that when starting the proof of correctness of a Turing Complete application, you cannot guarantee that you will be able to finish the proof within the allotted time.

    No, the halting problem says that there is no _program_ (for some specific definitions of “program”) that can calculate for every program whether that program will halt. But it doesn’t say anything about what subclasses that can be calculated for. It certainly doesn’t say anything about the ability of a human to provide halting proofs for these programs.
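
    For reference, the diagonalization behind that statement can be sketched in a few lines of C. The halts() stub below is hypothetical – the content of the theorem is precisely that no always-correct implementation of it can exist:

      #include <stdio.h>

      /* Hypothetical decider: pretend it returns 1 exactly when program p,
       * run on input x, would eventually halt.  The dummy body is only
       * here so the sketch compiles. */
      static int halts(const char *p, const char *x)
      {
          (void)p; (void)x;
          return 1;
      }

      /* Diagonal construction: do the opposite of whatever halts()
       * predicts when a program is fed its own text. */
      static void paradox(const char *p)
      {
          if (halts(p, p)) {
              for (;;)                /* predicted to halt -> loop forever */
                  ;
          }
          /* predicted to loop -> return (halt) immediately */
      }

      int main(void)
      {
          (void)paradox;              /* not called; run on its own text it
                                         would halt iff halts() says it doesn't */
          printf("No total, always-correct halts() can exist.\n");
          return 0;
      }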

    No idea why you’re linking this to NP-complete problems.

    >The devilish problems lie in interrupts and timing (deadlock). If you disallow interrupts and can prevent shared resources, often proofs of correctness can be done. But that leaves out most of the useful software.

    Most useful software except compcert, the Paris metro system, SEL4 kernel, various small JVM implementations, etc. In fact, it looks like we’re not too far off closing the loop to have a compilation and theorem proving toolchain that is verified. I don’t think such a general statement about some supposed inability to prove properties of interrupt-driven programs is in any way justified. Furthermore, “most useful programs” are defined in terms of a semantics that completely hides the notion of interrupts.

    I have no axe to grind here – I think most formal methods work is wasteful or even harmful. But we should be straight on the facts. And the fact that the proofs always omit certain details is simply the nature of maths.

  40. >Um, so who does a better job? You omitted to mention that other browsers have far worse records than Firefox even when you normalize for userbase size. And in general the hacker culture has a pretty enviable record – how frequent are bugs in the Internet core? Our stuff is as ubiquitous as it is now because other technical cultures failed the reliability and sustainability test.

    Sorry, that’s post-rationalisation. Yes, some things have fallen by the wayside because they were unreliable. The rise of Firefox is a direct reaction to the hideous problems of IE. But Firefox is just the Nth worst browser, not something that escapes the fundamental problems.

    >You’re probably thinking of that format-string vulnerability in ’05. Sadly, languages like LISP and Python weren’t practical then for software targeting embedded deployments, and still aren’t.

    I’m not trying to pick on gpsd, which has a record that is above par in the open source world. Maybe it was just a bad example. My point is that when programming in C, even thorough programmers don’t escape unscathed.

    >When that changes, you can expect hackers to be the people aggressively pushing the language transition.

    I doubt it. Firefox has taken decades to _experiment with_ switching to a memory-safe language. And that won’t solve all the logic bugs resulting in cross-domain JS execution they keep having. This requires fundamental design that enforces invariants at a low level. And yes, the browser may have to be simpler. So what? Bridge builders used to have to spend time under the bridge. Even now they have to “sit under the bridge” legally. Not so for software engineers (hackers included). Our ethics are so distorted that we take the upside when things work but others bear the consequences when we make recurring failures for decades. The fact that hackers have done some factor less damage than soulless companies just reflects that they are ego driven, not a fundamental ethical difference.

    1. >But Firefox is just the Nth worst browser, not something that escapes the fundamental problems

      Nothing escapes the fundamental problems.

      >My point is that when programming in C, even thorough programmers don’t escape unscathed.

      That is true. It is also true that random-input testing of C programs has shown open-source code to be significantly less afflicted by buffer overruns and memory-management issues than closed-source equivalents. It’s neither true nor useful to claim that hackers are doing a lousy job when (a) C is still the only practical language of deployment in many environments, and (b) open-source hackers are doing a measurably better job of managing C risks than anybody else.

      As an old LISP-head and a Python fan, I too would like to be shut of C’s problems. But that is not yet possible.

      >Our ethics are so distorted that we take the upside when things work but others bear the consequences when we make recurring failures for decades. The fact that hackers have done some factor less damage than soulless companies just reflects that they are ego driven, not a fundamental ethical difference.

      There is something in what you say here. But how would you fix it? Imposing liability bonding as a requirement for selling software won’t work – no vendor would assume that risk in our present legal environment, it would just kill the market for software. Some of my more zealous peers might like that outcome; I wouldn’t.

      Imposing liability bonding as a requirement for writing software in an attempt to make even open-source developers eat the risk from their own failures might be the next step, but that would be even more fraught with problems. Only beginning with the freedom-of-expression issues…

  41. Cathy said:
    >“It’s not what you know, but who you know,” is an old saying, but it begs the question of why the highly competent don’t seem to know the right people to be more influential. Perhaps we just think too differently from the way that they do and our interests are too different from theirs.

    I think you’ve got it. The people that ended up as part of the financial/political elite, apart from often being members of “the lucky sperm club”, spent a lot more time networking/hanging with others in their group, while the advanced placement/science club/rocket club/ham radio gang spent their time building things and blowing stuff up. Being smart had a whole set of social disadvantages even back then. Even in a college prep school attended by the current resident.

  42. >There are some tough questions to answer when you would like to reimplement proven correct versions of, eg, VIM, SQL, PostScript, or JavaScript.

    I think you’re pretty confused. The major obstacle for proving VIM correct would be defining what it meant to be correct, since its behaviour is mostly a hodge-podge of UI concepts.

    Not so with SQL, PostScript and JavaScript, which (given some formal semantics) have a nice definitional “net” that can catch the most egregious errors. As Daniel Franke points out, getting there with verified implementations stems mostly from the fact that constructing proofs is just hard business. And for computer programs, they are probably best automated in mechanised theorem provers since the proofs consist mostly of shallow details. The front-ends of those mechanised provers are bloated messes of complexity and the back-ends are fussy about details.

    I suggest that anyone who is interested in understanding the business of program verification download either Isabelle or Coq and prove some basic principles. Programmers may find Coq surprisingly intuitive once they have “gotten it”, even without understanding the Calculus of Inductive Constructions. Isabelle is in my opinion more approachable from a traditional applied maths background.

  43. >I think you’ve got it. The people that ended up as part of the financial/political elite, apart from often being members of “the lucky sperm club”, spent a lot more time networking/hanging with others in their group, while the advanced placement/science club/rocket club/ham radio gang spent their time building things and blowing stuff up. Being smart had a whole set of social disadvantages even back then. Even in a college prep school attended by the current resident.

    This is largely an American problem.

  44. @James Macdonald:
    > > “Contrary to what Charles Murray claims our elite is not very bright, is not the cognitive elite, is considerably dimmer than it used to be, and is getting dimmer every day, as conformity and political correctness become ever more important, and ability ever less important.

    Cathy on Monday, March 19 2012 at 2:06 pm said:
    > But what’s the driving force behind this? I confess I’ve never understood why it’s been so difficult for me and those like me (highly intelligent, highly educated at name universities, good social skills) to get into positions of authority in the organizations where we work.

    Three cases that should be studied: The Challenger disaster, Washington Mutual, and Countrywide, where demonstrably stupid people were given power and catastrophically fouled up.

    In the case of the Challenger disaster, the people who knew the Challenger was going to blow up, and wrote reports saying so, were engineers trained on the engineering track. At university they studied engineering problems. At work, their jobs depended on their ability to solve engineering problems. The people who did not know, and did not want to know, that the Challenger was going to blow up, were engineers who did postgraduate work on the business management track, trained to manage engineers, not to themselves engineer, and their careers were from the very beginning in management – they went directly from university to low level management. As managers, they were judged not on their ability to solve engineering problems, but on their ability to solve people problems – and if you are smarter than your boss, it is a problem.

    Consensus tends to be dominated by those who will not shift their purported beliefs in the face of evidence and rational argument, dominated by the evil and the insane, meaning by those who lie about what their beliefs are, and thus have purported beliefs that are unaffected by reality, and those who genuinely have beliefs unaffected by reality. To fit in with such a consensus, it helps if you are stupid. See the scientific debate on linguistic deep structure for a debate dominated by the evil, dominated by those for whom scientific theory was one more club with which to destroy their enemies in academic struggles over power and funding, and the scientific debate on animal fats in the human diet for a debate dominated by the insane, dominated by those for whom health means spiritual health, which is best obtained by not exploiting or oppressing animals. In both cases, stupidity, real or feigned, helps advance one’s scientific career. Too much smarts will incur the wrath of a small but fanatical group, which no sensible person is going to provoke.

    When selecting people for the management track, businesses very reasonably look for past experience on the management track. To him that has shall be given, to him that has not, even what little he has shall be taken away. However, success at the lowest levels of the management track is at best a poor indicator of intelligence, and in government and in large sclerotic organizations choked on red tape, is a strong negative indicator of intelligence. Dumb people thrive in an environment where there are lots of committees, and lots of time is spent attending meetings.

    To compensate for this, businesses look to universities to select management track people for them, so that only the very smart get started on that track. But increasingly universities accredit people in much the same way that large bureaucratic organizations give people experience on the management track. You are required to sincerely believe six impossible things before breakfast every morning, and it is a lot easier for stupid people to sincerely believe, to fit in.

    Observe that one can get a computer science degree from a name university, despite total lack of ability to write a non-trivial program. Course material is no longer filtering out the less clever, and political correctness and general requirements for conformity are filtering out the clever. This is a problem even with engineering courses, and it is a much bigger problem with management track postgraduate courses. If someone has a management track postgraduate degree in engineering from an elite university, he is pretty much guaranteed to be stupid, relative to an engineer from a no-name university who has successfully done some actual engineering at work – which is what we saw happening in the Challenger disaster.

    And so you have to be at least that stupid to fit in and get started on the management track.

    And that is what happened with the Challenger disaster: Murray signed off on a report by Lund that said that the Challenger was going to explode, but never understood what he was signing off on.

    If we look at Kerry Killinger, CEO of Washington Mutual, we find that Washington Mutual had internal reports saying that it was going to collapse, because its borrowers could only pay their mortgages by flipping their houses, so as soon as housing prices stopped rising, Washington Mutual’s mortgages would collapse, just as NASA had reports saying that the Challenger was going to explode, which reports were, of course, ignored.

    Let us look at Kerry Killinger’s resume, leading to his job as CEO. It is all that he served on the X committee, and was Chairman of the Y committee. Committees work by consensus, so tend to be dominated by the evil and the insane, so it helps to be stupid to fit in. His management background was a career in the track that tends to most strongly select for stupidity, conformity, and willingness to conform to what is stupid, evil and insane.

    The Countrywide disaster was in large part caused by affirmative action, and by selection for true believers in affirmative action, which is to say, for stupid people. The CEO, Angelo R. Mozilo, was an affirmative action hire, and also a sincere believer in affirmative action lending. His bank became large and powerful in large part by making special loans to politicians and regulators, by left wing politics. He made an enormous number of “VIP loans” to fellow leftists in positions of power. Since left wing politics requires sincere belief in no end of ridiculous stuff, leftists tend to be stupid, even when they do not promote affirmative action dimwits to power among themselves.

  45. > It is also true that random-input testing of C programs has shown open-source code to be significantly less afflicted by buffer overruns and memory-management issues than closed-source equivalents.

    Can you provide a citation for this? I’d like to see what their methodology was.

    There’s one particular flaw that I’m worried I’ll find. The researchers who (arguably) invented fuzz-testing in the late 80’s used the basic UNIX command-line utilities as their experimental subject. In the earlier aughties, still well before fuzzing entered widespread use, GNU fileutils/shellutils/textutils (all now known as GNU coreutils) got a similar beating from a project that ISTR was named “Bulletproof Linux”, though I can no longer find anything relevant by Googling that. If the authors of the study you’re citing selected these tools as a representative sample of OSS, then it taints the interpretation of their results: OSS may be better not because its original authors did a better job than their closed-source counterparts, but just because it was lucky enough to receive early attention targeted at correcting the very defects that the study was testing for.

    Then again, if all this is true, there’s still a good retort: the fact that OSS was the first to get all that research attention is not a coincidence, but a direct corollary to the “many eyes” principle.

    1. >Can you provide a citation for this? I’d like to see what their methodology was.

      Here’s one: ftp://ftp.cs.wisc.edu/paradyn/technical_papers/Fuzz-MacOS.pdf

      It’s from the same group at the University of Wisconsin that did the earlier fuzz testing papers, and they summarize their earlier results in the introduction. Among other things you’ll see that while both GNU and non-GNU open-source code had failure rates in the single-digit range (6% and 9% respectively), closed-source CLI code had dramatically higher rates of failure (15%-43%). Thus, whatever variation might plausibly be accounted for by GNU pre-hardening is swamped by the effects of open-source vs. closed.

      I think the kind of effect you’re thinking about does show up in their MacOS X testing, however. The failure rate in MacOS X CLI tools was 7%, which I take to be the result of starting with the open-source BSD toolset and then applying CLI fuzz testing internally. Even so it’s notable that Apple wasn’t able to pull the defect rate low enough to match that of fully open-source development.
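
      For anyone curious, the style of testing in those papers is simple enough to sketch: feed a command-line program bursts of random bytes on stdin and count the crashes. A minimal harness follows; the ./victim path and the trial count are placeholders, and this is nowhere near as thorough as the Wisconsin tooling:

        #include <signal.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/wait.h>
        #include <unistd.h>

        #define TRIALS 100      /* arbitrary */
        #define MAXLEN 4096     /* arbitrary */

        int main(int argc, char **argv)
        {
            const char *target = (argc > 1) ? argv[1] : "./victim";  /* placeholder */

            signal(SIGPIPE, SIG_IGN);        /* don't die if the target exits early */
            srandom((unsigned)getpid());

            for (int i = 0; i < TRIALS; i++) {
                int fd[2];
                if (pipe(fd) < 0) { perror("pipe"); return 1; }

                pid_t pid = fork();
                if (pid < 0) { perror("fork"); return 1; }
                if (pid == 0) {              /* child: run the target with the pipe as stdin */
                    dup2(fd[0], 0);
                    close(fd[0]);
                    close(fd[1]);
                    execlp(target, target, (char *)NULL);
                    _exit(127);
                }
                close(fd[0]);

                char buf[MAXLEN];            /* parent: feed it a burst of random bytes */
                size_t len = (size_t)(random() % MAXLEN) + 1;
                for (size_t j = 0; j < len; j++)
                    buf[j] = (char)(random() & 0xff);
                ssize_t ignored = write(fd[1], buf, len);
                (void)ignored;               /* the target may already have exited; fine */
                close(fd[1]);

                int status;
                waitpid(pid, &status, 0);
                if (WIFSIGNALED(status))     /* death by signal (SIGSEGV etc.) counts as a crash */
                    printf("trial %d: %s killed by signal %d\n", i, target, WTERMSIG(status));
            }
            return 0;
        }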

  46. >You’re probably thinking of that format-string vulnerability in ’05. Sadly, languages like LISP and Python weren’t practical then for software targeting embedded deployments, and still aren’t. When that changes, you can expect hackers to be the people aggressively pushing the language transition.

    Ada provides run-time protection against this sort of thing, is designed to be practical in embedded deployments, and yet is not being aggressively pushed by hackers.

    Yes, writing Ada is hard. It’s not an instant-gratification language. The compiler will keep you on the bounce. In exchange, you will save a lot of time, effort, and ultimately money on the back end by not having to test for a zillion little bizarre failure modes you shouldn’t expect to happen in the first place.

  47. @LS

    >You have to bear in mind that there are far more clients running IE out there, than there are servers running Apache.

    Microsoft never charged extra for IE. Netscape officially charged, but gave you an infinite length evaluation period. Both charged big bucks for server software from the get-go.

    Netscape also shot themselves in the foot by completely rewriting their browser in Java, throwing out many features that had helped the earlier version render pages more quickly and introducing a slew of nasty bugs.

  48. >Can you provide a citation for this? I’d like to see what their methodology was.

    One that comes to mind is this: https://www.gnu.org/software/reliability.html

    It’s a fairly old study, so it’d be great to see something more recent, but I don’t expect the outcome to be drastically different, at least not in the favor of proprietary software.

  49. @esr
    Something you said recently that is related:
    >I say it would be hideously irresponsible because our economy and civilization now depends
    >on the Internet; shutting it down even transiently would cost staggering amounts of economic
    >losses and probably human lives.

    I wonder if politicians and various other forms of “Them” dimly realise that, while you* might not be able to shut it down completely, you could wreak a lot of havoc if it came down to it; and, knowing nothing else but control, they pull stunts like SOPA and whatever is coming next (in addition to being in the MPAA’s pocket).

    As far as actually Galting the Internet goes, remember that things were not exactly wonderful at the end of Atlas Shrugged either; I would rather have a Cthulhu goatse(tm) burned into my brain than have to make that decision.

    *the collective you

  50. >Nothing escapes the fundamental problems.

    Please don’t hassle me on language. I’m talking about memory safety and separation of domains, and these can be escaped by separation at a basic level (language or runtime). Java hasn’t escaped these problems because it’s too big and tries too hard to be efficient.

    >That is true. It is also true that random-input testing of C programs has shown open-source code to be significantly less afflicted by buffer overruns and memory-management issues than closed-source equivalents. It’s neither true nor useful to claim that hackers are doing a lousy job when (a) C is still the only practical language of deployment in many environments, and (b) open-source hackers are doing a measurably better job of managing C risks than anybody else.

    Your fatal assumption here is that programs should be written for maximum platform penetration. To me this is just building systemic risk. Now that we’ve infected all these platforms with broken code how are we going to get it out? Furthermore you ignore Forth (Chuck Moore’s idea of Forth, not the bloated Forths), which can run on virtually anything and has been used on the most resource-constrained devices. It’s just hard to proliferate complexity with Forth, which is why it is an unpopular choice with “hackers”. Your own criticisms of C++ apply equally to C; it’s just that C is at your chosen level of complexity.

    >As an old LISP-head and a Python fan, I too would like to be shut of C’s problems. But that is not yet possible.

    This is basically just an excuse. Firefox is not a tightly engineered program that inherently needs to be written in C. It should in no way be construed as “efficient”. It just does the same things I could do more than a decade ago, slower. It’s simply the case that it’s easier to have a hugely complex program thrown together with a bunch of C libraries than it is to build something simple that doesn’t come with unintended features that serve criminals instead of the user. The standards touted by open source people are paradoxically barriers to entry. How long would it take to write a fully compliant web browser from scratch? And yet given basic OS-style services such as hardware abstraction, most of the applications built on the web browser could be built easily in a small amount of assembly, Forth or other simple executional mechanism. We just can’t see it because we think it’s important to have layers of nonsense on top.

    There are lots of putative reasons to have all this complexity, e.g. accessibility. I can tell you as someone who cannot use a computer without magnifiers and zoom that these features are universally sloppy. If you want to build a good system, you have to build it scoped to purpose with a clever and simple design (e.g. magnification on the video card). Which is antithetical to modern productivity-focused software engineering.

    There is something in what you say here. But how would you fix it? Imposing liability bonding as a requirement for selling software won’t work – no vendor would assume that risk in our present legal environment, it would just kill the market for software. Some of my more zealous peers might like that outcome; I wouldn’t.

    I understand that you’re an economics/politics enthusiast, but to me it’s not a question of making a rational economic or regulatory analysis. We simply need to know that this ethics has to come to pass at some point. So we should commit ourselves to action that promotes these ideas, even if it comes at the expense of our own economic well-being. This is the fundamental reason why I like RMS’ style more than yours; he promotes an ideal, not economically rational action focused on production and market penetration. His ideology may not be effective in the short term, but his doctrine will be passed onto future zealots, unlike the economic rationalisations that can be extinguished quickly by expediency or (more likely) regulatory action. The costs of such superficially rational actions are often hidden, such as what we are seeing with the infestation of software and hardware systems “too important” to scrap and start again. I am apolitical – to me this is the same risk problem as big government and big corporations.

    Imposing liability bonding as a requirement for writing software in an attempt to make even open-source developers eat the risk from their own failures might be the next step, but that would be even more fraught with problems. Only beginning with the freedom-of-expression issues…

    I don’t see how it would be problematic at all. It would harm neomaniacal computerisation. People would still build tools that solved genuine problems. Games and other entertainment can be put on dedicated devices with separation between code and save-game data. You download games on your secure shopping device, which would be ruthlessly simple and secure by design. These are then loaded onto cartridges that are write-only when you plug them into the game system. They can then be as insecure as you like. We had a similar setup with game consoles in the past, but these are increasingly turning into locked-down network PCs. Platforms have taken on a memetic quality and started to reproduce and grow for their own benefit. Arguing for freedom to make people dependent on technology and then bollocks it all up is not convincing to me.

    I think a regulatory agency for this problem would be disastrous and would become its own self-perpetuating entity. Some legal liability or inculcation of a viable system of ethics could change things for the better. But I’m not aware of more than a few influential software people who understand the problems and I can only think of one who has actually rejected mainstream software significantly. Any attempt will be resisted by programmers addicted to fictitious “progress”.

    1. >Your fatal assumption here is that programs should be written for maximum platform penetration.

      That’s not my assumption, it’s the economics talking. Single-platform code doesn’t tend to thrive – it has trouble attracting the financial mass (on the proprietary side) or the volunteers (on the open-source side) to sustain development.

      This also explains something you complain about later, platforms proliferating as ends in themselves. What’s really going on is people deciding that the front-loaded advantage of avoiding the NRE (non-recurring engineering cost) of building a new platform is greater than the time-discounted cost of bugs inherited via an existing one. Sorry, but though this judgement may irritate you it is almost always correct. The few exceptions happen in the chaos around major technology transitions and are usually the result of harsh limits in early versions of the new technology that prevent the old software platform from being ported to it at all.

      >It’s just hard to proliferate complexity with Forth, which is why it is an unpopular choice with “hackers”.

      This claim is just bizarre. I was there for the first wave of Forth and hackers loved it – for its simplicity. I did a little programming in it myself, and the person who introduced me to it was the closest to an individual hacker mentor I ever had (he described it, not inaccurately, as “LISP without parentheses.”). To this day I don’t know why Forth never took off, but it wasn’t what you think.

      >We simply need to know that this ethics has to come to pass at some point.

      Maybe. In the long run, ethics and economic efficiency have to coincide. It cannot be otherwise, because economic inefficiency is not sustainable under competition and what “ethics” is really about is sustainable reciprocal exchange in situations where search costs are high and there’s no unit of account. If it isn’t economically efficient for programmers to bear the costs of software flaws, either the ethic you want will never emerge at all or it will be extremely fragile and not last long.

      I guess I need to put some serious thinking time into this problem. I’m sympathetic to some of your critique, but I think we have a system where users eat software error costs because as much as that sucks none of the alternatives are actually workable. Please prove me wrong by making a large fortune from your insight.

  51. > But I’m not aware of more than a few influential software people who understand the problems and I can only think of one who has actually rejected mainstream software significantly.

    I take it you’re referring to DJB?

  52. > Another factor in the rising use of Linux on general-purpose chips for embedded work is that it’s simply becoming difficult to find the people who know how to write machine code for old-style 12- and 14-bit PICs. There were never very many such people, and an increasing number have been lost to attrition. Even device manufacturers who would like to do things the old-school, pre-Linux way have increasing trouble putting together teams.

    If the Arduino forums are anything to go by, there are lots of new microcontroller developers training up. Some of them are as young as 9, starting out by building and programming decorative blinking LED arrays and modifying open-source-hardware clocks. Many modern devices contain a number of PIC-style chips, so somebody is clearly still programming them. There’s at least one inside every modern PC and another inside every laptop battery pack. Some ARM SoCs have one or more PIC-style processors embedded in the chip to take over the system while the ARM core is powered down (usually all they do is power it back up again, but the processors can interpret data from serial buses so the ARM core can stay off until something really interesting happens). Cars are utterly saturated with the things, like a rolling swarm of insects chatting with each other over CANbus. If anything, the evidence suggests we should expect more people programming PIC devices now than ever before.

    The problem with old-school PIC devices for embedded work is that the returns from industrialization hit their limits a decade or two ago. Prices can’t get much lower, and technical gains are going into lowering the already tiny power consumption instead of larger RAM/ROM size or higher CPU performance. To get audio or networking or wireless or a display on these devices, you have to glue on a bunch of interface chips which not only perform hardware-level interfacing, but predigest the data (often with their own embedded CPU) so the PIC-sized master CPU can cope with it. You use an Ethernet or WiFi interface hardware module that implements a TCP stack which you access over a 115kbps serial interface with AT commands. You access SD/MMC storage devices one bit at a time. You make day-to-day software design decisions like “play audio or update the display? I don’t have enough RAM or CPU to do both.” It’s just like programming desktops in the 1980s – but at that time the tiny microcontrollers inside them happened to also be the best general-purpose embedded technology available within orders of magnitude of the price, and now they are not.
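
    (To make “one bit at a time” concrete: below is a sketch of the kind of bit-banged SPI transfer a PIC-class part without an SPI peripheral is reduced to. The SCK/MOSI/MISO macros are hypothetical placeholders for a particular chip’s port registers, stubbed out so the fragment stands alone.)

        /* Hypothetical stand-ins for a specific chip's port registers; on
           real hardware these would poke GPIO bits. Stubbed here so the
           sketch is self-contained. */
        static int sck_pin, mosi_pin, miso_pin;
        #define SCK_HIGH()    (sck_pin = 1)
        #define SCK_LOW()     (sck_pin = 0)
        #define MOSI_WRITE(b) (mosi_pin = (b))
        #define MISO_READ()   (miso_pin)

        /* Bit-banged SPI mode-0 byte exchange: present one outgoing bit on
           MOSI, raise the clock, sample one incoming bit from MISO, lower
           the clock - eight times per byte. */
        static unsigned char spi_xfer(unsigned char out)
        {
            unsigned char in = 0;
            int bit;
            for (bit = 7; bit >= 0; bit--) {
                MOSI_WRITE((out >> bit) & 1);
                SCK_HIGH();
                in = (unsigned char)((in << 1) | (MISO_READ() & 1));
                SCK_LOW();
            }
            return in;
        }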

    Today, for what you’ll pay for all that extra interface hardware and interconnect costs (not even including the much-larger constrained-environment software development costs), you could buy modern ARM SoC devices which include popular interface hardware or an FPGA to roll your own. The devices include ten thousand times more computer than an old-style 12-bit PIC, and they are powerful enough that you can run Linux and get working TCP/IP and filesystem stacks.

    I’d say the use of Linux is just a standard risk/benefit/cost trade-off, and it’s not an all-or-nothing one either. Bare-metal programming can be mixed with embedded OSes (e.g. bootloader splash screens, or rtlinux-style real-time hypervisors). Different OSes can run on each core of multi-core CPUs, so Linux gets one core for networking, UI, and so forth, while a hard-real-time bare-metal app runs on the other core collecting data, controlling robots, and so forth.

    1. >If the Arduino forums are anything to go by, there are lots of new microcontroller developers training up.

      An interesting and quite possibly valid point, but if so it’s not being reflected in embedded platform choices yet.

      The kind of patchy, multiple-processor systems you describe have been seen before. They don’t tend to last – it’s more efficient to migrate all that stuff into the central CPU, while pouring money into process improvements to optimize whatever figure-of-merit (speed, low power consumption, whatever) you’re trying to maximize. Which is why it’s a good time to buy ARM stock.

  53. > I take it you’re referring to DJB?

    DJB certainly understands good design. And designing good things inside complex and broken systems is worth doing. But I want to see (almost) total rejection of the complex mainstream systems, e.g. Chuck Moore. I would like to see more, smaller operating systems. Moore thinks you should build your own Forth. Indeed, this is easy to do. It's just hard to interface with modern hardware, which is broken to the point where it can't be used without a thick layer of undocumented drivers. Contrast with VGA, which gives you most of the bare essentials in a manner that is uniform across almost every card. People will no doubt come up with reasons why it has to be hard to tell a piece of hardware to copy some coordinates out of system memory into its own memory, and so the problem will continue.

    I am not interested in complexity for the sake of compatibility. It's usually just an excuse to abandon artistry and avoid building complete, working artifacts. All you need is good documentation and a simple system. If you want two pieces of software to talk you can build another simple artifact for compatibility, since the two bits of software are simple in the first place. Current software makes it too easy to increase complexity. Simple things should be easy, complex things should be hard. We have the reverse right now.

    It's important to be careful about "simplicity". This doesn't mean "easy". JavaScript and Lua are not "simple" unless you completely abandon performance. You need a complex implementation to make them fast, and they are only optimised for throughput anyway. And yet most people aren't doing bulk numerical calculations with their computers, and those who are can easily use simple, strongly-typed languages to do so.

  54. > To this day I don’t know why Forth never took off, but it wasn’t what you think.

    Could’ve been sheer luck. How many hackers were in the market for a programming language at the time? A few thousand, maybe? There is no law of the universe compelling every hacker to try every language, let alone long enough to rationally judge its general use. Who happens to try which language first has a greater effect over a smaller pool of hackers than a larger one.

    It could also be made worse by the “NYT is the source of record” problem. If C is good enough for K&R, say, and enough hackers think enough of K&R to think whatever K&R use is good enough for them to try, and it turns out in fact to be good enough, then it may not matter how perfect Forth is. It also may not matter how inherently discerning hackers might be, if their focus of discernment is on making stuff go with some language as opposed to trying a new language.

  55. > I think a regulatory agency for this problem would be disastrous and would become its own self-perpetuating entity. Some legal liability or inculcation of a viable system of ethics could change things for the better.

    Considering that at this point, the US legal landscape seems to be shedding corporate liability like a moldy pair of socks… I don’t see this coming to pass any time soon.

  56. Shenpen> …the kind of boring and nitpicky people mechanical engineers tend to be.

    Ouch. I will try not to take that personally.

    LS> You must become ENGINEERS.

    As mentioned in a previous thread, I don’t think software and engineering are a good fit. Engineering relates to the physical world. Programming relates to logic. This means they differ greatly when it comes to testing and to project cost and timing, to name two significant examples.

    If engineering is applied physics then programming is applied mathematics(?). Different animals. Programmers need their own word.

    ESR> Imposing liability bonding as a requirement for selling software won’t work – no vendor would assume that risk in our present legal environment, it would just kill the market for software. Some of my more zealous peers might like that outcome; I wouldn’t.

    The liability and the rules (strict best practices, legislation, professional associations, etc.) will come when the cost of not having them gets higher than “society” wants to pay, i.e. when enough people die and enough money is lost. There will not be a careful economic analysis. “Something must be done” is what will be said.

    For example, you would not believe the hassle involved with designing and manufacturing pressure vessels. Is it worth it? Maybe, they tend to explode less in modern times. It also makes them very expensive. As a secondary effect, how dynamic is mechanical engineering relative to 100 to 200 years ago?

    I find this very depressing, because I expect to see programmers become boring and nitpicky people in my lifetime.

    1. >I find this very depressing, because I expect to see programmers become boring and nitpicky people in my lifetime.

      To quote Douglas Adams, “There is another theory that states this has already happened.” :-)

  57. @Jeff Read:
    “Yes, writing Ada is hard. It’s not an instant-gratification language. The compiler will keep you on the bounce. In exchange, you will save a lot of time, effort, and ultimately money on the back end by not having to test for a zillion little bizarre failure modes you shouldn’t expect to happen in the first place.”

    Then why didn’t Ada ever catch on in the business world, where the business and finance types, not engineers, make these decisions? It isn’t just the open source world that rejected Ada, but the whole software engineering space, barring some edge cases concentrated in government.

    I remember taking a required Ada class in a job many years ago, paid for by the employer (under government contract), and then never being asked to use it on a project. Back when I was doing contract programming work, I can’t remember hearing of any demand for Ada whatsoever, or I’d have considered applying at a suitably high hourly rate.

    Don’t blame the geeks and hackers for the failure of Ada. It was a nice idea that just didn’t pan out.

  58. @Roger Phillips:
    “I’m talking about memory safety and separation of domains, and these can be escaped by separation at a basic level (language or runtime). Java hasn’t escaped these problems because it’s too big and tries too hard to be efficient.”

    There has never been a time in software history that efficiency hasn’t mattered. No matter how fast the underlying hardware becomes, we ask it to do more and more. And since new features sell software more effectively than reliability and lack of bugs, the economic incentives favor adding new features in release N+1 instead of fixing bugs. (See Microsoft Windows.)

    Today, when the big push is in embedded devices rather than desktops, the problem is even worse. Android Dalvik is designed to avoid some of these problems with a nice VM, but to get decent performance, you have to download the NDK and write C code anyway.

  59. I took Ada back in college as well. (Well, it was a general languages course, but we had to write Ada code.) I remember having to specify every little thing, even for relatively simple tasks like reporting something to stdout. I’ve been a program correctness geek ever since I learned programming, and I didn’t feel particularly taken care of with respect to catching bugs – Ada required me to say this or that, but I could intuitively see how I could tell it to end a for loop too fast or write threaded code that would subtly deadlock.

    And still, I had to write a Lem novel for every loop. I got nice and sick of it. And then I learned Ada was developed by the Defense Department, and it all dawned on me:

    I was filling out a form.

    Never wrote any Ada code since.

  60. > I guess I need to put some serious thinking time into this problem. I’m sympathetic to some of your critique, but I think we have a system where users eat software error costs because as much as that sucks none of the alternatives are actually workable.

    You say they’re not workable or “efficient”. I am not claiming that they are, nor that they should be. The pursuit of efficiency is the root of the problem, because it has led to the belief that problems with complex software can be solved with yet more complex software. Your economic insights are shortsighted. We don’t have to change the world now, we just have to preserve the ideals long enough that they can some day grow. According to your economically rationalised position, librarians, monks and other curators of knowledge should have abandoned the stores of classical literature to go work in the fields for short-term economic good because it would make more short-term economic sense. And yet the world would be tremendously poorer intellectually for the loss of these works.

    Your strategy is fragilising, because you can’t “compensate” fragility out of a system; it has to be structurally sound. Compensation just causes a bigger blow-up. I assume you understand that tumours must be excised with haste and that deficits should be resolved promptly. But you seem to think software should be grown and grown based on some rationalisation or another. Adding more drugs can’t counteract the problems of unhealthy overeating and lack of exercise. Likewise, pretty features whose purpose is to satisfy pointless “neomania” cannot justify systems that create fragility. The main point here isn’t even software defects, it’s the fact that this approach to software compounds the difficulty of starting from scratch, which is a systemic risk.

    The fact that people will willingly buy cigarettes doesn’t change the fact that tobacco companies are detestable. I personally don’t care if someone wants to smoke their lungs out until they die. But a society that thinks nothing of the large-scale peddling of cigarettes or any other harmful product is another matter. This is not the same thing as growing some tobacco in your yard for the use of you or your friends, just as Friday night poker is not the same as mass-produced poker machines.

    > Please prove me wrong by making a large fortune from your insight.

    I can see that you’re trying to hoist me by my own petard here, but I’m not making a prediction on the future, nor am I interested in present economic realities. The problems have already happened. You yourself admit that we are critically dependent on computers. My argument is for nothing more than the removal of this dependence. That we should try to do this requires no more “proof” than the insight that we should try to move a treasured piece of glassware away from the edge of the table. Arguing about cost/benefits is nonsensical. If the vase is an heirloom then its destruction cannot be undone by creating some other, different good. So it’s you who needs to prove your economic arguments for maintaining the status quo. Once we push computing off the cliff no amount of supposed “goods” (most of which serve computing, not people) will buy us back up to the top.

    You clearly like to wear many hats. I think there is a time for wearing the hat of an idealist transcending expediency, even if it leads to personal failure.

  61. @Jeff Read:
    “This is largely an American problem.”

    Nonsense. Take Angela Merkel out of the sample — a sample of one proves nothing — and look at currently serving Western politicians, recently-served politicians, or electable candidates, and you don’t see much in the way of STEM backgrounds. Blair? Cameron? Sarkozy? Going all the way back to Thatcher is a cheap anti-American shot, because then you’d have to add Carter and his engineering background as well.

    The one exception seems to be economics; while it’s not a STEM field and is not as rigorous as hard science, it’s still a cut above law, politics, and the humanities in terms of having a logical, mathematical basis; yet it seems to have produced a significant number of influential leaders (for better or for worse).

    What Jeff’s linked NYTimes article mostly shows is that East vs. West cultures and nations have different attitudes toward selecting leaders, which isn’t surprising as it’s rooted in their complex, differing histories.

    Of course, the article was also sloppy about lumping together scientists and engineers, despite their very different mindsets…

  62. @Roger Phillips
    > According to your economically rationalised position, librarians,
    > monks and other curators of knowledge should have abandoned the stores of
    > classical literature to go work in the fields for short-term economic good

    You’re overly generalizing, and assuming that you can’t hold both an ideal and an effective method for practicing it in the same breath. The way you hold up the ideal makes you sound a little uncomfortably like a religious martyr-hopeful.

  63. “Being smart had a whole set of social disadvantages even back then. Even in a college prep school attended by the current resident.”

    I keep quoting Ponella’s Law: “People are smart and stupid at the same time.”

    Being good at STEM is very nice, but the socially adept are very smart in their own way. You might want to consider that all those people that ‘got ahead’ are actually smarter than you are.

  64. @Roger Phillips:
    “The pursuit of efficiency is the root of the problem, because it has led to the belief that problems with complex software can be solved with yet more complex software.”

    Sounds to me like Roger has problems with the fundamental basis of modern Western culture, and his issues with software are just a minor sideshow.

  65. > There has never been a time in software history that efficiency hasn’t mattered. No matter how fast the underlying hardware becomes, we ask it to do more and more. And since new features sell software more effectively than reliability and lack of bugs, the economic incentives favor adding new features in release N+1 instead of fixing bugs. (See Microsoft Windows.)

    The fact that people push for more inefficient and prettier ways to do essentially the same things is begging the question. People ask for prettier things because we as software engineers have created a fashion trend that says that pretty things are important. If people started to market complex and pretty but unsafe hammers we would rightly condemn it as idiocy. But in software it is rationalised away as “necessary”. Chuck Moore may be “economically ineffective”, but he’s worth 100 drones who just keep accepting more and more burdensome decorations. Computers can and should create their own aesthetic, one suited to the medium. Oil paintings don’t aim for perfect realism or resolution, and neither should computers aim for the highest emulation of reality and sterile specifications. They just need to be easy to read, easy to use, and to serve some human purpose. And to the extent that rational economic justifications are used to perpetuate this, they should be regarded as nonsense.

  66. > Sounds to me like Roger has problems with the fundamental basis of modern Western culture, and his issues with software are just a minor sideshow.

    Yes, but I work with software so I am empowered to push against the cliff face.

  67. > Your fatal assumption here is that programs should be written for maximum platform penetration. To me this is just building systemic risk. Now that we’ve infected all these platforms with broken code how are we going to get it out?

    You’re conflating two important issues that both have known engineering resolutions. The first is that when you have any component, there are a number of ways to reduce the risk of failure. Diversity can be a viable approach, but when you’re dealing with a good like software that can be replicated for free, economic pressures discourage multiple competing implementations of the same functionality. In this case, it might be better to heed Pudd’nhead Wilson: “the wise man saith, ‘Put all your eggs in the one basket and—WATCH THAT BASKET!'”

    The second is the question of how to make sure that a component can in fact be replaced if it becomes necessary, and the well-known answer here is API modularity. Eric’s written about the case study of source-code compatibility of Unix programs, and how using common interfaces permitted, for example, replacing 3DES with AES while most of the programs using {OpenSSL,nspr,Java Crypto} remained oblivious.
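
    (A minimal sketch of what that modularity buys, assuming OpenSSL’s EVP interface as the common API; the specific ciphers below are illustrative, not recommendations. The encryption routine never needs to know which algorithm it was handed, so swapping 3DES for AES is a change at one call site.)

        /* Cipher-agnostic encryption through OpenSSL's EVP interface. The
           caller chooses the cipher; this routine never changes whether it
           is handed 3DES or AES. Error handling is abbreviated; key and IV
           lengths must match the chosen cipher, and out must have room for
           in_len plus one cipher block. */
        #include <openssl/evp.h>

        static int encrypt_buf(const EVP_CIPHER *cipher,
                               const unsigned char *key, const unsigned char *iv,
                               const unsigned char *in, int in_len,
                               unsigned char *out, int *out_len)
        {
            EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
            int len = 0, total = 0, ok = 0;

            if (ctx != NULL
                && EVP_EncryptInit_ex(ctx, cipher, NULL, key, iv) == 1
                && EVP_EncryptUpdate(ctx, out, &len, in, in_len) == 1) {
                total = len;
                if (EVP_EncryptFinal_ex(ctx, out + total, &len) == 1) {
                    total += len;
                    *out_len = total;
                    ok = 1;
                }
            }
            if (ctx != NULL)
                EVP_CIPHER_CTX_free(ctx);
            return ok;
        }

        /* Swapping algorithms is a one-line change at the call site:
             encrypt_buf(EVP_des_ede3_cbc(), ...);   yesterday
             encrypt_buf(EVP_aes_256_cbc(),  ...);   today
           Nothing downstream of the common interface needs to know. */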

  68. > You yourself admit that we are critically dependent on computers. My argument is for nothing more than the removal of this dependence.

    I had managed to miss this. It appears that either you’re unfamiliar with the vast economic efficiencies that computerization has made practical, or you are advocating a complete economic rollback to the 1960s. There simply isn’t a way to both gain the advantages of computerization and prevent a massive disruption in the event of its mysterious disappearance.

    Nice piece. I cried. Coping with the stress of knowing that stuff I worked on… runs in the B2, and at least one satellite, and in multiple other mission-critical systems, and knowing intimately what all the bugs were then and discovered since, is one of the main reasons why I tried to leave the field entirely, a few years back.

    Nowadays I get by on the same answer that Feynman came up with (after spending 10 years depressed after building the Bomb) – “Disregard”. If you can’t accept fundamental irresponsibility for at least some of the consequences, it becomes hard not to jump at shadows. Hell, I’ve become so acutely aware of all the risks in all the systems that we depend on that I abhor driving, and prefer to live in a society with fewer interdependent links to technology.

    And I note that this neat toy, Linux, is not just in airliners, medical systems, metal-cutting robots, and cellphones… It’s in spacecraft. Multiple satellites and now, multiple rockets. It’s not only used by Armadillo Aerospace in their attitude adjustment system (and elsewhere); SpaceX is building stuff around it, too.

    Anybody up for a job programming Linux into the greatest rocket ship anyone is planning to fly? http://www.spacex.com/careers.php

    I’ve been thinking about it. (Space is the only thing I care about more than the Net.)

    Ubiquity may be our best revenge, but it carries responsibilities. And nightmares, like this one:

    (the best 2:50 of my life http://www.youtube.com/watch?v=Dhci-93Xnxw – but the bug shown at 2:52 cost millions, involved shifting a few bits to the left, and was probably fixed long before the rocket finished burning up in the atmosphere)

    (and wasn’t my fault, so *I* at least can sleep at night. Usually. But what if people were riding on that?)

    Maybe, confronting the future, and changing it, is the job of the young. But I bear enough responsibility for the future we live in now, to think: “if not now, when? If not me, who?”
    Are we passing on the skills – the needed precision for delivering truly reliable systems that can drive our spacecraft (and cars!) to the next generation? Are society’s managers and politicians truly aware of just what could go wrong with a major solar flare or EMP?

    Getting a little more support, and respect for holding up the sky, and getting past it, would be good.

    1. >Nice piece. I cried.

      Perhaps appropriate, since the remark that triggered it was yours.

      >Maybe, confronting the future, and changing it, is the job of the young. But I bear enough responsibility for the future we live in now, to think: “if not now, when? If not me, who?”

      Exactly. And if open-source hackers don’t take the burden of holding up the software sky (by which I mean shared critical infrastructure like the Internet), who else is going to do it? And how much worse will the consequences of failures be? By actual measurement, we deliver reliability better than anyone else. With that power comes responsibility.

      >Are we passing on the skills – the needed precision for delivering truly reliable systems that can drive our spacecraft (and cars!) to the next generation?

      All we can do is set a good example, practically and ethically, and hope. I deal with a lot of younger hackers and wannabes and what I see makes me cautiously optimistic on this score.

      >Are society’s managers and politicians truly aware of just what could go wrong with a major solar flare or EMP?

      No. No, in general they aren’t. Though there are some signs of awareness stirring; there’s an exercise going on in the U.S. right now to scope the difficulty of moving replacement high-power transformers in the wake of something like a repeat of the Carrington Event.

  70. LS said: If damage results, someone will have to pay.

    The entity that sold the device that caused the damage gets to pay.

    After all (to pick a relevant example here), GPSD expressly comes with no warranty that it contains no bugs – the guys who chose it as the foundation for an IFF system knew that, and chose it anyway.

    Legally, they’re responsible (as well as morally, unless the GPSD guys deliberately ignored a bug and denied its existence and knew it would be relevant to this case and cause that damage).

    Try suing ESR (eg.) for a GPSD failure in some device Random Company A sold you, and the courts will quite reasonably laugh at you. Even juries will, I imagine – and if they don’t, a few new laws about indemnity will fix that right up.

    Demand all the “engineering” you want, and all you’ll do is slow things down. You’ll still get bugs and damage, and the “process” crap will slow the fixes down, too.

    There are very few places where software achieves a “no bugs ever” status – and in those places it’s insanely expensive and slow to develop. Which is fine, when it’s avionics* or a reactor control system. Not for “everything” because “there might be some damages sometime”.

    (* I’ve heard second-hand tales of avionics development. Multiple, parallel implementations on different platforms, all tested for 100% code coverage against the most ludicrously comprehensive test suite ever.

    Apply that model to everything? No. I’ll take my chances with “damages” rather than deal with the sclerosis that would necessarily apply to the entire software world – which today means nearly the entire world.)

  71. > You’re overly generalizing, and assuming that you can’t hold both an ideal and an effective method for practicing it in the same breath. The way you hold up the ideal makes you sound a little uncomfortably like a religious martyr-hopeful.

    You can’t execute simplicity using complex methods. Sometimes you can carve out a simple niche in a complex ecosystem. Nowhere am I saying this is a bad thing. The fact remains that most are simply proliferating more of everything. Look at what Facebook has done to social interaction. Unlike this forum, which is in the style of the old salons, Facebook is a circus of operant conditioning, starting with the “Like” button. If I told you that my family had a habit of bringing all our photo albums into a room and putting little ratings on the photos of each others’ children, then you would probably think I was a bit odd. And yet this kind of absurdity has been normalised by computer systems like Facebook that promote the idea of information (ratings) just for the sake of it.

    I’m also not saying everyone has to drop everything and write their own operating system. In the current environment few have that luxury. My posture is extreme because the problem is entrenched to the point where you have to yell at people to get them to see through the piles of self-justifying “solutions”. That problem is more acute for people who need to believe in the worth of these things to obtain employment. This is why hackers can see the problems with academic approaches to problem solving (even if their reasoning is superficially “faulty”), but academics can’t. Eric (rude to talk about you in your presence, I know) is financially independent. I don’t think he has to believe in all this stuff in any material sense, which is the main reason I bring this up here. But he has stated openly in the past a belief in some dangerous epistemological illusions (operationalism or something similar). I can only hope that his mental trajectory leads him to take meaningful action on this issue and avoid more intellectual excuses. I don’t define “meaningful” to mean “marketable”. I’m thinking on the scale of 100+ years.

    I have no intention of being “martyred”, whatever that means in this context. I have arranged my own financial security. But I have left academia and turned down job offers in industry in favour of lower paying but more flexible working habits in order to avoid contributing to the problem. That your knee-jerk reaction to a genuine attempt to address issues is to see it as a kind of pretentious display is a sad sign of our ethical deterioration.

  72. > I had managed to miss this. It appears that either you’re unfamiliar with the vast economic efficiencies that computerization has made practical, or you are advocating a complete economic rollback to the 1960s.

    I’m sorry to be rude Christopher, but please read my post again. I am arguing against efficiency. And I am not arguing for some specific rollback. If even 100 people reject mainstream programming and work on simpler systems it will not produce such a thing. But the construction of even such a small ecosystem would make many things possible that are impossible in our complex systems.

    1. >I am arguing against efficiency.

      Reading back in the thread, I think you and I may be using the word “efficiency” in different senses and that contributes to misunderstanding. I have not been speaking of “efficiency” in the CS sense that one can say a compiled C program is more efficient (e.g. sparing of machine resources) than a Python one.

      On that level, I too am against efficiency. Programmers in general (not just the open-source ones) still overvalue it as a legacy from times when processor cycles were much, much more expensive than they are now. The result of this error is great wastefulness – too much human time spent chasing micro-optimizations, and (as you point out) too many downstream errors accepted as a result of “efficient” languages with serious baked-in failure modes.

      When I speak of lesser or greater efficiency, I am using the term as an economist would, considering how the value of total inputs to a good compares to the value of total outputs of it. For software this has to be compared over the entire life cycle, and all costs – including development and costs of downstream failures.

      Being against efficiency in this sense is like being against gravity. You can position yourself that way, but (a) the universe doesn’t care, and (b) you are quite likely to injure yourself in the process.

      You seem to be arguing that we’re eating excessive costs in downstream failure through being stuck on C. As an old LISP-head I am sympathetic to this position, but I also target for embedded deployment and think I understand why C is so persistent – because, errors and all, using it is still efficient in many contexts.

      I would like for you to prove me wrong. The way to prove it is by rationalizing out this inefficiency and collecting what used to be wasted inputs as profit. Go make yourself rich.

  73. I am using efficiency in the sense of trying to maximise process outputs (computational, institutional, …), not computational efficiency specifically.

    > You seem to be arguing that we’re eating excessive costs in downstream failure through being stuck on C. As an old LISP-head I am sympathetic to this position, but I also target for embedded deployment and think I understand why C is so persistent – because, errors and all, using it is still efficient in many contexts.

    No. I don’t accept the importance of efficiency as a rationalisation of human activity, and it is simply narrowminded to try to tout it as some kind of unavoidable paradigm of thought. It can be useful to you without it being useful to me. And Forth is just as efficient as C. Good Forth systems can also be rewritten on short timescales. Forth is just one take on simplicity, so don’t take this as Forth advocacy per se.

    Your apparent position that a person should automatically accept external pragmatisms to achieve popular acceptance is disastrous for the possibility of renewal and flexibility. I’m not saying everyone should drop their big systems and come write Forth programs. But intellectual tools are for bettering mankind, not justifying misbehaviour. I’m talking about where we should be looking. And if _some_ people (even a small group) can make a concerted effort to get there, then we will have gained something. But according to the economic-rationalistic POV we can just write off the value of striving for ideals based on illusory rational prescriptions for behaviour.

    > I would like for you to prove me wrong. The way to prove it is by rationalizing out this inefficiency and collecting what used to be wasted inputs as profit. Go make yourself rich.

    Sorry, this is absurd. This is not like talking about the fortunes of smartphone vendors, where there are financial tools that can be used to make bets on the market. You’re making (at least) two unwarranted assumptions:

    – That inefficiency will be realised in my lifetime (I don’t even agree that efficiency is relevant here except as a harmful motivator).
    – That there is an obvious way to exploit those supposed inefficiencies (again, I don’t even agree with the notion that inefficiencies are the problem, since I am arguing against striving for efficiency).

    And since you haven’t bothered to do the same with your smartphone bets – where there ought to be opportunities to make money timing the market, and where the above problems don’t even apply – I will regard myself as free to ignore your challenge. It’d be like me saying you have to “get rich” off your opinions about the effects of Soviet-era propaganda in America. That is to say, it would be completely silly and irrelevant, since you’re not making a commentary on the future status of the market, which is what makes the ability to make money a worthwhile test for your commentaries on smartphones in the first place.

    1. >I am using efficiency in the sense of trying to maximise process outputs (computational, institutional, …), not computational efficiency specifically.

      Good, I’m glad that’s cleared up.

      >No. I don’t accept the importance of efficiency as a rationalisation of human activity, and it is simply narrowminded to try to tout it as some kind of unavoidable paradigm of thought.

      Next up: Roger decides causality is “simply narrowminded”. So boring, and in such unfortunate conflict with our lovely idealism.

      >But according to the economic-rationalistic POV we can just write off the value of striving for ideals based on illusory rational prescriptions for behaviour.

      You are attributing this position to a person who has spent the best part of thirty years writing open-source software and mobilizing the hacker culture in pursuit of ideals. Ideals of software quality, freedom, and individual empowerment which I have written about on this blog and elsewhere at possibly tedious length.

      I don’t often talk about the amount of struggle and sacrifice this required, or the degree of self-discipline I had to exert to achieve my goals, or the damage I took along the road, because in general I despise that kind of honking. I don’t believe suffering is a ticket to virtue, and I refuse to try to cash mine in for admiration.

      But only idealism got me through – the unwavering belief that there was a better world within reach and I could grasp it, and that this was worth all the effort and the sacrifices that effort demanded of me.

      Your failure to understand how this meshes with my use of “economic-rationalistic” language is your failure, not mine. Goddess knows I’ve explained my tactics often enough. The thing I understand that RMS doesn’t is this: successful idealism is the kind that frees human action to be more efficient.

      Or, to put it a slightly different way and perhaps one more palatable to you, a better world is one in which there is less waste. Less waste of time, less waste of effort, less waste of potential. Less unnecessary pain. Less deadweight. Less transaction overhead. Lower search costs.

      This is how efficiency connects with idealism. When your idealism tries to move the universe in an inefficient direction (imposing central economic planning over markets being an extreme example) it’s doomed. Only tragedy waits down that road. The beneficial and effective idealists are the ones who spot a way to surf efficiency gradients, overcoming opposition (aristocrats, bureaucracies, monopolists, closed-source vendors) that collects rent from the existing inefficiencies.

      When you understand this – really understand it – you, too, will be able to change the world.

  74. You can only mathematically ‘prove’ things about software when you have an accepted mathematical [formal] framework with which to formally model your system.

    Consider Z. (that’s pronounced “zed” coz a limey invented it ;)

    With this, I can express the structure and behavior of a finite state machine – otherwise known as “every computer system known to man”
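
    (For concreteness – not actual Z syntax, just the generic set-theoretic skeleton of such a model – a finite state machine plus the kind of safety property one would then prove every transition preserves:)

        \[
          M = (S, \Sigma, \delta, s_0), \qquad
          S = \{\mathit{Idle}, \mathit{Busy}, \mathit{Fault}\}, \qquad
          \delta : S \times \Sigma \to S
        \]
        \[
          \text{Safety: } \forall s \in \mathit{Reach}(M).\; s \neq \mathit{Fault},
          \qquad \mathit{Reach}(M) = \{\text{states reachable from } s_0 \text{ under } \delta\}
        \]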

    Doing so is hard. It requires intelligence. Such endeavors are expensive.

    Therefore risk plays a part in the calculus of determining the level of formality to employ with any given software project. A nuclear power plant vs an MP3 player…which one do you really give a shit about fucking up?

    Increasingly, even the less-than-critical software that interfaces with regular folks’ lives is getting flamed for being shitty….I work to efficiently bring the assurances of formal software engineering to conventional ‘commodity’ programming.

    You can mathematically prove that software will behave in a specific manner, just as you can prove that a bridge will support a specific load. The overarching caveat is that flaws in any underlying model (eg. CPU floating-point errors vs structural metallic flaws) can result in unpredictable failure. Welcome to the world of science.

  75. The growing dependence on electronics, software, and network robustness is both a blessing and a curse. It is undoubtedly a major factor in the very large productivity increases (and attendant improvement in standard of living) that has so benefited society in the last three decades. But it also makes us uniquely vulnerable in the event of a catastrophic collapse. Which doesn’t necessarily have to occur via inherent flaws or malfunctions (see Carrington Event of 1859). The human body’s immune system needs to be challenged regularly in order to be strong (robust) when a serious infection occurs. Perhaps we need a few “minor” crashes occasionally in order to restore our ability to function in daily life without all our electronic helpers?

  76. Tom: I have often wondered if systems would be better if storage devices had built-in, random data loss.

    But this doesn’t solve the problem that machines are starting to decide how people behave, even on those who claim to be controlling them (e.g. programmers and hardware designers).

  77. I actually have a degree in software engineering from a university in Ontario. Were I to spend 4 years working under somebody with an engineering license in Ontario, I’d be able to apply to be a professional engineer and use the P.Eng. after my name.

    Given that, much of what is being said here is way off-base. From an engineering standpoint, reliability is just one of the many variables to optimize. It does mean that when you start a project you have to ask questions like “how might this project harm somebody’s life?” and “what is the liability impact in the event of different classes of failure?”. However, that doesn’t mean that you are looking for perfection. Reality is full of random stuff. One of the simple things which we learned when taking our engineering professionalism course is that Shit Happens. You can design a bridge to withstand the load of 3 times as much traffic as rated while in a category-5 hurricane. But that doesn’t mean that people won’t die on it if hit by an asteroid.

    Consider: what is the safety impact of a word processor crashing? Take a moment and come up with a case where it matters. The President is unable to send out a cease-fire notice because Word crashes, possibly costing millions of lives? Two competing solutions: spend 20 billion dollars to make the application much more robust … or buy a mechanical typewriter for $300?

    Where does that type of investment make sense? In cases where people’s lives are at risk: medical devices, core airplane control software, core automobile control software, software used to perform other engineering analysis. That’s the majority right there.

    Everything else is based on business case. Microsoft Word has to be reliable enough that people will be willing to fork over cash for the next version … and no more. There’s no ethical obligation to do more – it isn’t dangerous and the customers are already saying that their cash for it, good or bad, is a fair trade. The worst liability exposure is processing refunds from dissatisfied customers. Very few companies are willing to pay for software which comes with a warranty covering lost revenue in the case of failure (though somebody would be willing to provide that warranty at a price, no doubt).

    Let’s look at another case brought up earlier in this thread – video game systems. Were you an electrical engineer, you’d have to ensure safety and regulatory compliance first of all (I may miss a few details – this isn’t my area of expertise, but I’m not going to let that stop me, though it should stop you from taking my word on it). In order to do that, as a first order approximation, you can ensure electrical safety by putting in a regulated power supply provided by a 3rd party which has a max current cutoff (circuit breaker), a high-temperature cut-off, and a maximum supply of 12 V. Add to that a metal, grounded chassis and mandatory use of RoHS components inside and you are golden. The metal chassis provides a Faraday cage so you’re good from an FCC compliance issue. The 12V power supply with current and temp cut-offs means that you have 1st order safety, regardless of what’s put in on the boards inside. Your partner could randomly glue chips down to a board and have a pet monkey decide which pins to wire together and you’re still safe. If stuff inside catches on fire, RoHS at least should ensure that the smoke is non-toxic. Once you do this, you’ve fulfilled the professional engineering requirements. Now you just have to figure out how much work to put on getting something useful and reliable inside, but that’s much more of a business decision than an engineering decision, much like the decision on when to ship commercial software is usually a business decision and not an engineering decision.

    In any case, my experience is that software development is very much like engineering. It’s just that in contrast to most traditional engineering, most software development doesn’t have any life-critical risks associated with it.

    1. >Where does that type of investment make sense? In cases where people’s lives are at risk: medical devices, core airplane control software, core automobile control software, software used to perform other engineering analysis. That’s the majority right there.

      You left out marine navigation systems, which is the case I have to worry about wearing my GPSD-lead hat. And while a large-scale Internet crash probably wouldn’t cause a lot of prompt deaths, the second order effects would be brutal. Nowadays it might stress the economy past its breaking point, resulting in temporary collapse of food and fuel transport networks and many avoidable deaths.

      But I agree. Outside these cases the actual value of adding another decimal place to mean time between failure drops off dramatically. Often enough so that the up-front overhead to get us to computer languages that would add that decimal place is greater than the actual cost of downstream failures, especially the time-discounted cost.

      This is part of what I’ve been trying to explain to Roger Phillips. I do not mean to sneer at him when I say his thinking about these issues shows the biases typical of an academic computer scientist. His job is to be an academic and push in the direction he is pushing – but, Garret, I suspect you agree with me when I say he’d be more effective if he could also think like an engineer and an economist. In the real world, it’s all about efficiency and tradeoffs.

  78. > Next up: Roger decides causality is “simply narrowminded”. So boring, and in such unfortunate conflict with our lovely idealism.

    You can make fun of me all you like. If you can’t see that this is narrowminded you need to step out of your bubble. And yes, causality is an invention of the human mind. This is why operationalism and the like are harmful to your ability to reason.

    The entirety of the remainder of your post is completely predicated on maximising efficiency. Consequently it doesn’t even address my argument other than to tout your personal view of epistemology as canonical.

    1. >The entirety of the remainder of your post is completely predicated on maximising efficiency.

      No, Roger. The rest of my reply was about situating efficiency in an ethical context, and ethics in an efficiency one – mating microeconomics to morality. How to make idealism actually work.

      As a scientist you should understand about consilience. Your insistence on high walls between “efficiency” and “idealism” is a consilience failure. The fact that you can’t join those two partial maps is a sign that both are seriously flawed, even in describing their own domains.

      I don’t have that problem. Which is why I can change the world.

  79. > Everything else is based on business case.

    What a great principle for our age. Now we can all get back to the important business of neophilia.

  80. > And yes, causality is an invention of the human mind.

    In the sense that humans defined the concept, or in the sense that all the apparent correlation in the physical processes of the world is illusory?

  81. > In the sense that humans defined the concept, or in the sense that all the apparent correlation in the physical processes of the world is illusory?

    In the sense that if you tell me X causes Y that’s just a construct of your mind. It may repeat itself, but that’s all it is. Which is obvious. I’m not attempting a radical denial of knowledge. I’m pointing out that knowledge is ineffable and people who try to bottle it in definitions end up deceiving themselves. We can have scientific knowledge, but only in domains where it happens to work. Systems operate quite happily without theories (most of the time, so do people).

    But people keep trying to redefine truth based on mind constructs instead of simply using them as tools. So, like software, the means becomes the end, and people are convinced that what they want or need is what is convenient to the continued existence of their precious dogmas.

    1. >In the sense that if you tell me X causes Y that’s just a construct of your mind. It may repeat itself, but that’s all it is. Which is obvious.

      Yes, it is. Now let’s see if someone other than me can spot the flaw in Roger’s implied argument.

      >But people keep trying to redefine truth based on mind constructs instead of simply using them as tools.

      Excellent. That is the beginning of wisdom, right there. Keep following up that line of critique, by all means. Just don’t be surprised when you land at the sort of brutal predictivism and instrumentalism you’ve criticized me for.

  82. > I do not mean to sneer at him when I say his thinking about these issues shows the biases typical of an academic computer scientist. His job is to be an academic and push in the direction he is pushing – but, Garret, I suspect you agree with me when I say he’d be more effective if he could also think like an engineer and an economist. In the real world, it’s all about efficiency and tradeoffs.

    You misunderstand me. I quit academia. An academic’s job is to produce papers, and most academics proliferate complexity. I am not pushing formal methods here. I’m not sure how anyone got that impression. I am pushing for simplicity, which has numerous benefits, the most important of which is the ability to back out of design decisions. This has nothing to do with academia, which I assure you is perfectly complicit with industry in pushing for more complexity (how else will we justify writing more papers?)

  83. After reading the above I tend to apply The Thomas Theorem to this talk of “Holding up the Sky”. Perhaps I shouldn’t, but it appears applicable here.

    1. >After reading the above I tend to apply The Thomas Theorem to this talk of “Holding up the Sky”. Perhaps I shouldn’t, but it appears applicable here.

      I don’t see how, but after reading up on the Thomas Theorem I suspect you might be on to something interesting. Please expand on your thought.

  84. Again, Eric, you try to belittle me by invoking some dogma that you think you can just apply mindlessly to extinguish idealism. I’m not saying you have to adopt my idealism. I’m saying you can’t rationalise it away as defective based on your narrow view of rationality.

    1. >I’m saying you can’t rationalise [idealism] away as defective based on your narrow view of rationality.

      I’m not trying to rationalize any kind of idealism away. I’m showing you how to connect it to reality.

      And I’m bothering because I think you’re very bright and at least trying to ask the right sorts of questions of the universe. This is further than most people get, even if you’re still stuck with a bunch of defective premises and language-driven category errors.

      You said “But people keep trying to redefine truth based on mind constructs instead of simply using them as tools.” This is a more powerful insight than I think you understand as yet. Pursue it without fear and you will learn much.

  85. > Just don’t be surprised when you land at the sort of brutal predictivism and instrumentalism you’ve criticized me for.

    Following things to “logical conclusions” is exactly what I’m warning against. This is the kind of watershed problem I’m talking about. You see a good idea (software, science, …) and you ruin it by assuming that pushing the idea to an extreme for its own sake will lead you to wisdom (or “efficiency”). You have brain-damaged yourself into being unable to deal with ambiguity, seeking always to resolve it with some stupid theory. And you have the temerity to patronise me on the subject of wisdom.

    1. >You have brain damaged yourself into being unable to deal with ambiguity, seeking always to resolve it with some stupid theory. And you have the temerity to patronise me on the subject of wisdom.

      Sorry, but I understand the universe better than you do. I demonstrate this by (a) being able to engineer change on scales you cannot match, and (b) being able to explain how that engineering arises from a consilient theory that connects what you separate into disconnected and antagonistic fragments as “idealism”, “pragmatism”, “ethics”, “philosophy”, and “economics”. In reality it is all one – and I don’t say that as a mystical or rhetorical flourish; being able to see and use the connections has hard consequences.

      The above is not a claim that my understanding is perfect. It is, though, pretty entertaining to be accused of being “unable to deal with ambiguity” when the core of my philosophy is an understanding of how fragile and contingent the relation between language and reality is. The map is not the territory, the word is not the thing defined. There is no identity, anywhere. And binary, unambiguous categories are artifacts of the map, not realities of the territory.

      You almost get this. It’s interesting, watching you struggle with it. Don’t feel belittled; most people aren’t even capable of the struggle, and I respect the fact that you’re trying.

  86. >In the sense that if you tell me X causes Y that’s just a construct of your mind. It may repeat itself, but that’s all it is. Which is obvious.

    Yes, it is. Now let’s see if someone other than me can spot the flaw in Roger’s implied argument.

    Which one? There’s the obnoxiously circular problem that apparently if I make claim C and claim C is manifestly true, Roger claims (ahem) that C is either tautological or vacuous.

    If we get past that one, though, there’s the confusion between the fact and mechanism of causality. Any undergraduate physics student has to get past that one, but the mere fact that we can’t explain why some phenomenon occurs doesn’t mean that we can’t provide a useful working definition and demonstrate that it does in fact work.

    There may be more, but trying to follow his arguments is increasingly feeling like squeezing sand.

  87. >Please expand on your thought.

    Ontologically speaking, there has been descriptive text that would ascribe “Holding up the Sky” to those within a specific community: Hackers.

    Such a statement implies that the sky itself is animate, thereby capable of movement, implied movement of either an “up” or “down”. Your expressed idealistic viewpoint would be for Hackers to continue this long-standing tradition of “Holding it up.”

    In reading your original post, and comments contained thereafter, I noticed the awkward implication that the sky is in fact capable of movement, thus needing to be held up. It screamed The Thomas Theorem because if those of a like mind do succeed in holding up the sky, what a sad, lowly goal for Hackers that would be.

    The consequences in assigning a thought process or idealism of “Holding up the Sky” to the Hacker Community are large, because from some Hackers’ standpoints, there is something beyond the sky. My Hacker friends and I always felt striving for that which is beyond the sky leads to results. Thus we choose not to bring the consequences of limiting our viewpoints to only the status of the sky to our passion.

    *shrug* That’s what ya get from Jarhead Hackers :)

    1. >Such a statement implies that the sky itself is animate, thereby capable of movement, implied movement of either an “up” or “down”.

      Oh. That’s far too literal an interpretation. The intended reference was literary and metaphorical – specifically to the Greek myth of Atlas holding up the sky, and to parallel motifs in other mythologies including Norse. The intent is not that the sky is animate, but that maintaining the order of the universe requires unceasing effort.

      The “sky” in question is the Internet in particular and the shared software infrastructure now critical to our civilization in general. Of course there are interesting things beyond the sky, and hackers are properly concerned with those. That has always been true, and in pointing out that we are all Atlases now I do not mean to suggest it should or even could change.

  88. Which one? There’s the obnoxiously circular problem that apparently if I make claim C and claim C is manifestly true, Roger claims (ahem) that C is either tautological or vacuous.

    I never said any such thing. You can only think in terms of logic (education-induced brain damage), so you naturally think I’m talking propositions.

    If we get past that one, though, there’s the confusion between the fact and mechanism of causality. Any undergraduate physics student has to get past that one, but the mere fact that we can’t explain why some phenomenon occurs doesn’t mean that we can’t provide a useful working definition and demonstrate that it does in fact work.

    You have, like a classic nerd who understands only limited engineering domains, totally missed what I said. I am not saying you can’t do some useful science. I am not saying you can’t _use_ definitions for some end. That does not make them fact. You might even be able to pretend something is fact for some end, but it does not magically turn it into objective truth.

    There may be more, but trying to follow his arguments is increasingly feeling like squeezing sand.

    Because you are unable to talk/think without turning everything into logic, a topic I doubt you really understand anyway. Actual pure mathematicians understand what logic is and have no trouble grasping these arguments when I put it to them. Because they understand mathematics is just a bunch of systems contrived by humans for human purposes and that they don’t represent objective truth. Engineers on the other hand just apply mathematics so they have no real understanding of its nature.

  89. Interesting that you chose an Atlas reference for the feelings you expressed regarding your thoughts on maintenance of the Internet.

    Again, within the community of Hackers I associate with, we strive more towards Herculean Pillars, which accomplish the same thing as Atlas did and yet free us to go ‘above and beyond’.

    You would have a hard time convincing me that the order of the universe, or more so its maintenance, requires effort. :) I’m more of a girl who believes it is maintenance of self that requires effort; then I am aligned with the Universe and thus able to see my ideals of it are often “off base.”

    Interesting exchange. Thanks!

  90. I am not saying you can’t do some useful science. I am not saying you can’t _use_ definitions for some end. That does not make them fact. You might even be able to pretend something is fact for some end, but it does not magically turn it into objective truth.

    It’s sometimes even hard to parse what you’re saying. If what you’re saying now is simply that our definitions and descriptions of the reality of the universe are approximations that have utility but are not in themselves Ultimate Truth, I don’t think you’ll find anyone here who disagrees with that; the answer to why we usually use Newtonian mechanics is that “it’s usually good enough”.

    On the other hand, these approximations are unquestionably approximating something real, even if ineffable, and sometimes they are good enough approximations that they reveal some interesting characteristic of the underlying reality that we hadn’t been particularly aware of.

    “Causality”, for example, is a term describing the repeated relationship between events X and Y. The explanations for the connection may be spot on down a few levels of abstraction (e.g., why the fluorescent light turns on when you flip the switch) or completely spurious (e.g., why the rain falls after you sacrifice seven children), but the inaccuracy of any particular explanation doesn’t invalidate the idea that causality exists—and the use of “causality” as an approximation has proved good enough over the timespan of multicellular life that I’m quite confident it’s reflecting something real, whatever that may be.

  91. You would have a hard time convincing me that the order of the universe, or more so its maintenance, requires effort.

    I believe Eric’s meaning “universe” in the sense of “a particular state of human society that we wish to preserve”, not in the sense of “the cosmos, which will (probably) keep on going just fine on its own”.

    1. >I believe Eric’s meaning “universe” in the sense of “a particular state of human society that we wish to preserve”, not in the sense of “the cosmos, which will (probably) keep on going just fine on its own”.

      That is correct. I am not an Aztec, believing that the sun won’t keep coming up without constant blood sacrifices. :-)

  92. Christopher we are not talking about Newtonian physics. You have “engineers’ disease” in that you fail to distinguish between domains and engage me in a pointless deconstruction of language instead of trying to assimilate the point. You are trying to eliminate the inherent ambiguity by using terms such as “approximation”. You ignore iatrogenics because you probably haven’t even heard of the term or thought deeply about the notion. You search for specific notions when the very point is to be nonspecific. You’ll find normal people who don’t spend their time dealing with narrow domains understand these issues quite readily. Whereas academics and engineers are brain damaged into attempts at deconstruction.

    I’m saying all models, theories, definitions and notions of “truth” are entirely disputable. Just in the same way that words are just things we use to do things and they change over time and people have different ways of speaking. If you think that’s obvious, tell that to Eric. Because he thinks he’s got the market on truth cornered.

  93. Eric your argument boils down to your belief that your dick is bigger than mine. Nobody is impressed.

  94. Eric touts his autistic obsession with achieving intellectual purity as a sign of intelligence, when it constitutes a subtle but dangerous form of mental retardation. See how easy it is to be patronising and narrow-minded?

  95. Christopher we are not talking about Newtonian physics. You have “engineers’ disease” in that you fail to distinguish between domains and engage me in a pointless deconstruction of language instead of trying to assimilate the point.

    I’m having a hard time understanding your accusation that I can’t deal with ambiguity because I should be separating issues into clearly-delineated domains instead of specific notions.

    I’m saying all models, theories, definitions and notions of “truth” are entirely disputable.

    Are you saying that my model of why a light bulb comes on when I flip the switch (electrons flow through wires in response to a difference in electric potential) isn’t any more true or reliable than Bob’s model (the switch pinches a demon’s foot, and he lights up when angry)?

  96. You ignore iatrogenics because you probably haven’t even heard of the term or thought deeply about the notion.

    No, I haven’t thought deeply about the “specific” notion as it relates to medicine. I spend quite a lot of time, however, thinking about unintended consequences. If your pronouncement was meant to imply something beyond that, namely that I don’t consider the external implications of some idea (which, incidentally, might apply outside its “domain”), I welcome the clarification.

  97. Read my post again Christopher and try responding to something I actually said. I do not say anything about an inherent inability to rate one thing against another. If I said that the status of a movie as a “classic” was always going to be a disputable notion would you start accusing me of thinking nobody could like one movie more than another? Of course not.

  98. @esr:
    “You said ‘But people keep trying to redefine truth based on mind constructs instead of simply using them as tools.’ This is a more powerful insight than I think you understand as yet. Pursue it without fear and you will learn much.”

    We’ve been bashing Philip Dick lately, but his quote is very applicable here:

    “Reality is that which, when you stop believing in it, doesn’t go away.”

    Truth is simply that which corresponds to reality. A statement is “true” in this sense if it helps us make correct predictions, and “false” if it leads us astray. And if it’s “not even wrong” in the Pauli sense, it’s just sophistry that leads us nowhere. (Mathematicians have a different definition of true/false that is useful to them, but less so to normal human life.)

    Seek mind constructs that help you to understand the world, the human race, and everything else. Reject that which leads you into a logical black hole, even if you want to believe it.

    1. >“Reality is that which, when you stop believing in it, doesn’t go away.”

      Yes, that quote is pretty good. Too bad Dick went insane and lost contact with reality-as-Dick-defined-it.

      >Truth is simply that which corresponds to reality. A statement is “true” in this sense if it helps us make correct predictions,

      Technical point: Those two claims sound similar but lead to significantly different philosophies. Correspondence theories of truth (your definition 1) actually have severe problems stemming from the fact that you have to make ontological presuppositions about “reality” very early in your thinking in order to have a confirmation theory. Predictivism (your definition 2) puts confirmation theory first and has no such problem.

  99. Christopher you ignore the iatrogenics of knowledge. Your attempts at deconstruction (which you seem to have stopped, mercifully) are part of the problem. People dealing with physics talk about causality because it helps their work. We can perpetuate fictions about causality inside that field of study because they are harmless there. And I wouldn’t be so asinine as to tell a physicist that his (tested) theories were fiction, because I would be presuming upon a domain he knows more about than me. But trying to synthesise general theories of knowledge/mind/truth is a surer path to idiocy than ambiguity and inconsistency.

  100. Cathy you assume a prediction-based view of truth, except people function perfectly fine without prediction. Much human action is simply intuition or instinct, but people are brain damaged by the university system into rationalising this away as being driven by theory. Which is a purely faith-based presumption.

    1. >Much human action is simply intuition or instinct, but people are brain damaged by the university system into rationalising this away as being driven by theory.

      All human action is theory-driven, but the theory may not be explicit – in fact usually it is not. Instincts and intuitions express implicit causal theories about things crucial to the organism, acquired by evolutionary history.

      Thus for example, the suckling instinct in a mammalian infant expresses a microtheory something like “If you are hungry, and you seek something that looks and smells like a nipple and suck on it, you will get food.” This microtheory is expressed as hardwiring in the limbic system, inherited rather than learned. It is not encoded in language or as a set of propositions – indeed it’s not connected to any conscious representations at all.

      This microtheory does not correspond to our usual intuitions about “theory”, basically because we can’t argue ourselves into it or out of it. But it exhibits the most important characteristic of theory, which is that it enables successful goal-seeking by connecting achievable pre-conditions to a desired post-condition by predicting the results of action.

  101. The joy of time-zones. I wake up and will need hours to wade through the comments. But we cannot let Eric’s popcorn get stale. So I will pick up the thread again.

    This was about my statement that Programming cannot be Engineering.

    @Daniel Franke
    “Of course you can. It is common practice in every discipline of engineering to make design trade-offs in the name of reliability.”

    That would mean you would remove all TC applications from your tool-chain and OS. A bold effort.

    @Philip Rogers
    “But we should be straight on the facts. And the fact that the proofs always omit certain details is simply the nature of maths.”

    I was obviously too cryptic in my comment. Let’s clarify.

    The application

    Programming should be engineering, people say. What does an engineer want from an application? Set tolerances, scaling, determinism, modularity.

    Set tolerances: Upper bounds on time, space, and failures
    The application cannot be Turing Complete (TC), as there are no upper bounds on time and space used by a TC application. The failure probability, aka Halting problem or decidability, is a non-computable number. No upper bounds on the failure probability either.

    Scaling: Time and space use should scale polynomially in task size
    So the functionality should not include problems in NP

    Determinism: Interrupts are non-deterministic. Interrupt handling makes a program non-deterministic. No interrupts.

    Modularity: You want a tool box and libraries.
    Great, but then if any module or combination of modules is TC or handles NP problems, the whole application runs into the above problems of tolerances and scaling. And any finite state module coupled to a hard drive is a potential TC module.

    So you can succeed when your application is not TC, not in NP, has no interrupts, and each and every module or combination of modules is proven not TC and not in NP.

    That disqualifies every scripting language and interpreter that might be TC, and many algorithms that are in NP.

    Now the proof of correctness

    We want the same features from the proof: Set tolerances and scaling are important here.

    What is a proof of correctness: An automated procedure that takes a program and halts with True or False.

    So you create an Automaton that reads the specification and the program and decides whether the program simulates the specification correctly or not.

    That Automaton should not be TC, or else you cannot set upper bounds on time and space, and the probability of failure becomes uncomputable.

    Maybe a proof does not need a TC automaton. But does it scale?

    I often see naive approaches which state that real applications are never TC, but simple finite state machines. However, a proof of correctness that simply checks all states scales exponentially in the size of the program. That is not a useful or practical approach.

    One general strategy to decide whether a program can fail is setting up a list of conditions that must be satisfied for the program to fail. Then decide whether there are, or are not, circumstances where the conditions are satisfied and the program fails.

    That would be a list of logical conditions (A = B, not C) connected with AND and OR. That proof strategy will probably reduce to SAT. Therefore, the proof of correctness could very well be NP-hard, which does not scale nicely.

    So here too, we must specify that the proof Automaton should not be TC and the algorithm not in NP.

    Conclusion
    So I repeat, I think programming will never be reduced to “engineering”.
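
    (Illustrative aside: the non-computability claim above is the classic halting-problem result. A minimal Python sketch of the diagonalization argument, in which halts() is a purely hypothetical oracle that cannot actually be written:)

        # Hypothetical oracle: supposed to return True iff prog(arg) eventually halts.
        def halts(prog, arg):
            raise NotImplementedError("no total, correct version of this can exist")

        def diagonal(prog):
            # Do the opposite of whatever the oracle predicts prog does on itself.
            if halts(prog, prog):
                while True:   # loop forever if the oracle says "halts"
                    pass
            return "halted"   # halt if the oracle says "loops"

        # Ask the oracle about diagonal applied to itself: if halts(diagonal, diagonal)
        # is True, then diagonal(diagonal) loops forever; if False, it halts. Either way
        # the oracle is wrong, so no general decision procedure exists for TC programs.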

  102. Causality. Ahh nice.

    Every event was made possible by other events that happened before it. Often, it is possible to point out earlier events that were crucial for the existence of the event under scrutiny. These are called the causes of that event.

    These (obvious) statements are fundamental to the empirical sciences. From this position, people make a bold step and reverse the chain: Given causal events, we can determine, or predict, the effects.

    Except for a vanishingly small number of cases, this reasoning is wrong. It is next to impossible, even in principle, to take the state of the universe and then predict future effects.

    The fundamental reason is that you would need to have a copy of the universe run its course to see what happens. That would take just as much time and energy as having the current universe run its course.

    Simplified reasoning. Consider the universe to be a computer with the events being the program. To predict the outcome of a computation, you need to run it as a simulation on another computer. That would take at least as much time and effort as letting it run on the original computer.

  103. > Cathy you assume a prediction-based view of truth, except people function perfectly fine without prediction. Much human action is simply intuition or instinct,

    But our intuition uses predictions. We are just not aware of these predictions because we cannot directly observe how our intuition is working.
    When I’m walking, my brain predicts where the ground is; otherwise I would fall over. I’m just not aware of it.

    1. >When I’m walking, my brain predicts where the ground is; otherwise I would fall over. I’m just not aware of it.

      That is correct. Our nervous systems are a whole bunch of hardwired predictive microtheories with a slightly more general cognition engine on top that we think of as “conscious mind”. We don’t experience these as theories or predictions only because we don’t have introspective access to them.

  104. Actual pure mathematicians understand what logic is and have no trouble grasping these arguments when I put it to them. Because they understand mathematics is just a bunch of systems contrived by humans for human purposes and that they don’t represent objective truth.

    Engineers on the other hand just apply mathematics so they have no real understanding of its nature.

    You are right saying “mathematics is just a bunch of systems contrived by humans”, at least when you are saying this about pure axiomatic mathematics. The notion of “truth” in axiomatic mathematics is what allows us to have simultaneously Euclidean geometry and non-Euclidean geometries, etc.

    But at least if you replace “engineers” by “physicists”/”scientists” you are wrong in saying “Scientists on the other hand just apply mathematics so they have no real understanding of its nature”. Mathematics is a map, not a territory. A good example here is Einstein trying to formulate General Relativity; he grokked the “physics” of general relativity but at the beginning had problems expressing it using mathematics to gain precise predictive ability. When a mathematician (unfortunately I don’t remember his name) was able to formulate it cleanly, he gave all credit to Einstein’s understanding of physics.

    And I think the same applies to engineers. Understanding of nature is more than ability to manipulate equations, to “apply mathematics”. Physics is more than applied math ;-)

    1. >When a mathematician (unfortunately I don’t remember his name) was able to formulate it cleanly, he gave all credit to Einstein’s understanding of physics.

      You are probably thinking of Hermann Minkowski.

    1. >More likely Tullio Levi-Civita

      You are right. I missed that Jakub was speaking of general relativity rather than special. Duh. I should get some sleep.

      You are also right that Levi-Civita is undeservedly obscure. I only know of him because I invented a theory of multilinear forms when I was 17. I was keenly disappointed when a math professor informed me that it was isomorphic to tensor calculus and I was a century behind this semi-forgotten Italian mathematician.

      Oh well, that’s not as bad as four years earlier, when I invented polyalphabetic encryption while reading a book on codes and cyphers – only to find out a chapter or two later that I was 500 years behind Blaise de Vigenère (actually, Giovan Battista Bellaso invented it earlier, but I didn’t learn that until many years after). I was only a little comforted to learn that “my” method wasn’t broken until the late 1800s.

      1. To complete the sequence, I should report that I invented a system isomorphic to Boolean algebra when I was 16 by thinking about greatest-common-divisor and least-common-multiple as operations on numbers. The coolest part was noticing that these operations were mutually distributive. Eventually I figured out that this was the algebra of factor sets. So I was 150 years behind George Boole, that time :-(.

  105. Eric, here’s an abstract question for you to ponder.

    If you could replace your organic body (in its entirety) with a machine, would you do it?

    1. >If you could replace your organic body (in its entirety) with a machine, would you do it?

      Oooh. That one is well worth a blog post. Thanks for asking.

  106. “The notion of ‘truth’ in axiomatic mathematics is what allows us to have simultaneously Euclidean geometry and non-Euclidean geometries, etc.”

    Yes, that’s why I was careful to separate mathematical truth from real-world truth in my previous post.

  107. “It is next to impossible to take the state of the universe and then predict future effects in principle.”

    Sure, but that refers to *perfect* prediction. We can certainly predict many things ‘well enough’ for a ‘limited time’ so as to make engineering possible and practical. Now apply that to Winter’s proof that you can’t apply engineering principles to software creation….

  108. “I would think this refers to David Hilbert…”

    No, the link that Winter provided has the real story. The story with Hilbert is:

    Once Werner Heisenberg had invented his matrix formulation of quantum mechanics, he was upset over its inherent clumsiness, so he went to see Hilbert. Hilbert listened as he described his new theory and his problems. Once he was done, Hilbert suggested to Heisenberg that he rewrite his theory in the form of a differential equation. Heisenberg went away muttering that clearly Hilbert did not appreciate the beauty of his theory…..

  109. “But every hour you spend on GPSD is an hour not spent on Battle For Wesnoth, and vice versa. The tradeoffs are indeed zero-sum.” Is pure time spent the proper measure, though? Assume each project has diminishing returns on daily effort, which I don’t think is an unreasonable assumption. An additional hour lost from GPSD may return more value as the first hour spent on some other project, even if the projects are completely independent and even after adding in some cost for project switching. If spending a little time each day on Wesnoth further avoids burnout on GPSD, the improved programmer happiness could turn into a net positive for GPSD.

    Hackers may be able to focus for a long time on a particular project when needed, and context switching when you are in the middle of working on something is a real cost, but that doesn’t make it a good idea to be hyper focused all the time.

  110. @ESR – as someone who never fully adopted the basic premise of OSI and FSF and other organizations promoting “open” as the only proper model for production of software, I find it interesting watching you try to discuss engineering ideals while always avoiding their economic results. A little disturbing, too, is your apparent inability to recognize that your ideal, and the promotion of it to students where and when they were most vulnerable and malleable, has translated into the transfer of enormous wealth from quite literally tens of thousands of hard-working engineers into the hands of all those who went to business school instead of engineering school. Discussions of social responsibility and openness always result in the engineer missing out on what is rightfully his or hers while the businessman makes off with the ill gotten gains. By helping to institutionalize this idea, and with others getting their cult following, you and your generation fundamentally altered the economics of the value of the production of software and limited the wealth potential of literally tens, if not hundreds of thousands, probably millions of people. Until you begin to recognize this basic condition of the human experience in software development, you’re not going to be able to address the final end point of pretending to be holding up the sky – which, as other readers have pointed out, looks like so much hubris.

    1. >the businessman makes off with the ill gotten gains

      So, how is this different from being a cubicle drone for a proprietary vendor? You don’t get to capture more than a tiny fraction of the value you produce there, either. You don’t own what you produce; you can’t dispose of it, and you can’t take it away with you. Business guy makes off with the gains again.

      In reality that’s not a result of closed or open, it’s where you end up when somebody else’s capital concentration is a critical factor of your production. Sometimes that’s the way it has to be, notably when your productivity requires capital equipment too expensive for an individual. Programming used to be like that, back when computers were expensive, but it isn’t now.

      OK, maybe you’re imagining a situation in which you’re an independent developer selling direct to the public. It’s the FSF, not me, that thinks you’re evil if you want to issue under a proprietary license – in fact, if your product is something like a game rather than a tool that requires downstream maintenance, I’ll encourage it.

      There’s lots more going on here that you don’t understand, but a blog comment is not the place to explain it. Go read my papers.

  111. @Cathy:
    “Truth is simply that which corresponds to reality. A statement is ‘true’ in this sense if it helps us make correct predictions.”
    @esr:
    “Technical point: Those two claims sound similar but lead to significantly different philosophies. Correspondence theories of truth (your definition 1) actually have severe problems stemming from the fact that you have to make ontological presuppositions about “reality” very early in your thinking in order to have a confirmation theory. Predictivism (your definition 2) puts confirmation theory first and has no such problem.”

    I hadn’t thought about this issue before, but I see your point. I assume that “predictivism” normally defines predictions in terms of sense data (predictions about what sensory inputs we will receive in a given situation — a Humean approach) vs. making assumptions about what is “really” happening “out there” in Kantean a priori mental construct space. Otherwise, you run into the issues you point out in (1).

    Yes? No? I have not done nearly the level of philosophical reading and thinking that you have, and I know I don’t have all the right reference terms.

    1. >I hadn’t thought about this issue before, but I see your point. I assume that “predictivism” normally defines predictions in terms of sense data (predictions about what sensory inputs we will receive in a given situation — a Humean approach) vs. making assumptions about what is “really” happening “out there” in Kantean a priori mental construct space. Otherwise, you run into the issues you point out in (1).

      Correct on all points.

  112. @LS:
    > No, the link that Winter provided has the real story. The story with Hilbert is: …

    That story about Hilbert and Heisenberg was not the one I was referring to; Hilbert played a key role in the development of General Relativity, specifically in helping discover the Einstein Field Equation.

    Winter’s link talks about Levi-Civita helping Einstein with tensor calculus as part of Einstein’s work on General Relativity; however, Levi-Civita did not actually do any work on the physics of GR himself. Hilbert did: after Einstein had visited him in June-July 1915 and talked about his efforts on GR, Hilbert thought about what Einstein had said and eventually came up with a derivation of the same field equation for GR that Einstein derived, but by a much shorter and more elegant route: basically, Hilbert came up with an action for gravity that, when extremized according to standard methods, gave the Einstein Field Equation in just a few lines of calculus, instead of by the much longer route that Einstein took. I first read about this in Kip Thorne’s book, Black Holes and Time Warps, but it’s covered in most historical treatments of the subject. The “Physics” section of the Wikipedia page on Hilbert gives a very brief overview:

    http://en.wikipedia.org/wiki/David_Hilbert

  113. @Winter

    > That would mean you would remove all TC applications from your tool-chain and OS. A bold
    > effort.

    Of course not. I don’t fault you for that conclusion, though. I made the same one when I was in school and thought it invalidated the whole premise. Engineering is done given not just requirements but also assumptions. What assumptions you are allowed to make is based on your industry and accepted practice. For example, a pithy remark that I’ve heard about the new generation of convection-cooled nuclear reactors is that “you never have to design against a loss-of-gravity incident”. Which is true (or perhaps I should say “true”), unless you are in the orbital spacecraft design business at which point your commodes must take into account “loss” of gravity.

    Much like a civil engineer (or architect) designing a house based on standard load-bearing specifications for dimensional lumber, etc., you can write software based on the compiler doing “the right thing” given compliant input. It is possible that the compiler is broken in as much as it is possible that a 2×12 has non-visible defects which make it unsuitable for holding up a floor. As I noted above – the key principle of engineering is that “shit happens” and you can’t design around every possible flaw (though you may be able to test your way out of it).

    > Set tolerances: Upper bounds on time, space, and failures
    > The application cannot be Turing Complete (TC), as there are no upper bounds on time and
    > space used by a TC application. The failure probability, aka Halting problem or decidability, is a non-computable number. No upper bounds on the failure probability either.

    Oh, please. You’re assuming that because you can’t prove that something halts under all possible conditions, it can’t be shown to halt under all the conditions we care about. Moreover, even if that’s the case, you can still handle that through external mechanisms. The Linux OOM killer addresses this. There are other ways to do so.
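
    (A minimal sketch of one such external watchdog mechanism, assuming a hypothetical command named ./maybe-loops; this illustrates the general idea, not how the OOM killer itself works:)

        # Bound run time externally instead of proving termination:
        # kill the child process if it exceeds a hard time budget.
        import subprocess

        try:
            result = subprocess.run(["./maybe-loops"], timeout=5)  # 5-second upper bound
            print("exited with status", result.returncode)
        except subprocess.TimeoutExpired:
            print("killed after exceeding its time budget")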

    > Scaling: Time and space use should scale polynomially in task size
    > So the functionality should not include problems in NP

    No. In most cases we care about, we worry about resource utilization, not the category of the problem. If you have to make 4 deliveries today in your fly-in-airmail service, calculating the shortest route is trivial in any time humans care about, because the size of the problem we actually care about is small enough, even though the problem is, strictly speaking, an NP-complete traveling-salesman problem.
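
    (A quick illustrative sketch, with made-up coordinates, of why the four-stop case is trivial in practice: brute force over all 4! = 24 orderings finishes instantly.)

        # Brute-force the 4-stop delivery route: 24 permutations is nothing.
        from itertools import permutations
        from math import dist

        depot = (0, 0)
        stops = [(2, 3), (5, 1), (6, 4), (1, 7)]   # made-up coordinates

        def route_length(order):
            path = [depot, *order, depot]          # start and end at the depot
            return sum(dist(a, b) for a, b in zip(path, path[1:]))

        best = min(permutations(stops), key=route_length)
        print(round(route_length(best), 2), best)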

    > Determinism: Interrupts are non-deterministic. Interrupt handling makes a program
    > non-deterministic. No interrupts.

    Everything is both deterministic and non-deterministic, depending upon the level at which you look at the issue. At work we ran into a software problem which was demonstrably caused by radioactive decay. Yeah … that was fun to diagnose.

    However, in a lot of systems, the interrupt handling time doesn’t matter. Does it matter if your GPS takes an extra 2 µs to update the display?

    Consider wind loading on a building. Sure, it’s rated for maximum wind speeds, but you have to take into account both gusting and continuous winds. You can’t just wave away the fact that you might be operating in a hostile world.

    > So you can succeed when your application is not TC, not in NP, has no interrupts, and each
    > and every module or combination of modules is proven not TC and not in NP.

    Only if you need something which provides provable operation (as opposed to bounded operations) in all circumstances, which is not the case. Even in the more traditional engineering disciplines, that isn’t guaranteed. Resistors come with +/- tolerances. Structural beams can be de-rated under different attachment schemes.

    > What is a proof of correctness: An automated procedure that takes a program and halts with
    > True or False.

    You want your wall clock to halt with a True or False?

    > Conclusion
    > So I repeat, I think programming will never be reduced to “engineering”.

    Proofs are used in engineering. They are very important. However, judgment is always required. How much tolerance is needed? Are the specifications appropriate for the task? Many/most times the whole is not provably correct in any discipline. Individual components are. Certain configurations are. But these are then assembled into larger units based on a design, the whole of which is not completely proven mathematically. For example, where’s the proof of the latest Intel microprocessor? I don’t mean the logic … I mean proving proper behavior while accounting for the distributed resistance, inductance and capacitance of every single trace internally … while accounting for every possible deviation of semiconductor layer thickness which might occur during manufacturing. Suddenly this is an NP-complete problem, yet we don’t have a problem calling this engineering.

  114. @Peter Donis: Yes, but that’s not the point. Einstein had already formulated his theory before he saw Hilbert. That Hilbert saw a more elegant formulation is beside the point. He saw a more elegant formulation of QM, too (as per the story I quoted). All this goes to show that Hilbert was a great mathematician, but not a physicist.

  115. @LS:
    > Yes, but that’s not the point. Einstein had already formulated his theory before he saw Hilbert. That Hilbert saw a more elegant formulation is beside the point. He saw a more elegant formulation of QM, too (as per the story I quoted). All this goes to show that Hilbert was a great mathematician, but not a physicist.

    No argument on that point; in fact, that’s a good way to get this sub-thread back on track with the original comment that prompted it. :-)

    Just as an aside, though, there is some controversy among historians about exactly how much Einstein owed to Hilbert in reaching the final form of the field equation. This Wikipedia page contains some info (as well as info on disputes about how much of special relativity was really Einstein’s original contribution):

    http://en.wikipedia.org/wiki/Relativity_priority_dispute

    But as the original comment that I referred to noted, Hilbert himself always gave Einstein primary credit for the physics. He said that “every schoolboy in the streets of Gottingen knows more differential geometry than Einstein” (I haven’t been able to find a definite reference for where he said this, but I’ve seen it quoted often), but added that it was still Einstein who came up with the physics of GR and not the mathematicians.

  116. Causality is not defined in physics, because it’s not observable.
    We just have correlation and models. Causality is only observable in a model or in a mind, but not in the real world. In the real world, we can only see correlations.

  117. @esr:
    “In reality that’s not a result of closed or open, it’s where you end up when somebody else’s capital concentration is a critical factor of your production. Sometimes that’s the way it has to be, notably when your productivity requires capital equipment too expensive for an individual. Programming used to be like that, back when computers were expensive, but it isn’t now.”

    I suspect this is more a result of social capital (you need someone else with different skills to market & sell your product, or you need a group of developers with different skills to group together and develop a modern game) than financial capital in many cases. Sure, some geeks have gone out to start companies, but relatively few are Bill Gates or Michael Dell. A great example of this is Woz and Steve Jobs; they would not have gotten anywhere founding a major company without Mike Markkula to step in and provide guidance and credibility in the early days (partly because they would not have been handed the investment capital).

    Take a look at the old Steven Levy book “Hackers” for a look back at the days when one or two programmers could write and market a major game, or the slightly later days when a game developer was a free agent who got about 30% of the price of every copy of the game he wrote.

  118. One more sidenote about the relation between physics and mathematics. Sometimes physicists invent some new math that they use to formulate their models, only for it to later be turned into formal math theory (as with the Dirac delta “function” and the theory of distributions (generalized functions))… or not (as is the case with quantum field theory, or integration over trajectories, which still do not have an axiomatic mathematical formulation… as far as I know).

  119. “Much like a civil engineer (or architect) designing a house based on standard load-bearing specifications for dimensional lumber, etc., you can write software based on the compiler doing “the right thing” given compliant input. It is possible that the compiler is broken in as much as it is possible that a 2×12 has non-visible defects which make it unsuitable for holding up a floor. As I noted above – the key principle of engineering is that “shit happens” and you can’t design around every possible flaw (though you may be able to test your way out of it).”

    I couldn’t help but want to point out that the “standards” for load-bearing lumber had to be seriously revised recently, because the quality of the trees producing it had declined:

    http://www.nelma.org/news-and-events/industry-news/alsc-announces-decision-on-strength-value-reduction-for-southern/

    Similarly, our internet is degrading, not only with double and triple nat, but with ever-decreased nat timeouts (disabling things like long-running phone calls and ssh), upcoming port range restrictions on carrier grade nat, nearly every service requiring that you log in via the web in order to get to the internet, services refreshing dhcp every couple of minutes, dns and time service being blocked, most ports besides 80 and 443 being blocked, the inability to deploy new protocols like sctp, etc…

  120. @IGnatius T Foobar: “imminent death of the ‘net predicted!”

    Nobody involved in the bufferbloat effort is predicting global congestion collapse. Nobody is ruling out various forms of local/global congestion collapse, either, as it’s already been proven that bloat renders congestion avoidance ineffective on normal tcp streams…

    http://queue.acm.org/detail.cfm?id=2076798

    Although the moderator of this conversation tried really hard to get a quote to that effect…

    We’re all very careful to avoid predictions of doom, and have merely been trying to head off the problem at the pass… as data centers get higher speeds, and wireless covers the globe in various forms, and tcp gets pushed further and further out of its design range.

    My inspiration for the bufferbloat effort is the Y2K effort, where, once alerted, and with enough time to do a thorough job… we did our jobs so well that the mundanes wondered what all the fuss was about on Jan 1, 2000.

    http://www.taht.net/music/uncle_bills_helicopter.mp3

    I’d like, again, to bring spaceship earth in for a safe landing.

    In my lab, prior to last August, I could *easily* get tcp-induced congestion collapse on a wireless network.

    This was largely due to infinite retries in the wireless aggregation stack – a problem that has now been solved in multiple common devices – not bufferbloat, directly. I slept a lot easier once that bug was found, but the bloat is still bad, especially on wireless. Our internet has a few design assumptions in it that limit effective communication to roughly a lunar distance or two.

    Last week, I induced another form of collapse via a multicast broadcast storm – but, in part thanks to the new BQL and sfqred code in Linux 3.3, I hardly noticed until it got really out of hand.

    That said, there is still plenty left to do about bufferbloat and related problems (ipv6 deployment) if we are to continue using voice comms in particular over the internet.

    As one example: The FCC is considering obsoleting the old phone system, and I’d hate to be without that backup, particularly in the case of various forms of civil disasters, knowing how fragile the internet has become, in multiple ways.

  121. businessman makes off with the ill gotten gains

    In addition to Eric’s points above, it’s always fun to vilify “the middleman” for being a leech, but except for middlemen set up by government monopolies, middlemen got where they were because they added some value people were willing to pay for. Perhaps this value was in transporting goods (merchants) or in allowing people with extra capital to put it to use in promising ventures (finance). Until you’ve run at least one business, don’t underestimate the amount of execution required to develop and provide a product, make customers aware of its existence, convince them to buy, collect their payments, and all the while take care of the recordkeeping and everything else a business needs.

  122. @Garrett
    “Engineering is done given not just requirements but also assumptions. What assumptions you are allowed to make is based on your industry and accepted practice.”

    In my experience, the assumption that the user will enter a faultless script or program is not warranted.

    You might prove that a web-browser, command shell, or editor is “correct” and will perform exactly what is asked from it. And then it loads a script or plugin that hangs it. So, the house you built was rock solid according to all the specifications, but it was built on ice that might melt any day.

    @Garrett
    “As I noted above – the key principle of engineering is that “shit happens” and you can’t design around every possible flaw (though you may be able to test your way out of it).”

    Again, what use is “engineering” a building on an ice floor on the lake? The shuttle or plane must keep flying, even when the interpreter correctly runs a non-terminating program.

    So, you now are at the point that you not only have to vet the web-browser or editor, but also every script you want to run on it. And every set of input data used by these scripts. This really takes the “General” out of “General Computer”.

    @Garrett
    “Oh, please. You’re assuming that because you can’t prove that something halts under all possible conditions that it can’t be shown to halt under all conditions we care about.”

    You might be surprised how many times a program I used hung because I gave it an incorrect script. And the fault was in the script/data, not in the program. But there are more Turing Complete programs than just interpreters, command shells, and scriptable programs.

    @Garrett
    “No in most cases we care about we worry about resource utilization, not the category of problems. If you have to make 4 deliveries today in your fly-in-airmail service, calculating the shortest route is trivial in any time humans care about because the size of the problem we actually care about is small enough, even though the problem is, strictly speaking, a NP-complete traveling-salesman problem.”

    Any programmer will learn quickly that she should stay away from some algorithms. But we are not talking about your 4 deliveries, but about hardware drivers, schedulers, and memory management.

    @Garrett
    “However, in a lot of systems, the interrupt handling time doesn’t matter. Does it matter if your GPS takes an extra 2us to update the display.”

    Think network cards, routers, web-servers, disk drives, file systems, WiFi drivers, multi-player games, ATMs. There are people who would like to get these correct.

    @Garrett
    “Sure, it’s rated for maximum wind speeds, but you have to take into account both gusting and continuous winds.”

    Winds do not have to be processed, only ignored. Even I can write a program that ignores interrupts. But a program that correctly handles interrupts is something different.

    @Garrett
    “Only if you need something which provides provable operation (as opposed to bounded operations) in all circumstances, which is not the case.”

    No, you cannot get bounded operations! And while flying, I want the software to work under “all circumstances”. The same when trading over the internet. I know people who want their filesystem to always work and never get data corruption.

    What you are saying is that you can “engineer” programs when they are unimportant. But not when it really matters.

    @Garrett
    “> What is a proof of correctness: An automated procedure that takes a program and
    > halts with True or False.
    You want your wall clock to halt with a True or False?”

    When I ask whether a clock gives the correct time, I want “yes” or “no” as an answer, not 5.30PM.

    @Garrett
    “How much tolerance is needed?”

    My first statement was that you cannot give upper bounds on time and resource use for TC applications. And the failure probability of a TC device is a non-computable number.

    I see no reason to revise these statements.

    @Garrett
    “For example, where’s the proof of the latest Intel microprocessor?”

    Intel gets by with a lot of effort to keep their on-chip software working. They do not give tolerances on the software side nor on the processing part, only on the hardware specs.

    My suspicion is that people do not have the correctness of Intel’s processors in mind when they say “Software should be engineering”.

  123. I heard that there was WiFi jamming at the end of the Occupy Wall Street protests.
    Maybe we need to update RFC 1149 to support micro SIMs? I have spent several fruitful decades in the pursuit of the simple miracle of open source technology; in my mind it is the clearest statement of the goodness in the nature of man. The yin and yang of validity versus caprice spin in this very powerful and mysterious breeze.

    I gotta go. Thanks Eric.

    Flint

  124. Interesting findings about open source uptake in the corporate world.

    Interestingly, we also observed statistically significant differences on gross margins and (positive) profits between users and non-users of OSS. The causation here could work in both directions: companies that adopt OSS achieve higher profits through its lower cost, or well-run highly profitable companies tend to adopt OSS. In contrast we found that companies using an OSS system have significantly lower dynamic financial indicators (capital spending and sales growth) than those using proprietary systems; i.e. that OSS adoption thrives in stable environments. Finally, we also found that organizations with knowledge-intensive workers are apt to adopt OSS, but also that OSS is more likely to be adopted by large organizations with less productive employees. For instance, it’s more likely to find OSS on the workstation of a call center employee than on a security trader’s desk.

    As an OSS contributor and advocate I was startled by the study’s results. The paper’s findings came to me as an unwelcome surprise. Apparently, the main reason for adopting OSS is lower cost and higher operating efficiencies; OSS appears to be unwelcomed by highly-productive employees and in rapidly growing and volatile organizations. Arguments frequently put forward in favor of OSS regarding its flexibility and the retention of technological know-how were shattered through findings showing exactly the opposite. Organizations that need flexibility choose proprietary software, as do highly-paid employees who could supposedly most benefit by tinkering with OSS to make it fit their needs.

    Somehow, Eric, though this study’s results surprised its co-author, I don’t think they’ll surprise you. I think what’s happening is that the benefits of OSS only become obviously visible at scale, over long periods of time. For a shorter term solution that a small number of people need now, a proprietary solution that exists now is infinitely more valuable than an open source solution yet to be written or adapted.

    1. >Somehow, Eric, though this study’s results surprised its co-author, I don’t think they’ll surprise you. I think what’s happening is that the benefits of OSS only become obviously visible at scale, over long periods of time.

      That is correct. The up-front cost savings are readily visible but minor; the downstream effects in terms of reliability, reduced business risk, freedom from vendor lock-in, and absence of licensing-compliance costs are far more important but less visible ex ante. One effect of this is that net-present-value accounting will have the same tendency to undervalue these gains that it shows for other longer-term investments.

      Sigh. I pointed this out fifteen years ago. One of the things you learn from doing work that lots of people describe as “groundbreaking” is that those people still only listen to the parts they want to hear. Nobody paid attention when I warned that high-investor-multiple software startups were a thing of the past, either.

  125. Much of this discussion strikes me as surprisingly religious. Where else does the notion that something can’t be virtuous unless it requires suffering come from? (How else do you explain deliberate walls between idealism and efficiency, and the treatment of attempts to break them down as perversion?)

  126. Math truth is not different from physical truth. I think the simplest way to see this is to cast the statements neutrally. “The conclusions [Riemann algebra] follow from [Riemann axioms].” Riemann doesn’t contradict Euclid; they’re not competing descriptions of a single thing; they’re simply describing different things.

    Correspondence and predictive truth in fact collapse into each other.
    If only sense-data can be seen, then only sense-data exist. They ARE the ‘real world.’ Whether it is ‘out there’ or not becomes irrelevant.
    Problem is they can’t be the real world. That’s kind of the whole point of science. You can manipulate your sense-data, but it doesn’t make the target sense-data change. For example, when you see rain and step outside, you feel wet. You can’t stop yourself from getting wet by manipulating whether you see rain. You could go further into VR and manipulate the wetness sensation, but it won’t stop your wool from shrinking, your leather from warping, or your shoes from stinking. And so on.

    If you had to choose, would you pick feeling accomplished and being nothing or feeling nothing and being accomplished? Sick/healthy?

    Going the other way, the collapse occurs by noting that the ‘real’ world is a tool for manipulating sense-data.

  127. @Winter

    I’m going to avoid addressing each of your individual points. I don’t wish to be rude, but you seem to know just enough to get yourself confused. Stuff like Gödel’s theorems is of little interest outside pure mathematics. What matters is what can be done in practice. There is nothing to say an interrupt-driven program cannot be verified. Calculi for dealing with non-deterministic programs have been around for some time now. What’s difficult is figuring out how to apply them effectively.

    I also dispute that mathematics is needed for engineering. It’s not as though any other engineering discipline hinges completely on mathematics. It’s mostly rules of thumb and redundancy.

  128. @Roger Philips

    I don’t wish to be rude, but you seem to know just enough to get yourself confused.

    Do not worry, I do consider it very likely that I am confused.

    @Roger Philips

    What matters is what can be done in practice.

    Indeed, that was what I was trying to address: what does engineering need to make software verification matter in practice? And if it can be done, why do we not see, say, a fully verified micro-kernel with hard real-time tolerances and verified upper bounds on resource use, run time, and failure probability?

    People write embedded OSes all the time. Why not make one that is proven secure and conquer the embedded space? It is not that the DoD would not pay for it.

    @Roger Philips

    There is nothing to say an interrupt-driven program cannot be verified.

    Yes indeed. No contest to that. The rub is in the thing that is verified. You can verify that the logic of an interrupt-driven program will do what it is supposed to do when it handles an interrupt. But the problem is whether it will behave predictably under load. How to handle the timing issues in the verification? (a genuine question, I do not know)

    An interrupt-driven program has to handle interrupts at the moment they arrive (hard real-time) or at least be able to handle them quickly (soft real-time). Naively, you have to check, for every internal state, how any sequence of interrupts will affect it. That naive workload grows exponentially with the size of the program state.
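
    To make the blow-up concrete, here is a toy sketch in Python (entirely my own invention: the interrupt lines, the transition function, and the invariant are all made up), doing the naive thing of firing every possible interrupt sequence at a little state machine and checking a safety property after each step:

    from itertools import product

    def step(state, irq):
        """Toy transition: two interrupt lines each bump a counter; SERVICE drains both."""
        a, b = state
        if irq == "IRQ_A":
            return (a + 1, b)
        if irq == "IRQ_B":
            return (a, b + 1)
        if irq == "SERVICE":
            return (max(a - 2, 0), max(b - 2, 0))
        raise ValueError(irq)

    def invariant(state):
        # Hypothetical safety property: neither queue may hold more than 4 pending events.
        return max(state) <= 4

    IRQS = ["IRQ_A", "IRQ_B", "SERVICE"]

    def check_all_sequences(depth):
        """Run every interrupt sequence of the given length from the initial state;
        the number of sequences is len(IRQS) ** depth, i.e. exponential in depth."""
        violations = 0
        for seq in product(IRQS, repeat=depth):
            state = (0, 0)
            for irq in seq:
                state = step(state, irq)
                if not invariant(state):
                    violations += 1
                    break
        return len(IRQS) ** depth, violations

    for d in (4, 8, 10):
        explored, bad = check_all_sequences(d)
        print(f"depth {d}: {explored} sequences, {bad} violating prefixes")

    Even this toy model explodes (3^depth sequences), which is exactly the brute force that real verification tools try to avoid through abstraction and symbolic reasoning.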

    @Roger Philips

    Calculi for dealing with non-deterministic programs have been around for some time now. What’s difficult is figuring out how to apply them effectively.

    My point exactly. “Effectively” would mean you are able to set upper bounds on unwanted behavior (the shuttle must keep flying), the proverbial edge cases. You would know better than I how these calculi handle that. My impression has always been that engineers are very unhappy with non-deterministic behavior that has unbounded effects.

    @Roger Philips

    I also dispute that mathematics is needed for engineering. It’s not as though any other engineering discipline hinges completely on mathematics. It’s mostly rules of thumb and redundancy.

    Yes indeed. The only thing an engineer wants is to be able to set tolerances on components and to complete design checking efficiently (say, in polynomial time).

    If you exclude any type of scripting or command shells, limit applications to the computational power of stack automata without external memory, and exclude ill-behaved algorithms (e.g., in NP), you will most likely be able to set reasonable upper bounds on failure, resource use, and run time. But that takes a lot of the “General” out of “General Purpose Computer”. You effectively reduce the computer to a special-purpose appliance.

    It is unclear to me whether you can design the verification check in such a way that you exorcise the halting problem and can ensure that the checking completes in polynomial time. That is important, because in engineering you want to be able to check your designs efficiently. You would know better than I how that works.
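
    As a crude illustration of the kind of restriction I have in mind (a sketch of my own, not something anyone here has proposed): a checker that accepts only programs whose loops are bounded for-loops over range() and whose calls come from a fixed whitelist. Every accepted program terminates by construction, and the check itself is a single linear walk over the syntax tree:

    import ast

    ALLOWED_CALLS = {"range", "print", "len", "min", "max"}  # hypothetical whitelist

    def is_bounded(source: str) -> bool:
        """Accept only a restricted, always-terminating subset of Python."""
        tree = ast.parse(source)
        for node in ast.walk(tree):
            # No while-loops, no user-defined functions or lambdas (recursion vectors).
            if isinstance(node, (ast.While, ast.FunctionDef, ast.AsyncFunctionDef, ast.Lambda)):
                return False
            # Every for-loop must iterate over an explicit range(...).
            if isinstance(node, ast.For):
                it = node.iter
                if not (isinstance(it, ast.Call)
                        and isinstance(it.func, ast.Name)
                        and it.func.id == "range"):
                    return False
            # Only whitelisted built-ins may be called.
            if isinstance(node, ast.Call):
                if not (isinstance(node.func, ast.Name) and node.func.id in ALLOWED_CALLS):
                    return False
        return True

    print(is_bounded("total = 0\nfor i in range(10):\n    total += i\nprint(total)"))  # True
    print(is_bounded("while True:\n    pass"))                                         # False

    Of course this throws out almost everything interesting, which is the special-purpose-appliance trade-off again.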

    And obviously, we are talking not about individual applications, but about the complete software stack, from OS and drivers up to the UI and application programs.

    If this is all possible, why is it hidden from us? Why does it take so much more time (orders of magnitude?) to check a small microkernel than to write it?

  129. Just a small suggestion: Eric, you probably meant ‘Newcomen’, not ‘Lycoming’. Newcomen was the inventor of an early steam engine (initially hand-operated), preceding Watt’s design. But correct me if I’m wrong. Regards.

    ESR says: Correct, thinko fixed.

  130. > Indeed, that is what I was trying to address: what does engineering need to make software verification matter in practice?

    It already has; it’s just hard work, and it comes out slowly. There’s an automated line of the Paris Metro that runs on software developed using the B method (this was done years ago). Verified compilers have become fashionable lately, including at least one reasonable one for most of C.

    > And if it can be done, why do we not see, say, a fully verified microkernel with hard real-time tolerances and verified upper bounds on resource use, run time, and failure probability?

    Because those are all difficult things to prove properties about. But if you’re interested in the state of the art you might want to check out the seL4 project. I forget where they are up to, but they have a kernel written in C that they have “proven” free of memory safety errors. There are serious definitional problems here btw, so be wary when anyone uses the word “proof” in computer science.

    > Yes indeed. No contest there. The rub is in what exactly gets verified. You can verify that the logic of an interrupt-driven program will do what it is supposed to do when it handles an interrupt. But the problem is whether it will behave predictably under load. How do you handle the timing issues in the verification? (A genuine question; I do not know.)

    I haven’t read any work on this. But the first step is probably to model the timing properties of the computer. It doesn’t have to be precise, just conservative.
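
    For instance (a toy example of my own, not drawn from any real tool), the model can be as crude as giving every basic block a pessimistic cycle cost and taking the longest path through a loop-free control-flow graph:

    from functools import lru_cache

    # Hypothetical control-flow graph of an interrupt handler: block -> successors (acyclic).
    CFG = {
        "entry":     ["check_irq"],
        "check_irq": ["fast_path", "slow_path"],
        "fast_path": ["exit"],
        "slow_path": ["exit"],
        "exit":      [],
    }

    # Pessimistic per-block cycle costs, e.g. assuming every memory access misses the cache.
    WORST_CYCLES = {"entry": 10, "check_irq": 25, "fast_path": 40, "slow_path": 300, "exit": 5}

    @lru_cache(maxsize=None)
    def wcet(block):
        """Conservative bound: this block's worst cost plus the worst successor's bound."""
        succs = CFG[block]
        return WORST_CYCLES[block] + (max(wcet(s) for s in succs) if succs else 0)

    print("conservative handler bound:", wcet("entry"), "cycles")  # 10 + 25 + 300 + 5 = 340

    The 340-cycle answer is wildly pessimistic for the common fast path, but that is the point: a bound you can trust is worth more than an estimate you cannot.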

    > An interrupt-driven program has to handle interrupts at the moment they arrive (hard real-time) or at least be able to handle them quickly (soft real-time). Naively, you have to check, for every internal state, how any sequence of interrupts will affect it. That naive workload grows exponentially with the size of the program state.

    These kinds of proofs (involving side effects) are generally considered difficult. But it is an ongoing area of research. One thing to consider is that the current systems just aren’t designed for verification. Usually, these things are approached by abstraction. So you would probably handle the problem of interrupt-handling semantics as its own project. It’s common to work by construction, letting the proof guide your work (this can make proofs considerably easier).

    > My point exactly. “Effectively” would mean you are able to set upper bounds on unwanted behavior (the shuttle must keep flying), the proverbial edge cases. You would know better than I how these calculi handle that. My impression has always been that engineers are very unhappy with non-deterministic behavior that has unbounded effects.

    The point of verification is not to prove that the whole system is perfectly correct. In bridge building, they do not provide a complete proof that the structure will work; they isolate the things that worry the designers the most and try to get those right. With software, it seems to me that resource usage, unlike most other problems, is easier to deal with by leaving plenty of redundancy in resources.

    > Yes indeed. The only thing an engineer wants is to be able to set tolerances on components and to complete design checking efficiently (say, in polynomial time).

    > If you exclude any type of scripting or command shells, limit applications to the computational power of stack automata without external memory, and exclude ill-behaved algorithms (e.g., in NP), you will most likely be able to set reasonable upper bounds on failure, resource use, and run time. But that takes a lot of the “General” out of “General Purpose Computer”. You effectively reduce the computer to a special-purpose appliance.

    Applications generally are special-purpose. I’m not sure what kind of system you’re proposing to verify here. A shuttle with Python scripting? You could in principle verify the behaviour of Python programs, but you’d have to have access to the program ahead of deployment to verify it, or you’d need an automated proof system to check it at runtime. I don’t understand why you would need the latter.

    If you really need to bound the resources of a program at runtime, you’d build a virtual machine that adjudicates resource limits and verify that the machine works. This would cut your Python programs off if they used too many resources, though. But again, I don’t really see the problem you’re trying to solve here.
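
    A minimal sketch of what I mean (assuming a Unix host; the limits and the runaway workload are invented) is to run the untrusted code in a child process whose CPU time and address space are capped by the kernel:

    import multiprocessing
    import resource

    def _run_limited(target, cpu_seconds, mem_bytes):
        # Applied inside the child: hard, kernel-enforced limits, not cooperative checks.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
        target()

    def run_with_limits(target, cpu_seconds=2, mem_bytes=256 * 1024 * 1024):
        """Run `target` in a separate process under the given resource caps."""
        p = multiprocessing.Process(target=_run_limited, args=(target, cpu_seconds, mem_bytes))
        p.start()
        p.join()
        return p.exitcode  # non-zero (or negative) if the kernel cut the child off

    def runaway():  # hypothetical untrusted workload
        while True:
            pass

    if __name__ == "__main__":
        print("exit code:", run_with_limits(runaway, cpu_seconds=1))

    When the child exceeds its CPU cap the kernel kills it (SIGXCPU), so the parent sees a failure exit code instead of a wedged system; whether that counts as “adjudicating” or just “cutting off” is exactly the trade-off I mean.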

    > And obviously, we are talking not about individual applications, but about the complete software stack, from OS and drivers up to the UI and application programs.

    These would be verified individually. For example, you can prove that your C compiler preserves some properties of the input program (e.g. its semantics). That theorem can then be invoked in a higher level proof about the whole system.
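
    Schematically (my paraphrase of the usual shape of such results, not a quotation from any particular project), the compiler theorem one invokes looks like:

    \forall\, p\, p'.\;\; \mathrm{compile}(p) = \mathrm{OK}(p') \;\Longrightarrow\; \mathrm{Behaviours}(p') \subseteq \mathrm{Behaviours}(p)

    That is, if compilation succeeds, every observable behaviour of the compiled program is one the source program already allows, so properties proven over the source’s behaviours carry down to the binary.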

    > If this is all possible, why is it hidden from us? Why does it take so much more time (orders of magnitude?) to check a small microkernel than to write it?

    It’s not hidden, except by journal paywalls and the enormous investment it takes to absorb all the material. As to why it’s so hard, I do not know. It’s not clear to me that it is harder to write a correct microkernel than to prove one correct; I doubt any microkernel in existence is “correct”. Even the ones that have been verified aren’t really proven correct. They’ve just had some very specific properties proven. The term “proof of correctness” usually means “conformance to a functional specification” or something like that, which still leaves room for errors. So I suspect you’re holding a double standard here.

  131. @Roger Philips
    “But if you’re interested in the state of the art you might want to check out the seL4 project. I forget where they are up to, but they have a kernel written in C that they have “proven” free of memory safety errors.”

    Thanks for the tip. I will have a look.

    I think I was after a different type of “verification” than what is actually used/useful in the field. I also think I am less confused now ;-)

  132. I found this, put it in my Calibre list for reading:

    seL4: Formal Verification of an Operating-System Kernel
    Klein et al., Communications of the ACM, June 2010

    http://www.sigops.org/sosp/sosp09/papers/klein-sosp09.pdf

    …report on the formal, machine-checked verification of the seL4 microkernel from an abstract specification down to its C implementation. We assume correctness of compiler, assembly code, hardware, and boot code.

    seL4 is a third-generation microkernel of L4 provenance, comprising 8,700 lines of C and 600 lines of assembler. Its performance is comparable to other high-performance L4 kernels.

    We prove that the implementation always strictly follows our high-level abstract specification of kernel behaviour. This encompasses traditional design and implementation safety properties such as that the kernel will never crash, and it will never perform an unsafe operation. It also implies much more: we can predict precisely how the kernel will behave in every possible situation.

    This was a comment on:
    http://lambda-the-ultimate.org/node/3916

    Overall the paper is more of an experience report than an in-depth exploration of the kernel and its proofs, but there is some meat to be found. More information can be found at the seL4 website.

    1. >Your “Holding up the sky” reminds me of Housman’s “Epitaph on an Army of Mercenaries.”

      Might be more optimistic if we were getting paid…
