Risk, Verification, and the INTERCAL Reconstruction Massacree

This is the story of the INTERCAL Reconstruction Massacree, an essay in risk versus skepticism and verification in software development with a nod in the general direction of Arlo Guthrie.

About three hours ago as I began to write, I delivered on a promise to probably my most distinguished customer ever – Dr. Donald Knuth. Don (he asked me to call him that, honest!) had requested a bug fix in INTERCAL, which he plans to use as the subject of a chapter in his forthcoming book Selected Papers on Fun and Games. As of those three hours ago, Donald Knuth’s program is part of the INTERCAL compiler’s regression-test suite.

But I’m not actually here today to talk about Donald Knuth, I’m here to talk about risk versus skepticism and verification in software engineering – in five part harmony and full orchestration, using as a case study my recent experiences in (once again) calling INTERCAL forth from the realm of the restless dead.

(Feel free to imagine an acoustic guitar repeating a simple ragtime/blues tune in the background. For atmosphere.)

Those of you coming in late may not be aware that (1) INTERCAL is the longest-running and most convoluted joke in the history of programming language design, and (2) all modern implementations of this twisted, sanity-sucking horror are descended from one that I tossed off as a weekend hack in 1990 here in the town of Malvern, Pennsylvania (the manual describing the language goes back to 1972, but before my C-INTERCAL there hadn’t been a running implementation available in about a decade).

Since then, the attention I’ve given C-INTERCAL has been rather sporadic. Years have gone by without releases, which is less of a dereliction of duty than it might sound like considering that the entire known corpus of INTERCAL code ships with the compiler. INTERCAL attracts surreality the way most code attracts bitrot; after one longish maintenance hiatus (around the turn of the millennium, whilst I was off doing the Mr. Famous Guy thing on behalf of open source) I discovered that INTERCAL had nucleated an entire weird little subculture of esoteric-language designers around itself, among whom I had come to be regarded as sort of a patriarch in absentia….

Despite my neglect, every once in a while something like that would happen to remind me that I was responsible for this thing. Donald Knuth provided the most recent such occasion; so I gathered together my editors and debuggers and implements of destruction and dusted off the code, only to discover that it had been a full seven years since I’d last done so. (I can see the tagline now: “INTERCAL has ESR declared legally dead, film at 11.”)

A week of work later, I was even more nonplussed to discover that others had been doing serious work on the compiler while I wasn’t looking. Notably, there was one Alex Smith (aka ais523, hail Eris, all hail Discordia!), a doughty Englishman who’d been shipping a descendant of my 2003 code since 2006. With lots of new features, including a much more general optimizer based on a technique that could be described as a compiler compiler compiler. (That’s “compiler to the third meta”, for those of you in the cheap seats.)

I straightaway wrote Alex explaining the challenge from Knuth and suggesting we defork our projects. He agreed with gratifying enthusiasm, especially when I explained that what I actually wanted to do was reconstruct as much of the history of C-INTERCAL as possible at this late date, and bash it all unto a repo in a modern distributed version control system which he and I could then use to cooperate. Now, early in this essay I introduced by stealth one of the topics of my discourse on skepticism and verification, the regression-test suite (remember the regression-test suite?). This is another one, the DVCS. We’ll get back to the DVCS.

Reconstructing the history of C-INTERCAL turned out to be something of an epic in itself. 1990 was back in the Dark Ages as far as version control and release-management practices go; our tools were paleolithic and our procedures likewise. The earliest versions of C-INTERCAL were so old that even CVS wasn’t generally available yet (CVS 1.0 didn’t even ship until six months after C-INTERCAL 0.3, my first public release). SCCS had existed since the 1970s but was proprietary; the only game in town was RCS. Primitive, file-oriented RCS.

I was a very early adopter of version control; when I wrote Emacs’s VC mode in 1992 the idea of integrating version control into normal workflow that closely was way out in front of current practice. Today’s routine use of such tools wasn’t even a gleam in anyone’s eye then, if only because disks were orders of magnitude smaller and there was a lot of implied pressure to actually throw away old versions of stuff. So I only RCSed some of the files in the project at the time, and didn’t think much about that.

As a result, reconstructing C-INTERCAL’s history turned into about two weeks of work. A good deal of it was painstaking digital archeology, digging into obscure corners of the net for ancient release tarballs Alex and I didn’t have on hand any more. I ended up stitching together material from 18 different release tarballs, 11 unreleased snapshot tarballs, one release tarball I could reconstruct, one release tarball mined out of an obsolete Red Hat source RPM, two shar archives, a pax archive, five published patches, two zip files, a darcs archive, and my partial RCS history, and that’s before we got to the aerial photography. To perform the surgery needed to integrate this, I wrote a custom Python program assisted by two shellscripts, topping out at a hair over 1200 lines of code.
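To give a concrete flavor of the surgery, here is a minimal sketch of the basic move – not the actual 1200-line program, and the tarball names and dates are hypothetical placeholders – replaying a sequence of release tarballs, oldest first, as git commits carrying their original dates:

    #!/usr/bin/env python
    # Sketch: replay dated release tarballs as git commits.
    # Names and dates below are hypothetical; the real job also
    # wove in RCS history, patches, shars, and snapshots.
    import os, subprocess, tarfile

    RELEASES = [  # (tarball, release date, commit message)
        ("intercal-0.3.tar.gz", "1990-05-28T00:00:00", "C-INTERCAL 0.3"),
        ("intercal-0.4.tar.gz", "1990-11-15T00:00:00", "C-INTERCAL 0.4"),
    ]

    def run(*cmd, env=None):
        subprocess.check_call(cmd, env=env)

    run("git", "init", "reconstruction")
    os.chdir("reconstruction")

    for tarball, date, message in RELEASES:
        # Empty the tracked tree first, so files deleted between
        # releases disappear from the history too.
        run("git", "rm", "-r", "-f", "-q", "--ignore-unmatch", ".")
        with tarfile.open(os.path.join("..", tarball)) as tf:
            tf.extractall(".")  # assumes members unpack in place
        run("git", "add", "-A")
        env = dict(os.environ, GIT_AUTHOR_DATE=date,
                   GIT_COMMITTER_DATE=date)
        run("git", "commit", "-q", "-m", message, env=env)

The git mechanics are the easy part; the two weeks went into establishing the ordering and provenance of the evidence being replayed.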

You can get a look at the results by cloning the resulting git repo from git://gitorious.org/intercal/intercal.git. Now, friends, you may be wondering why I bothered to do all this rather than simply starting a repo with ais523’s latest snapshot and munging my week’s worth of changes into it, and all I’m going to say about that is that if the answer isn’t intuitively obvious to you, you have missed the point of INTERCAL and are probably not a hacker. A much more relevant question is why I’m writing about all this and what it has to do with risk versus skepticism and verification in software engineering. That’s a good question, and the answer is partly that I want you all to be thinking about how software-engineering practice has changed in the last twenty years, and in what direction it’s changed.

Software engineering is a huge exercise in attempting to control the risk inherent in writing programs for unforgiving, literal-minded computers with squishy fallible human brains. The strategies we’ve evolved to deal with this have three major themes: (1) defensive chunking, (2) systematic skepticism, and (3) automated verification.

I’m not going to go on about defensive chunking much in the rest of this talking blues, because most of the tactics that fit under that strategy aren’t controversial any more. It’s been nearly forty years since David Parnas schooled us all in software modularity as a way of limiting the amount of complexity that a programmer’s brain has to handle at one time; we’ve had generations, in the tempo of this field, to absorb that lesson.

But I am going to point out that it is highly unlikely we will ever have another archeological epic quite like C-INTERCAL’s. Because another form of defensive chunking we’ve all gotten used to in the last fifteen years is the kind provided by version-control systems. What they let us do is make modifications with the confidence that we can revert chunks of them to get back to a known-good state. And, as a result, hackers these days create version-control repositories for new projects almost as reflexively as they breathe. Project history tends not to get lost any more.

Distributed version control systems like git and hg and bzr help; they’re astonishingly fast and lightweight to use, lowering the overhead of using them to near nothing. And one effect of DVCSes that I’ve confronted in the last couple of days, as ais523 and I got the new C-INTERCAL repo and project off the ground, is to heighten the tension between development strategies that lean more on systematic skepticism and development strategies that lean more on automated verification.

I’m going to sneak up on the nature of that tension by talking a bit about DVCS workflows. Shortly after I created the C-INTERCAL repo on gitorious, ais523 and I had a misunderstanding. I emailed him about a feature I had just added, and he pointed out that it had a bug and he’d pushed a correction. I looked, and I didn’t see it in the repo, and I asked him, and here’s what he said:

I pushed it to a separate repository, <http://gitorious.org/~ais523/intercal/ais523-intercal> I thought the normal way to collaborate via git was for everyone to have a separate repository, and the changes to be merged into the main one after that. Should I try to push directly to the mainline?

Here’s what I said in reply:

Yes. git workflow is highly variable, and the style you describe is normal for larger projects. Not for small ones, though. I’ve worked in both styles (my large-project experience is on git itself) so I have a practical grasp on the tradeoffs.

For projects the size of C-INTERCAL (or my gpsd project, which has at most about half a dozen regular committers) the most convenient mode is still to have a single public repo that everybody pulls from and pushes to. Among other things, this workflow avoids putting a lot of junk nodes in the metadata history that are doing nothing but marking trivial merges.

This is not quite like regressing to svn :-), because you can still work offline, you’re not totally hosed if the site hosting the public repo crashes, and git is much, *much* better at history-sensitive merging.

Now, this may sound like a boring procedural point, but….remember the regression-test suite? Have a little patience and wait till the regression-test suite comes around on the guitar again and I promise I’ll have a nice big juicy disruptive idea for you right after it. Maybe one that even undermines some of my own previous theory.

Here’s what ais523 came back with, and my next two replies telescoped together:

> Ah; my previous DVCS experience has mostly been in small projects where
> we kept different repos because we didn’t really trust each other. It
> was rather common to cherry-pick and to ignore various commits until
> they could be reviewed, or even redone from scratch…

Interesting. The git group functions this way, but I’ve never seen it on
any of the small projects I contribute to. Makes me wonder about
cultural differences between your immediate peer group and mine.

I should note that on the gpsd project, one of the reasons the
single-public-repo works for us is that we have a better alternative
to mutual trust – an *extremely* effective regression-test suite. The
implicit assumption is that committers are running the regression
tests on every nontrivial commit. Why trust when you can verify? :-)

I guess I’m importing that philosophy to this project.

Hm. I think I should blog about this.

Why trust when you can verify, indeed? But I now think the more interesting question turns out to be: Why distrust when you can verify?

Keeping different repos because you don’t really trust each other, cherrypicking and having an elaborate patch-review process, being careful who gets actual commit privileges in what – this is what the Linux kernel gang does. It’s the accepted model for large open-source projects, and the hither end of a long line of development in software engineering strategy that says you cope with the fallibility of squishy human brains by applying systematic skepticism. In fact, if you’re smart you design your development workflow so it institutionalizes decentralized peer review and systematic skepticism.

Thirteen years ago I wrote The Cathedral and The Bazaar and published the generative theory of open-source development that had been implicit in hackers’ practice for decades. If anyone living has a claim to be the high priest of the cult of systematic skepticism in software development, that would be me. And yet, in this conversation about C-INTERCAL, as in several previous conversations I’ve had about my gpsd project since about 2006, I found myself rejecting much of the procedural apparatus of systematic skepticism as the open-source community has since elaborated it…in favor of a much simpler workflow centered on a regression-test suite.

Systematic skepticism has disadvantages, too. Time you spend playing the skeptic role is time you can’t spend designing or coding. Good practice of it imposes overhead at every level from maintaining multiple repositories to the social risk that an open-source project’s review-and-approval process may become as factional, vicious and petty-politicized as a high-school cafeteria. Can there be a better way?

The third major strategy in managing software-engineering risk is automated verification. This line of development got a bad reputation after early techniques for proving code correctness turned out not to scale past anything larger than toy programs. Fully automated verification of software has never been practical and there are good theoretical reasons (like, the proven undecidability of the Halting Problem) to suppose that it never will be.

Still. Computing power continues to decrease in cost as human programming time increases in cost; it’s inevitable that there has been a steady interest in test-centered development and in setting programs to catch other programs’ bugs. Conditional guarantees of the form “I can trust this software if I can trust its test suite” can have a lot of value if the test suite is dramatically simpler than the software.

Thirteen years ago I wrote that in the presence of a culture of decentralized peer review enabled by cheap communications, heavyweight traditional planning and management methods for software development start to look like pointless overhead. That has become conventional wisdom; but I think, perhaps, I see the next phase change emerging now. In the presence of sufficiently good automated verification, the heavyweight vetting, filtering, and review apparatus of open-source projects as we have known them also starts to look like pointless overhead.

There are important caveats, of course. A relatively promiscuous, throw-the-code-through-the-test-suite-and-see-if-it-jams style can work for gpsd and C-INTERCAL because both programs have relatively simple coupling to their environments. Building test jigs is easy and (even more to the point) building a test suite with good coverage of the program’s behavior space isn’t too difficult.
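For concreteness, here is a minimal sketch of such a jig – the file layout and the program name are hypothetical placeholders – which runs the program under test on each captured input and diffs the result against a checked-in expected output:

    # Sketch of a golden-file regression harness. The layout
    # (tests/*.in with matching tests/*.chk) and the program
    # under test ("./prog") are hypothetical placeholders.
    import glob, subprocess, sys

    failures = 0
    for input_file in sorted(glob.glob("tests/*.in")):
        golden = input_file[:-3] + ".chk"  # expected output
        with open(input_file, "rb") as f:
            result = subprocess.run(["./prog"], stdin=f,
                                    capture_output=True)
        with open(golden, "rb") as f:
            if result.stdout != f.read():
                print("FAIL:", input_file)
                failures += 1
    sys.exit(1 if failures else 0)

The whole trick is that the expected outputs live in the repository alongside the inputs, so the suite travels with the code.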

What of programs that don’t have those advantages? Even with respect to gpsd there are a few devices that have to be live-tested because their interactions with gpsd are too complex to be captured by a test jig. Operating-system kernels and anything else with real-time requirements are notoriously hard to wrap test harnesses around; even imagining all the timing problems you might want to test for is brutally hard to do. Programs with GUIs are also notoriously difficult to test in an automated way.

I think this objection actually turns into a prescription. We cannot and should not junk the habits of systematic skepticism. Open source is not going to become obsolete, any more than previous big wins (like, say, high-level languages) became obsolete when we figured out how to do open source methodically. But what we could be doing is figuring out how to design for testability and do test-centered development on a wider range of programs, with the goal (and the confident expectation) that doing so will reduce the overhead and friction costs of our open-source processes.

The aim should be to offload as much as possible of the work now done by human skepticism onto test logic so that our procedures can simplify even as our development tempo speeds up and our quality improves. The signature tools of the open-source world over the last fifteen years have been new kinds of collaboration engines – version control systems, web servers, forge sites. Perhaps the signature tools of the next fifteen years will be test engines – coverage analyzers, scriptable emulation boxes, unit-test frameworks, code-auditing tools, and descendants of these with capabilities we can barely imagine today.

But the way of the hacker is a posture of mind; mental habits are more important than tools. We can get into the habit of asking questions like “What’s the coverage percentage of your test suite?” as routinely as we now ask “Where’s your source-code repository?”

And friends, they may think it’s a movement. The INTERCAL Reconstruction Anti-Massacree movement. No, wait…this song’s not really about INTERCAL. It’s about how we can up our game – because the risk and challenges of software engineering never stand still. There’s an escalator of increasing scale that made the best practices of 1990 (the year C-INTERCAL was born) seem ridiculously patchy in 2010, and will no doubt make today’s best practices seem primitive in 2030. Our tools, our practices, and our mental habits can’t stand still either.

Comments

  1. I’m accustomed to ESR’s technical postings either summarising something I already knew – and doing so so much better than I had read it before that it amounts to new knowledge – or covering a topic that is new to me.

    This, on the other hand, feels like preaching the gospel of TDD – well written, sure (that goes without saying) but no more or less than others.

    So am I catching up, or is ESR slowing down?

  2. Richard, it’s difficult for me to tell if you are conflating automated testing with test-driven development. What I took away from this post was “Yes, automatic tests can be hard sometimes, we should be figuring out how to make them not hard”, which, in many ways, has very little to do with the current testing gospel.

  3. > Perhaps the signature tools of the next fifteen years will be test engines – coverage analyzers, scriptable emulation boxes, unit-test frameworks, code-auditing tools, and descendants of these with capabilities we can barely imagine today.

    I think that the next big things will be related to code normalization.

    We sort of understand data normalization. We know that any fact that is in two or more places is wrong in some of them some of the time absent heroic measures. We also know that copies can be used to increase performance. (We haven’t done much with those two facts in combination, but we at least know them and act on them separately.)

    The same is true of code. Look at your bugs. What fraction are instances of “I fixed X in some places but not others?” or something similar? What fraction of your time is spent managing dependencies?

    The mantra “Don’t Repeat Yourself” is a plea for code normalization, but how many of our tools actually help?

    Yes, much of object oriented programming implements some aspects of code normalization, and so does the use of libraries, and frameworks, but these merely allow code normalization, they don’t help implement it.

    And yes, this includes tests.

  4. > Fully automated verification of software has never been practical and there are good theoretical reasons (like, the proven undecidability of the Halting Problem) to suppose that it never will be.

    Turing-completeness is not a serious barrier to automated verification, since the vast majority of practically useful algorithms (including e.g. any algorithms which can be characterized by computational complexity class) can be easily proven to terminate.

    All type-safe programming languages perform fully automated verification of _some_ correctness properties. Most type systems in common use can only provide very limited safety guarantees, but others are expressive enough to encode complete formal specifications and verify that their implementation is correct. In the research community, there is lots of progress being made wrt. expressive type systems, and much of that work will soon make its way into production languages.

    It is true that weakly typed languages are becoming more popular at present, but this is largely because of the inconvenience of specifying types by hand (i.e. manifest as opposed to inferred typing) and the ease of writing weakly typed code. Future type systems will most likely support forms of type inference and mixing typed with untyped code, which should address both concerns.

  5. @esr – first of all, thank you for a wonderful write-up, as usual precisely putting into convincing wording what I’ve been trying to express for some time now.

    One minor quibble – the automated testing revolution is somewhat independent of open-source development model – it’s bigger than that.

    To be more specific, many closed source shops (at least financial houses – no idea about software companies) have people who, like me, not merely fully embrace this mode of thinking/mental habit you presented but actively push it into the culture (to literally quote my main 2 contributions to a 2-years-ago “Test Steering Committee” final plan for our fairly large company, “Start measuring code coverage percentage ASAP as an important metric” and “buy, build or steal automated GUI testing framework”).

  6. Along the lines of automated testing. While it does not have the coverage necessary to completely replace the open source overhead, Phoronix has an automated test set up for the kernel. They found a major regression in the 2.6.32-rc5 kernel and were able to tell when the regression was introduced. Even automated tests that check subsystems of large projects can be helpful.

  7. In my limited UI experience, I’ve found that I can usually design my programs based on a command-object pattern, and then just wire a menu and set of keyhandlers onto an event bus. (The best system I’ve ever found for doing this was Ruby with GLADE, but GLADE seems to have fallen out of use as of late.) It seems to me that software testing can easily be made much more complete if the program logic is internally driven in an automatable fashion; this is basically an extension of the well known UI/engine separation pattern. Do you have any offhand idea how widely this pattern is being applied in new or reworked projects?

  8. I would like to make 2 comments, coming from my current employment at a large company which derives most of its revenue from a single software product.

    First, automated testing is essential. Depending upon how you slice and dice them, we have something close to 1000 software engineers working on a single software product which is tightly-coupled by current standards. This means that if I make an innocuous change, it is quite possible for this to affect the product in ways which I don’t fully understand. Let’s face it, not all internal APIs are thoroughly documented with supported inputs, output and exception conditions. It is very easy for the product not only to become unusable, but unbuildable (many years ago there were week-long stretches where this was the case). To ensure that neither of these things happen, we have the following mandatory process for check-ins:
    – Peer review of any change. This is pretty simple – ask the people who know the area you’re working in if what you are proposing is reasonable. Pretty painless, though it does involve some waiting.
    – Full build of the source tree. You never know when a tiny little change is going to break something, or declare a duplicate global variable, or something else.
    – Automated regression test which ensures that the codeline is still in a useful state. That is, my check-in should not prevent anybody else from being able to work or develop on the software. This takes about 5 hours. Of course, this is a small subset of functionality, but ensures that most people using most functions will be able to keep working.

    The second point is that the least-frequently tested cases are the error-handling ones. The common use cases (that is, the functionality you’re actually trying to write) are easy to detect problems in. Those which don’t get used often (or at all) never get tested by deliberate use, and therefore only by test suites, which also tend not to focus on error handling.
    Consider a web server product. A regression suite for this would probably create a clean configuration in a virtual machine, install the product and set it up with some known layout. Then, multiple scripted clients would do things like ensure that data can be read, that GET and POST calls work. Bonus points if somebody decides to do permissions checking. Permissions checking is considered to be “error handling”. Not nearly.

    What people don’t think of doing is opening up a request socket and streaming the contents of /dev/urandom into the server. How about opening up a socket and then not sending anything? How about reading a file and having the permissions on that file change halfway through the read?

    This is why code coverage metrics are important. To handle the nasties on the big, wide-open Internet you need to have code to handle exceptional cases (which might lead to vulnerabilities). But unless you know that code has been tested, you might just be opening up a vulnerability or critical error via a bug in your error-handling code. An obvious NULL-pointer dereference in an error-handling routine just looks bad.
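    To make that measurable, a minimal sketch in Python using the coverage.py library (“run_tests” here is a hypothetical stand-in for whatever invokes the suite):

    import coverage

    # Sketch: find out which lines the tests actually execute.
    cov = coverage.Coverage()
    cov.start()
    from tests import run_tests   # hypothetical test module
    run_tests()
    cov.stop()
    cov.save()
    cov.report(show_missing=True)  # lists lines never executed,
                                   # e.g. unexercised error handlers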

  9. What happened to the 27 8 by 10 color glossy photos with the circles and the arrows and the paragraph on the back of each one explaining what it was to be used in evidence against us?

  10. > The implicit assumption is that committers are running the regression
    > tests on every nontrivial commit. Why trust when you can verify? :-)

    Right, but there is another fairly obvious step there, isn’t there? Why make this assumption at all? Why not have our tools take a soft commit and only allow a hard commit when a back-end process checks out the whole code with this new soft commit and runs the regression tests automatically?

    Those of you unfamiliar with Microsoft’s TFS product might not be aware that it provides the ability to do this, and also provides for many other automatic check-in control rules, such as mandated code-review sign-off, and clean runs of static analysis tools.

    FWIW, a lot of the problem with static analysis tools (by which I mean automatic bug detectors) is both that languages are way too complicated (C++, here’s looking at you), and secondly that in a demand for self-expression the culture demands more dimensions of programming freedom than are necessarily appropriate. For example, C would be a much easier language to analyze if we could agree on a subset of the pre-processor that was allowable. However, that is just not possible in that culture.

    Frankly, Microsoft has done some really interesting work in this field. C# is a much more tractable language to analyze, and they have a Talmudic set of rules defining what is considered a good program, including I18n, naming conventions, indentation, documentation and so forth. Their static analysis tool checks both for bugs, using whole-program analysis, and for deviations from these softer standards. (It also makes for much better editing tools.)

    There is a lot more that tools could do, but I think MS is doing some really interesting work in this field. And some of the MS halo are producing some amazing add ons to enhance this work too.

    I agree though that TDD is weakest at the front (GUI) and at the back (database.) And mobile doesn’t make that any easier.

    I’d also suggest that part of the biggest problem is a certain programming culture. To give one example, one of the most effective static analysis tools is the compiler itself. Yet, despite this, programmers regularly disable many of these checks. Why is the default warning level of a compiler not set at the highest level? Why isn’t the first instinct of every programmer to crank it up to the max, and deal with the false positives? False positives are much easier to fix than missed negatives. Especially so since false positives are found very early, and missed negatives are found very late.

  11. Jessica Boxer Says:
    > I agree though that TDD is weakest at the front (GUI)

    Just as an additional thought on this, the challenge with GUI testing is not input, it is output. It is very easy to describe a series of actions on the part of a user to a GUI. The big challenge is determining whether those clicks produced the correct result. The primary reason for that is there isn’t a good quality language to describe the output of a GUI in terms that are sufficiently precise to add verification value, but sufficiently imprecise as to not be brittle to microscopic changes. Defining a language like that is a task that would impact software development in a very positive way. Any takers?

  12. > Those of you unfamiliar with Microsoft’s TFS product might not be aware that it provides the ability to do this

    Yeah, unlike git, cvs or svn which don’t have commit hooks…

    Anyway, I was going to make the same point: version control commit hooks allow you to automate requirements on commits, for example running regression tests. So I consider that a good point :)

  13. TOK Says:
    > Yeah, unlike git, cvs or svn which don’t have commit hooks…

    Curious that you don’t understand the difference between hooks and work flow.

  14. > Curious that you don’t understand the difference between hooks and work flow.

    Do enlighten me, then, what we want to achieve here that hooks do not provide for? J. Random Coder tries to commit something to version control. VC runs acceptance tests and either takes the commit in or declines. I haven’t worked a lot with the older tools, but git at least definitely can either accept or decline a commit based on the commit hook.
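    For illustration, a git pre-commit hook is just an executable script; a minimal sketch in Python (“make check” is a placeholder for whatever runs the project’s tests):

    #!/usr/bin/env python
    # .git/hooks/pre-commit: refuse the commit unless the
    # regression suite passes. "make check" stands in for the
    # project's actual test command.
    import subprocess, sys

    if subprocess.call(["make", "check"]) != 0:
        sys.stderr.write("regression tests failed; commit refused\n")
        sys.exit(1)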

  15. TOK Says:
    > Do enlighten me, then,

    There is a big difference between what something does and what it may be able to do. I assume you are a programmer. My experience is that programmers often find it difficult to draw this distinction.

  16. (Yes, I’m a SW engineer for a living.)

    > There is a big difference between what something does and what it may be able to do.

    It is true that automated tests on commit are not a widespread habit (from what I’ve seen), but I know at least one fairly large company (which also does open source development) that has embraced commit testing using git and friends. The keyword is “continuous integration” — automated tests are a requirement for CI to work on a large scale software project.

    The point is that we don’t need no stinkin’ MS tools, we already have the capability in widely used open source tools. We just need to start using it more.

  17. > It is true that weakly typed languages are becoming more popular at present, but this is largely because of the inconvenience of specifying types by hand (i.e. manifest as opposed to inferred typing) and the ease of writing weakly typed code.

    The lack of ease wrt statically typed systems is intentional and symptomatic of a bigger issue.

    I once asked a static typing advocate why the parts of the code that I wasn’t working on had to be type-correct. He responded that type correctness was the goal. That’s wrong. Efficiently developing working programs is the goal. A type system that helps with development by humans is a good thing. A type system that impedes development by humans is a bad thing.

    Note that type systems are actually just weak representation checkers, and that’s not the kind of error that I make. In some contexts, adding apples to oranges makes sense while it’s a mistake in others. A type system can be persuaded to help check either constraint, but if both appear in the same program….

  18. TOK Says:
    > The point is that we don’t need no stinkin’ MS tools

    Microsoft makes great development tools, that is a plain fact. Only a fool or a demagogue refuses to learn from the successes of their enemy.

  19. Andy Freeman Says:
    > The lack of ease wrt statically typed systems is intentional and symptomatic of a bigger issue.

    FWIW, I have recently moved from working in strongly typed languages (C# and C++) into weak or duck typed languages (JavaScript and Python.) My experience has not been a positive one. Part of it might simply be that I haven’t developed the right mindset and habits, but, especially so in JavaScript, I find the fact that you don’t tell the compiler what is going on significantly reduces its ability to find mistakes.

    Let me give one simple example:

    if(object.userid == currentUserId) save(object.userId);

    // Bug, for some reason save never gets called!!!

    This is an extremely subtle bug that can be extremely hard to find, and is directly due to the loose type system (in this case the class system) of JavaScript: userid and userId are different properties, so the misspelled one silently evaluates to undefined and the comparison always fails.

    It is a curious counter trend I suppose. This thread is about the increasing use of tools to detect bugs in software, and strict typing has always been one of the main tools to do that. However, the trend in language design is away from that.

    There is a learning thing going on for me that started a few years ago. A guy I worked with asked me why const correctness was so important. I believed that it was, but I couldn’t really give him a good answer. However, my basic philosophy is the more you tell the compiler what you mean, the more likely you are to have it find your bugs for you. Writing virgin code is a lot easier than finding bugs, especially when you have to find them at 2am with the deadline clock ticking. We should not tar the entire explicit typing world with the brush of C++’s brain dead syntax.

    I’m not saying weak, or non explicit type systems are worse. I’m just saying that the transition has been difficult and bug prone for me. I see some advantages to newer type approaches, but I am not yet convinced the pros outweigh the cons.

    1. >I see some advantages to newer type approaches, but I am not yet convinced the pros outweigh the cons.

      Languages with type inference can combine the advantages of static typing with the flexibility of dynamic. I think we’ll see more of this.

  20. > Microsoft makes great development tools, that is a plain fact. Only a fool or a demagogue refuses to learn from the successes of their enemy.

    The thing is, nothing to learn in this case. Commit verification is old news. So is mandatory Acked-By: signoff, albeit perhaps not hardcoded in a tool, but part of a project’s workflow anyway. Something like that is also trivially easy to automate, when so desired.

  21. Dyspeptic Curmudgeon: The judge wasn’t gonna look at them anyhow, so they left them out. Also, Tums is very helpful.

    Guest: I have now used “normal” statically typed languages, dynamically typed languages, and type-inference languages quite a bit, and I still find that type-inference languages force you to think about types every moment, with the additional problem that existing compilers for ML, at least, don’t do a good job explaining what’s actually wrong, so while you know your program is broken, it’s hard to see what to do to fix it. With dynamic typing, I only need to think about types when the duck actually quacks. Typed Scheme (or Typed Racket, now) is supposed to be much better about this, but I haven’t actually used it.

    Jessica Boxer: Your immediate problem is that you are using ==, which is always a mistake in JavaScript. Read Douglas Crockford’s JavaScript: The Good Parts, and use === to test equality; JSLint can enforce this.

    For others, JavaScript === is true iff its two arguments have the same type and value; == attempts to type-cast its arguments by “complicated and unmemorable” rules if they have different types. These rules are actually intransitive, which makes == unfit to be an equivalence relation: "0" == 0 is true, as is 0 == "", yet "0" == "" is false.

  22. Just my 2 cents:

    People here seem to be conflating TDD and automatic testing. They’re not the same thing. TDD is a design and development methodology. With TDD, before you even write one line of code to implement a new feature, you write a tiny little test which tests the result of that new feature. This little test, more often than not, will not compile, of course. So you write *just enough* code to get it to compile. At this point the test will run – but it will, of course, fail (or it should – if it doesn’t, something is wrong). So you write *just enough* code to get it to run, committing as many sins as you want in order to do this. Seriously, anything you want – cut and pasted code, hardcoded and rigged values, etc. The goal is just to get the test to pass (or to “get the bar to green” as they say). Then, once your test passes, you “pay for your sins” and you refactor to your heart’s content (refactoring is the only coding you’re supposed to do outside the context of a test).

    True, the end result of TDD is a suite of automatic tests – a huge number of tiny, focused automatic tests. But just having a bunch of automatic tests does not mean you’re following TDD. Writing the tests *after* you finished coding for example, would not be TDD.

    It’s supposed to be a design methodology; when you write the tests first, often when you don’t have a really solid idea of what the whole thing is supposed to do, you tend to start at the bottom with the stuff you’re most familiar with and work your way up. The end result is (supposed to be) a layered, decoupled system – almost for free, because it’s just easier to write the tests that way. YMMV of course.

    It’s also supposed to let you refactor with confidence, since the army of tiny little tests that are the result of TDD are supposed to tell you when something is not quite working as expected. Again, YMMV.
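    A minimal sketch of that first step in Python (the feature and all names are invented for illustration):

    import unittest

    # Step 1: the test comes first. parse_roman doesn't exist yet,
    # so this fails immediately; that failure is the expected "red".
    from roman import parse_roman

    class TestParseRoman(unittest.TestCase):
        def test_single_numeral(self):
            self.assertEqual(parse_roman("X"), 10)

    if __name__ == "__main__":
        unittest.main()

    # Step 2: write *just enough* in roman.py to go green (even
    # "def parse_roman(s): return 10" is fair), then refactor.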

    Jessica Boxer Says:
    > I agree though that TDD is weakest at the front (GUI)

    > Just as an additional thought on this, the challenge with GUI testing is not input, it is output. It is very easy to describe a series of actions on the part of a user to a GUI. The big challenge is determining whether those clicks produced the correct result. The primary reason for that is there isn’t a good quality language to describe the output of a GUI in terms that are sufficiently precise to add verification value, but sufficiently imprecise as to not be brittle to microscopic changes. Defining a language like that is a task that would impact software development in a very positive way. Any takers?

    TDD is indeed difficult with GUIs. One thing that helps is the MVP pattern (I think Martin Fowler calls it the “Passive View”). You divide your GUI into a “View” and a “Presenter”. The View is as dumb as you can possibly make it, really just a bag of widgets from the point of view (heh) of the Presenter, where all the GUI logic resides. The Presenter listens for events from the widgets and, in response, does things like turn the visibility of this widget on or off, or makes that widget enabled or disabled. The View mostly just takes care of laying the widgets out.

    When you do this, the Presenter is amenable to fast unit testing. The View is not – but it’s so dumb you don’t care. Depending on how you coded things, testing to see if a button click had the right effect is often just a matter of testing whether a particular Panel had been made visible or not.

  23. @JessicaBoxer:

    > I agree though that TDD is weakest at the front (GUI) and at the back (database.) And mobile doesn’t make that any easier.

    There are plenty of tools for automated GUI regression testing. It’s a pity that most open source projects don’t use them. You’re right about mobile, though; AFAIK, there aren’t any GUI testing frameworks for mobile apps. OTOH, I think part of why this is so is that mobile GUIs are still in their infancy, and over time, I think this situation will change.

    > There is a big difference between what something does and what it may be able to do. I assume you are a programmer. My experience is that programmers often find it difficult to draw this distinction.

    This made me giggle. Yes, you’re right that VC commit hooks aren’t just for executing automated tests during commit and accepting or rejecting the commit based on the results of the test. However, open source projects tend to use them either for automated testing, or for notification purposes, or, more usually, both.

  24. @Desmond: Yes, MVP, or as I call it, MVC (Model/View/Controller) works by separating out the “View” (GUI) from the “presentation logic” (Controller) from the backend code (Model).

    It does help with modularity quite a bit, which indirectly helps with testing, of course. OTOH, integration testing is still needed where the three layers are stitched together, and that’s where GUI testing frameworks come in.

  25. OBTW, there is now an open-source SCCS clone called GNU CSSC; the acronym stands for “Compatibly Stupid Source Control”. My employer is actually using this on some very old Unix boxen that can’t be upgraded, probably because the version of SCCS that comes with them is not Y2K-compliant.

    Of course the version of sccs that still ships with Solaris 10 is also open source.

  26. @esr –
    > “Languages with type inference can combine the advantages of static typing with the flexibility of dynamic. I think we’ll see more of this.”

    Would you mind expanding on the second part of that statement?

    Aside from a brief mention in “C++0x”, and the fact that Perl 6 (“in full production by 2020”) is supposed to have it, do you actually see evidence that type inference will ever make it out of the lab-language domain where it appears to reside and into actual widespread production usage?

    BTW, I second the impression that John Cowan had for ML. When I had to use it, the implementation of type inference just wasn’t conducive to the “forget about type related workload” goal. Lucky for me, the usage was only for a class.

  27. > Languages with type inference can combine the advantages
    > of static typing with the flexibility of dynamic.
    > I think we’ll see more of this.

    To give a bit more detail, because it’s a really interesting feature of Python, you can:

    def foo( parm1, parm2 ):
        if type(parm1) != int:
            raise TypeError("must use foo( int, ... )")

    The parm1 is now statically typed (albeit not checked until runtime) but parm2 is still dynamically typed, a.k.a. duck typed, and the whole function foo() is now automatically partially templatized in the C++ sense of templatized, because you can call foo() with nearly any type for parm2 as long as the method names called from foo are defined inside parm2.__class__.

  28. > Languages with type inference can combine the advantages of static typing with the flexibility of dynamic. I think we’ll see more of this.

    I agree that we’ll see more but the only significant advantage is that complete programs will be shorter because they have fewer explicit type tokens.

    Type inference doesn’t help with incomplete programs, aka programs under significant development.

    For example, when I’m working through one side of a conditional, any time that I have to spend on the other half to satisfy a type checker is bad because it diverts my focus. With python, I put an “assert False” on the other half and I’m good to go until I actually need code on that other side. With a “type safe” language, I have to put enough fluff into that other half to ensure that the program as a whole is type safe and type inference doesn’t help at all here.

    Advocates of static typing don’t see this as a problem.

  29. > the additional problem that existing compilers for ML, at least, don’t do a good job explaining what’s actually wrong, so while you know your program is broken, it’s hard to see what to do to fix it.

    It’s not just ML. As Steve Yegge put it, “The classic example is SML, which is so fanatically typed that you’re guaranteed never to get a runtime exception, because you will never get your goddamn program to compile. There’s nothing more fun than having your compiler tell you: ‘Error: expected type (int, int, int) but got type (int, int, int)’.”

    Yes, I know that SML is ML, but all implementations of languages with comparably strict typing systems behave the same.

  30. @Morgan Greywolf: The MVP pattern, at least as championed by Google, is subtly different from “traditional” MVC in that the MVC view is split in half, and everything that has any sort of logic whatsoever goes in the presenter. This does facilitate both testing and automatic binding (such as the GWT UiBinder and libglade), but it enforces such a close binding between the workflow and the view layout that using a common controller base among significantly different views (such as a desktop and a mobile interface that do things differently rather than just having differing amounts of chrome) requires varying degrees of contortions.

    I’ve come to the point in a GWT project I’m building that I’m using a hybrid, with a presenter layer for certain interfaces on top of a more traditional controller implementation. This (theoretically) permits better testing, including browserless testing, while still giving me the flexibility to have varying interfaces. We’ll see how it turns out.

  31. Type inference errors should be better programmed to illustrate the solution tree in code.

    Untyped languages can not be referentially transparent, which is required for massive scale composability and parallelization (think multi-core and virtual mesh networking).

    Adaption (evolution) to dynamic change requires the maximum population of mutations per generation, i.e. independent actors.

    Copute the dots.

  32. esr Says:
    > Languages with type inference can combine the advantages
    > of static typing with the flexibility of dynamic.

    The problem with inference is that sometimes the wrong inference is made, and, because it is not explicit, it can often be extremely hard to find and debug such problems.[*]

    There is a move in statically typed languages toward explicit implicit typing, which is to say declaring that a type is to be inferred. This exists in C# already via the var declaration, and, as far as I remember, is slated for the next rev of C++ too. For example:

    using (var data = new DataContext())
    {
        var thisUser = (from user in data.Users
                        where user.userId == userId
                        select user).FirstOrDefault();
        doSomething(thisUser);
    }

    Here the var declarations say infer the type from the rhs, both in a simple expression and a more complex Linq type expression. However, you are making an explicit request to infer the type. Certainly in C# the types don’t even need to exist outside the declaration, being very much like the inferred types in more complex languages:

    var myItem = from user in data.Users select new { user.name, user.id };

    Which defines an anonymous type similar to struct { string name; int id;};

    Is explicit or implicit the appropriate default? Not sure. However, as I said earlier, typing virgin code is a lot easier than fixing subtle bugs.

    [*] I should say this is partly a tools issue. We have run time interactive debuggers, but no compile time interactive debuggers.

  33. jb> sometimes the wrong inference is made

    (repeat) The type inference algorithm (compiler) could display the tree of the solution space it traversed, so the decisions made by the algorithm at each node in the tree may be correlated in code.

    jb> no compile time interactive debuggers

    I assume that is what you mean.

    jb> declaring a type is to be inferred
    jb> explicit or implicit the appropriate default

    Implicit is inferred (the default) when not explicit.

  34. > The problem with inference is that sometimes the wrong inference is made, and, because it is not explicit, it can often be extremely hard to find and debug such problems.[*]

    For most dynamically typed languages, this isn’t much of an issue, however. In Python, for example, you can always either check the type and raise an exception (less “Pythonic”) or at least check to make sure that the object passed has the attributes you’re looking for (more “Pythonic”):

    def deleteUsers(Users):
        for user in Users:
            if not hasattr(user, "userid"):
                raise TypeError("We expect Users to be a list of User objects")
            else:
                deleteUser(user.userid)

    Of course, in this example, we hope that other object types don’t also have an attribute called “userid”. :)

    1. >For most dynamically typed languages, this isn’t much of an issue, however. In Python, for example, you can always either check the type and raise an exception (less “Pythonic”) or at least check to make sure that the object passed has the attributes you’re looking for (more “Pythonic”):

      In general, in the controversy between pro-strong-typists and pro-dynamic-typists, I come down on the dynamic-typists side. Not because static typing isn’t useful, but because I think it’s better to have your assertions about the semantics of your data be explicit (as in this Python example) rather than implicit via static typing. Among other things, this has good effects on the locality of your logic – the assertions tend to be near where the data gets used rather than being far away in an interface declaration somewhere.

      There may be machine-efficiency reasons to prefer static typing, but that’s a different discussion having nothing to do with the verifiability of code.

    There is a nice example of the advantages of inferred static (checked at compile time) typing in languages like Haskell and ML, namely an example of merge sorting, where an error in the program results in a nonsensical inferred type for the function. Mind you, the error message doesn’t tell you where the bug is.

    Strong Typing by M-J. Dominus

  36. esr> may be machine-efficiency reasons to prefer static typing

    If you prefer eventually utilizing only a fraction of the cores (deadlock complexity limit and/or more cores than independent multitasks). This will be a real issue within a decade, for non-servers.

    And if you prefer an upper limit on composability complexity, where that upper limit might ever be an issue. It would be analogous to the “pay me now or pay me worse later” argument you made against Torvalds:

    http://esr.ibiblio.org/?p=2426
    http://lwn.net/2000/0824/a/esr-sharing.php3?

    “Untyped languages can not be referentially transparent, which is required for massive scale composability and parallelization (think multi-core and virtual mesh networking).”

    esr> between pro-strong-typists and pro-dynamic-typists

    Best is strong typing for referentially transparent code (long-term code reuse), and dynamic typing where needed (prototyping, custom jobs with low economy-of-scale).

    esr> better to have your assertions about the semantics of your data be explicit

    They are local if the functions are small and typed. The non-type assertions still need to be made in typed code.

  37. Morgan Greywolf Says:
    > In Python, for example, you can always either check
    > the type and raise an exception (less “Pythonic”) or
    > at least check to make sure that the object passed
    > has the attributes you’re looking for (more “Pythonic”):

    I’m not sure I read your tone on this, so let me ask explicitly. Are you claiming that this is better? Is it your view that performing these checks at runtime is a better approach than performing them at compile time? If you are, I have to respectfully disagree.

    > Of course, in this example, we hope that other objects types don’t also have an attribute called “userid”. :)

    I find your levity troubling, because, inevitably, there will always be another object with an attribute called userid. Mr Murphy guarantees it, and I don’t like trusting my deadlines to Mr Murphy.

    esr wrote:
    >I think it’s better to have your assertions about the semantics of
    > your data be explicit (as in this Python example) rather than implicit
    > via static typing.

    That seems entirely backward to me. Inferred typing is much less explicit than explicit typing. I don’t really understand why you think it is the other way around. Morgan’s example checked the typing very late, when the program ran. I think that is a terrible solution, unless there is no other choice. Late bugs are much harder to fix than early bugs. In a sense, my opinion is that one of the core principles of language design should be “be brittle to bugs, as early as possible”.

    I still haven’t formed a strong opinion, but what I have seen so far is that inferred typing is concise, and subtle. I don’t think either is a positive attribute in software code. (Just to be clear, I don’t think loquacious is a positive attribute either.)

  38. esr> verifiability of code

    Referential transparency is provable. Verification of any Turing complete machine is never complete.

    jb> inferred typing is concise, and subtle

    Be explicit where it matters, be concise so you can read your algorithm. You (programmer) control the fine-grained slider.

  39. @JessicaBoxer

    > I’m not sure I read your tone on this, so let me ask explicitly. Are you claiming that this is better? Is it your view that performing these checks at runtime is a better approach than performing them at compile time? If you are, I have to respectfully disagree.

    Most dynamic languages (Python, LISP, Ruby, Lua, etc.) aren’t compiled, they’re interpreted. There is no “compile time.”

    But I wholeheartedly agree with what esr says here:

    > In general, in the controversy between pro-strong-typists and pro-dynamic-typists, I come down on the dynamic-typists side. Not because static typing isn’t useful, but because I think it’s better to have your assertions about the semantics of your data be explicit (as in this Python example) rather than implicit via static typing. Among other things, this has good effects on the locality of your logic – the assertions tend to be near where the data gets used rather than being far away in an interface declaration somewhere.

    Better to check your semantics right in your routine! Makes your code far more readable, far more re-usable by other coders, and it makes bugs very transparent.

    > I find your levity troubling, because, inevitably, there will always be another object with an attribute called userid. Mr Murphy guarantees it, and I don’t like trusting my deadlines to Mr Murphy.

    Lighten up. :) My example is a very poorly-coded one because I only had about 15-30 seconds to write it. I can certainly do better, as can any Python coder worth his salt.

  40. Morgan Greywolf Says:
    > Most dynamic languages (Python, LISP, Ruby, Lua, etc.)

    What are all these pyc files all over the place then? Whatever you call it, the phase of initial transformation of the program text into an executable model is the same thing. The point I am making is that the earlier you check things, and the earlier you find problems, the better off you are.

    > Better to check your semantics right in your routine!

    Why not do everything in your routine? Why have routines at all? Why not do everything in one place? Abstraction serves a very powerful purpose, whether it is abstraction by using functions or abstraction by using types. It is better to delegate the checks to one central place, and focus on the functionality, rather than have to constantly repeat the same checks everywhere and bury the real cheese in a clutter of minutiae.

    > Makes you code far more readable,

    No it doesn’t. It clutters it with irrelevant details. Whatever happened to “don’t repeat yourself”?

    > far more re-usable by other coders,

    Absolutely not. It binds particulars of your implementation into every phrase of your code. That is the very antithesis of reusability.

    > and it makes bugs very transparent.

    As clear as a lost wedding ring in a landfill.

    > Lighten up. :) My example is a very poorly-coded one

    That’s fine, I wasn’t being hard on your code per se as much as on the principles underlying it, namely that you should use explicit code to perform checks that can be done automatically by a program. I apologize for the word levity, it wasn’t meant to be as heavy as it came across.

    1. >That’s fine, I wasn’t being hard on your code per se as much as on the principles underlying it, namely that you should use explicit code to perform checks that can be done automatically by a program.

      And what checks would those be? I have three decades of experience with statically-typed languages, going back to the pre-1985 era before C stomped all its competitors into thin goo, and my opinion is that what static typing gives you isn’t actually assurance of correctness but rather the illusion of same, an illusion no less false for being beloved of many programmers. All those “automated” checks that you’re not (say) mixing integers with strings distract attention from the checks for higher-level semantic coherency that we should be performing but usually don’t because we’re so conditioned to think that part is done when the compiler is no longer issuing errors or warnings.
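
      To make that concern concrete, here is a hypothetical fragment. Every representation-level check is satisfied (plain numbers in, a plain number out), yet the invariants that actually matter live at a level the type system never sees, so we have to state them ourselves:

      def burn_impulse(duration_s, thrust_n):
          # Two floats in, one float out: any int-vs-string style
          # check passes. The semantic coherency (seconds, newtons,
          # plausible ranges) must be asserted explicitly.
          assert 0 < duration_s < 600, "implausible burn duration"
          assert thrust_n > 0, "thrust must be positive"
          return duration_s * thrust_n  # impulse, newton-seconds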

  41. # Desmond Says:
    > The View is not – but it’s so dumb you don’t care.

    To be honest, I think this is nuts. From my experience, the majority of application development that I have seen is concerned with the bag of widgets. Most of the complexity in the apps I have seen is in the presentation and acquisition of data from the user. Nearly all of this involves subtle little tweaks in the GUI. And a great many visual bugs arise from the ballooning number of micro-modalities in the GUI.

    Unit testing AJAX apps is a pain in the butt. And I’d guess there are a lot more people doing that than are doing apps that have all their heft behind the visuals.

  42. # Jocelyn Says:
    > Be explicit where it matters, be concise so you can
    > read your algorithm. You (programmer) control the fine-grained slider.

    I’m with Jocelyn on this one.

  43. esr Says:
    > All those “automated” checks that you’re not (say) mixing integers with strings

    It isn’t about integers and strings, it is about assuring an entity really does have a member variable called userId, not userid. And making sure that is true, even in code that is hard to make fire in your unit tests. It is about having objects that can automatically check that they are internally consistent, because they are built as complete types in one place, rather than softly distributed throughout your program.

    And it is about removing all the checks that Morgan proposes are necessary to ensure this, so that your code isn’t cluttered with them in many places rather than one, allowing you to see the higher-level semantic coherency without losing it in the flotsam and jetsam of bookkeeping tasks.

    To put it another way, it is about modularity and abstraction.

  44. Typing can pay enormous benefits in re-usability (composability), but not, for example, if your language has virtual inheritance, as it violates the Liskov Substitution Principle:

    http://okmij.org/ftp/Computation/Subtyping/

    The C* languages don’t get close to the possible benefits for various reasons.

    Also, typing can enforce a fine-grained referential transparency in a language yet to be created but previously mentioned.

  45. That’s fine, I wasn’t being hard on your code per se so much as the principles underlying it, namely that you should use explicit code to perform checks that can be done automatically by a program.

    And what checks would those be? I have three decades of experience with statically-typed languages, going back to the pre-1985 era before C stomped all its competitors into thin goo, and my opinion is that what static typing gives you isn’t actually assurance of correctness but rather the illusion of same, an illusion no less false for being beloved of many programmers. All those “automated” checks that you’re not (say) mixing integers with strings distract attention from the checks for higher-level semantic coherency that we should be performing but usually don’t because we’re so conditioned to think that part is done when the compiler is no longer issuing errors or warnings.

    I recommend reading Strong Typing by M-J. Dominus. In there you can find a comment about FORTRAN, Pascal and C… all of which are statically typed, but weakly typed. The type system detects some errors, produces spurious warnings, and sometimes has to be overridden (via casts and other such mechanisms).

    What about method signatures? Here you have explicit typing close to the code, typing which avoids boilerplate code checking for correctness of arguments.

    I agree, though, with the argument that while implicit typing removes the need for some boilerplate code, it might force one to think constantly about types, while dynamic typing allows you to not think about types except in the places where it matters.

    BTW traits (Smalltalk) / roles (Perl 6, Moose in Perl 5, BETA, …) are superior to duck typing ;-)

    1. >I recommend reading Strong Typing by M-J. Dominus. In there you can find a comment about FORTRAN, Pascal and C… all of which are statically typed, but weakly typed.

      I’ve read it. I agreed previously that strong typing with an inference engine is interesting, so it shouldn’t surprise you that I agree with his praise of the sort of inference that ML does.

      Still, I don’t think this is a good substitute for being explicit about higher-level invariants, entry conditions and exit conditions – the kind of thing the Eiffel crowd calls programming by contract. And I think the usual effect of static typing is to fool people into thinking they don’t need to do that sort of thing because static-typing constraints will magically make some sort of semantic type coherence happen. Except, of course, that it doesn’t.
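
      To illustrate the flavor of it, a toy Python sketch of entry and exit conditions (not Eiffel’s machinery, just the idea; all names invented):

      def contract(pre, post):
          # Minimal design-by-contract decorator: check an entry
          # condition on the arguments and an exit condition on the
          # result, right where the function is defined.
          def wrap(fn):
              def checked(*args):
                  assert pre(*args), "precondition violated"
                  result = fn(*args)
                  assert post(result), "postcondition violated"
                  return result
              return checked
          return wrap

      @contract(pre=lambda xs: len(xs) > 0, post=lambda r: r >= 0)
      def mean_abs(xs):
          return sum(abs(x) for x in xs) / len(xs)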

  46. esr> three decades of experience

    I’m 45 and began coding in 1983, or with digital logic in 1978. My IQ is roughly 130 – 140, but off the charts in the conceptual (generative essence) realm. I’ve worked on three large (~100,000-line), commercially successful (10,000 to a million users) software products, as co-author of one and sole author of another. What probably motivated me to learn programming is that I took my toys apart from infancy, and I was frustrated that the newest toys had black boxes with silver legs in them that couldn’t be disassembled physically (actually I tried, but didn’t have a powerful microscope).

    esr> don’t think this is a good substitute for being explicit about higher-level invariants, entry conditions and exit conditions

    Ideally, there shouldn’t be higher-level invariant conditional branches locally. My first comment on this page was, “Referential transparency.”. A function that does not depend on external state, rather only on its inputs, is infinitely composable (re-usable/robust). Thus it is better to use a type which is explicit about invariants than to check invariants numerous times, rendering every function broken. For example, instead of checking that an Int must be positive, use an Unsigned Int. If you have an invariant that is not a built-in type, then create a class for it and enforce the condition on instantiation, instead of on every propagation of the input of that type to a function.

    Throwing an exception breaks referential transparency in the outer functions thus composability.

    I had a clue that I might know more about this than you when I read recently at this blog that you had just broad-stroked Haskell for the first time, several months after I had studied it deeply (for the first time) and modeled a new language with the goal of making referential transparency a data type, to make composability a fine-grained tradeoff:

    http://copute.com/dev/docs/Copute/ref/Functional_Programming_Essence.html#Lazy_Evaluation
    http://copute.com/dev/docs/Copute/ref/function.html
    http://lists.motion-twin.com/pipermail/neko/2010-January/002717.html

    Eric if you have a chance, please check your email at esr@thy…, I have some very important emails for you there about future of the internet. It needs your attention please.

    1. >My first comment on this page was, “Referential transparency.”.

      Which I ignored at the time because it showed that you are (at best) seriously confused about these issues. Referential transparency, or the lack of it, is a property of individual functions or compositions of functions; it’s orthogonal to the type system of the language they’re expressed in. You can have referential transparency in a compiled statically typed language like Haskell or in a dynamically-typed interpreted one like any pure-functional subset of LISP.
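
      A trivial pair of functions makes the orthogonality concrete (a sketch; no type system is involved either way):

      def scaled(xs, k):
          # Referentially transparent: the result depends only on the
          # arguments, and the arguments are left untouched.
          return [k * x for x in xs]

      _counter = 0

      def next_id():
          # Not referentially transparent: reads and mutates external
          # state, so two identical calls give different answers.
          global _counter
          _counter += 1
          return _counter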

  47. The end-to-end principle of the internet is being destroyed. 99% of the permutations of connections (composability) are unavailable without writing a program like a virus (STUN, firewall tunnelling through HTTP RFC 3903). Security models are conflated unnecessarily, forcing a white-listed hell (blaming security violations erroneously on the same origin policy, eventually producing an internet where only server farms are white-listed through proxies and firewalls; enterprises have already blocked HTTPS because they can’t filter it, and they will block any port and protocol they can’t control):

    http://www.ietf.org/mail-archive/web/http-state/current/msg00939.html
    http://www.ietf.org/mail-archive/web/http-state/current/msg00938.html

    Server farms can’t adapt to dynamic change:

    http://video.google.com/videoplay?docid=6413987104216231786&q=blind+watchmaker&total=117&start=0&num=10&so=0&type=search&plindex=1#
    (watch from 34 – 36 mins)

    Larger populations of mutations anneal in fewer generations, i.e. adapt and evolve faster to dynamic change. See the complex airfoil in the video in the above link.

    The end-to-end internet is a scalable virtual mesh topology because even though the most efficient physical network is trunk+branches (bifurcating tree, see at 37 min in the video), the cache proxies eliminate the distance when it matters.

    I have more to say about this, if anyone is interested. Otherwise, I will refrain from disturbing. Thanks for reading.

  48. esr> you are (at best) seriously confused
    esr> You can have referential transparency in a compiled statically typed language like Haskell
    esr> or in a dynamically-typed interpreted one like any pure-functional subset of LISP.

    Er, you forget the other case, where the language allows the _OPTION_ for the function type to be referentially nontransparent ;)

    In that case, the type system enables the safe interaction between referentially transparent and non-referentially transparent code. In Haskell this is the state monad. In Copute, it is (will be) the ‘purity’ and ‘immu’ keywords.

    This is very important. I will explain more later.

  49. > the other case, where the language allows the _OPTION_ for the function type to be referentially *NON*transparent

    This goes directly to the theme of this essay– verification.

    Languages which are purely (no exceptions allowed) referentially transparent generate programs which have no state – they return outputs only. Most useful programs need a state machine. Referential NONtransparency everywhere creates unnecessary/obfuscated states.

    The ability to mix the two types of code is critical to minimizing the complexity of the state machine that we have to verify. Without static typing, the compiler cannot enforce (check) the explicit and/or implicit referential transparency assumptions coded.
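
    One concrete shape the mix can take, sketched in Python under invented names (nothing here is compiler-enforced, which is exactly the complaint):

    def reprice(order, discount):
        # Pure core: no I/O, no mutation of its inputs. The bulk of
        # the logic can live in functions like this one.
        return dict(order, total=order["total"] * (1 - discount))

    def handle_request(db, order_id, discount):
        # Thin stateful shell: all side effects are confined here, so
        # this is the only state machine left to verify.
        order = db.load(order_id)  # hypothetical storage API
        db.store(order_id, reprice(order, discount))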

  50. In Haskell, the State Monad binds order-of-execution into every function it gets passed into (it is input implicitly by type inference, which confuses some people, i.e. it is not a global variable):

    http://copute.com/dev/docs/Copute/ref/Functional_Programming_Essence.html#State_Monad

    In my proposed Copute, which allows mixing dynamic typing and static implicit+explicit type (a la HaXe), there is a much finer granularity by declaring functions ‘pure’ or impure (‘function’) and declaring non-static class members ‘mutable’ or ‘immu’.

    I discussed the theoretical advantages over Haskell:

    http://code.google.com/p/copute/issues/detail?id=8#c29

    So far there is very little peer review, but I am confident it is heading in the direction we MUST go, due to the inability of referentially NONtransparent code to scale to the concurrency and composability (aka re-usability) needs of multi-core, multi-mashups, and the neural-net quality of the E2E principle of the internet (which I described only briefly in a prior comment). I think the reason the internet is being held back from its potential boils down fundamentally to this conceptual generative problem and innovation. I make some points about why open source can’t displace Windoze in consumer market share without this key generative innovation (it has to do with consumers wanting to click a single button, so they need more diversity of applications, but contributing to open source requires too much investment because code bases are too monolithic).

    There is some start at implementation, but it is woefully incomplete:

    http://copute.com/dev/docs/Copute/ref/

  51. “Perhaps the signature tools of the next fifteen years will be test engines – coverage analyzers, scriptable emulation boxes, unit-test frameworks, code-auditing tools, and descendants of these with capabilities we can barely imagine today.”

    Which would demand that programming languages evolve so that code becomes more regular, more like data. LISP tried it and lost the popularity competition, quite simply because we think in natural languages and want to write stuff like Johnny.TakeOut(TheGarbage) and not (TheGarbage TakeOut Johnny). If this prediction is true, then once again it will be more important for the code to be computer-friendly in the parsing sense and perhaps a tad less important for it to be human-friendly.

    But perhaps it would be high time to stop thinking about code as something in a text file written with a certain syntax. After all code IS data, and our current paradigm of code is roughly equivalent to the “databases” stored in plain CSV files.

    Why can’t we have our code in as modern a way as our data – write it any way you please, this language syntax or that language syntax, or maybe through GUI tools etc., then parse the whole thing and store it parsed in some sort of database, from which you can generate different views: views in this language, that language, XML, LISP etc.?

  52. > It isn’t about integers and strings, it is about assuring an entity really does have a member variable called userId, not userid.

    What fraction of your errors are like that as opposed to “where did that user id come from?”

    Like I wrote above, the errors that concern me have to do with adding apples to oranges, or not, depending on context.

    Type systems check representations. Yes, you can overload representations with a lot of semantic consequence (separate types for your apple and orange inventory, for example), but doing so is very brittle and still tends to miss important things. (I forgot to mention that the april apples and the june apples should be handled separately.)

  53. > Yes, you can overload representations with a lot of semantic consequence

    If you don’t use run-time overloading (virtual inheritance), then what semantic consequence do you have in mind?

    Seems to me you get the type your function requires, as long as virtual inheritance is not used.

    > (separate types for your apple and orange inventory, for example),
    > but doing so is very brittle and still tends to miss important things.

    Can you give me an example? Seems to me it is more robust, as long as virtual inheritance is not used.

    > (I forgot to mention that the april apples and the june apples should be handled separately.)

    class AprilApple : Apple {}

    What is the problem?

  54. # Andy Freeman Says:
    > What fraction of your errors are like that as opposed to “where did that user id come from?”

    In JavaScript, I’d say about 40%. And the thing that concerns me is all the hidden errors that haven’t been discovered, because JavaScript interpreters do such a poor job of error checking. Of course, JavaScript sucks, so my comment is hardly fair. I am not experienced enough with Python to make an intelligent estimate right now. My experience with pure functional languages is even more limited, though my personal opinion is that life is one big series of side effects, and to say different is to live in denial.

    > Like I wrote above, the errors that concern me have to do with adding apples to oranges, or not, depending on context.

    Perhaps a specific example would serve to clarify your point. Apples and oranges aren’t really helping me understand.

    > Type systems check representations. Yes, you can overload
    > representations with a lot of semantic consequence (separate
    > types for your apple and orange inventory, for example),
    > but doing so is very brittle and still tends to miss important
    > things.

    I largely disagree with this. Of course, like everything, it depends on the skill of the programmer and designer. However, well written object oriented programs model their types on real world things. And real world things are much less subject to change than imaginary design artifacts.

    > (I forgot to mention that the april apples and the june apples should be handled separately.)

    Yes, type hierarchies and polymorphism do a great job modeling this type of variation in real-world objects, don’t you think? The thing about April apples and June apples is that they might be handled differently in some small respects, but they are broadly the same in most other respects.
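
    Sketched in Python for concreteness (invented classes; the shape of the hierarchy is the point):

    class Apple(object):
        def shelf_life_days(self):
            return 30

    class AprilApple(Apple):
        # Differs in one small respect; everything else is inherited.
        def shelf_life_days(self):
            return 21

    class JuneApple(Apple):
        def shelf_life_days(self):
            return 35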

  55. my opinion is that what static typing gives you isn’t actually assurance of correctness but rather the illusion of same, an illusion no less false for being beloved of many programmers

    I submit that you’re ignorant of dependently typed settings such as Coq. The type systems of languages like Haskell and Caml can be thought of as lite versions of the same idea; the type checker generates proofs of a limited set of invariants. These limited invariants can be leveraged to enforce properties past the static type system, for example using run-time assertions. If I have an A, I know P(A) because I asserted that in all of the exposed mechanisms for constructing values of that type.

    1. >I submit that you’re ignorant of dependently typed settings such as Coq.

      I’m aware of Coq – I’ve read two of the papers and looked at screen shots – but it makes my brain hurt. This is significant, since I’m an expert programmer who used to be a mathematician with a concentration in logic. The fact that Coq is inaccessible to me suggests that it’s just too hard to use to get any traction in the real world. I think Haskell is at or near the complexity limit of what’s actually deployable.

      Anyway, I think you’re changing the subject; while Coq is formally a type system, it’s far, far more powerful than what programmers (including me) normally think of under the rubric of static typing. Among other things, it lets you declare invariants in a much more general way. So rescuing “static typing” by invoking Coq as an example is a kind of sophistry that’s doomed to end in a pointless definitional wrangle. Not going there, thank you.

  56. jb> life is one big series of side effects, and to say different is to live in denial

    My job will be to convince you otherwise and make it as easy as programming in your favorite language, with a similar syntax, including the capability to interleave dynamic typing elegantly and at a fine granularity.

    I assert that most of the program is referentially transparent and only small portions need to deal with state. For example, most iterations are referentially transparent (each, some, fold, scan). As was the shift to OOP, the shift to referential transparency will take a little effort, then it will just feel natural and make complete sense (“why didn’t I do this before!”).

    Haskell is not for the masses. That is what I want to fix. I could use some help.

  57. esr: “I’m aware of Coq – I’ve read two of the papers and looked at screen shots – but it makes my brain hurt. ”

    Out of curiosity, could you expand on this? What parts of Coq were difficult to understand for you?

    The underlying type system (the Calculus of Inductive Constructions) _is_ very hard to understand and will probably be for the foreseeable future. But other parts of Coq are rapidly improving: for instance, new versions of Coq include a declarative proof mode (which largely does away with “tactics” and linear “proof scripts”, replacing them with human-readable proof sketches). The stdlib is also improving in usefulness and some of its inconsistencies are being resolved.

    Anyway, for the purposes of this discussion, the interesting point about Coq is not that it is formally a “type system” (you’re right, this is nitpicking), but that systems like Coq may have the potential to span the range (in user-friendliness and expressive power) between ML-like static typing and full software verification. (For instance, google “Hoare State Monad” for a fairly elegant extension of Coq to the verification of stateful, imperative programs). Any comments on this point of view?

    Jocelyn: you have some good insights, but your comments here are tedious and distracting. If you want people to pay attention to your ideas, write some blog posts or put some pages up on your website. Link to them in a *brief* comment if relevant to the discussion. Don’t write multiple lengthy comments which are only tangentially relevant.

    1. >The underlying type system (the Calculus of Inductive Constructions) _is_ very hard to understand

      Yes. Yes, it is. I started out with a huge advantage in background over most programmers, in that I’m quite comfortable with formal logic and axiomatic deduction. But I never felt like I got my head around CIC. Even just the notation was a significant barrier.

  58. > you have some good insights
    > Don’t write multiple lengthy comments

    Apologies. I didn’t realize I had so many already.

    > which are only tangentially relevant

    My prior reply was relevant to Jessica’s claim that side-effects are _necessarily_ prevalent, and to Eric’s point about Haskell being at the deployable complexity limit (of the typical programmer).

  59. My experience with pure functional languages is even more limited, though my personal opinion is that life is one big series of side effects, and to say different is to live in denial.

    Can you elaborate on exactly what you’re espousing here? Haskell permits side-effects and imperative notations.

  60. Can you elaborate on exactly what you’re espousing here? Haskell permits side-effects and imperative notations.

    Jessica Boxer may have a point here. There are some fairly fundamental reasons to think that ‘the Haskell way of thinking’ cannot deal in a truly natural way with many real-world concepts, such as “state machines”, “reactive systems”, “interactive processes”, “communication protocols”, and yes, real-world “behaviors” and “mutable states”. Generally speaking, the object-oriented paradigm does a better job of modeling these.

    The functional-programming / theory-of-PLs community is working on developing a full understanding of why this is the case and formalizing the features of “object-oriented” reasoning, but the problem is inherently hard, since it involves esoteric issues such as the categorical duality between F-algebras and F-coalgebras. For reference, see e.g. Bart Jacobs’ Introduction to Coalgebra: Towards a Mathematics of States and Observations.

  61. # Roger Phillips Says:
    > Can you elaborate on exactly what you’re espousing here?
    > Haskell permits side-effects and imperative notations.

    I’m espousing ignorance in a weak attempt at humor. But I’m sure that’s what you have come to expect from me, right Roger?

    FWIW, I wish I had the time to do justice to learning Haskell well. I think it would be a mind broadening experience.

  62. To set the record straight on Lisp:

    There are ten serious implementations of Common Lisp (four are proprietary). Exactly one of them is a bytecode-based interpreter. Similarly, there are about 14 serious implementations of Scheme (one is proprietary). Exactly two of them are bytecode-based interpreters, and they are both meant for embedding. Otherwise, interpreter-only implementations are toys.

    Lisp is not an “interpretive language”, though it traditionally comes with an interpreter for development, debugging, and the occasional interpretation job embedded at run time. Indeed, McCarthy’s original 1962 Lisp 1.5 implementation was already equipped with both a compiler and an interpreter.

    As for static and dynamic typing, the problem that the respective advocates have is that they are talking past one another, because they use the term “type” in different senses. In dynamic typing, a type is a class of (run-time) values; in static typing, it is a class of program expressions. Even the most strongly statically typed program ends up running on a typeless interpreter implemented in firmware, after all. (I would credit the LtU participant who pointed this out, but it’s a hard thing to google for.)

  63. Even the most strongly statically typed program ends up running on a typeless interpreter implemented in firmware, after all.

    Which means that some form of runtime checking is the only way to be sure that the data you got is the data you were expecting, regardless of language. If more people understood this, then perhaps we wouldn’t have so much buggy, insecure crapware out there. Personally, I’ve only recently begun to truly grok this idea, and I’ve been changing my coding habits to deal with it.

    I learned to code on statically-typed languages (Pascal and C, which are strongly and weakly-typed, respectively), and these days I prefer dynamically-typed languages (Python, Lua, Ruby) when I can use them, so I actually grok the differences between dynamically and statically typed languages pretty well, and I would expect most people on this forum would as well.

  64. There are some fairly fundamental reasons to think that ‘the Haskell way of thinking’ cannot deal in a truly natural way with many real-world concepts, such as “state machines”, “reactive systems”, “interactive processes”, “communication protocols”, and yes, real-world “behaviors” and “mutable states”.

    This argument would be a lot more compelling to me if I hadn’t written absolutely all of those things in a functional style with no great loss over OO (with which I am also very familiar, very, very familiar) and in some cases great gain. In Erlang, though.

    Haskell’s big problem right now is that it is perhaps the first truly new paradigm in practice to emerge in a long time. Yes, the idea is ancient, but the academic push to make Haskell not only pure but practical is relatively new. Consequently the practical body of knowledge of “how to do things” in the language has been undergoing very rapid change in the past couple of years, and it remains unclear what exactly will be the result of all this change. I’ve been trying to dabble in it, and I am in the weird position of having consistently experienced the following cycle: 1. Encounter a major problem, put my task away and give up. 2. Two weeks later, read about a new library or approach that solves the problem without being non-Haskell. I’ve been through this at least five times in the past several months.

    I’m still following it because there is still a lot of promise there, but I could comfortably recommend forgetting about it for now and checking back in another year or two. While Haskell-the-syntax is an older language, in a sense “Practical” Haskell was only born two years ago, tops, and it suffers from youth.

    1. >This argument would be a lot more compelling to me if I hadn’t written absolutely all of those things [mutable states, etc.] in a functional style with no great loss over OO (with which I am also very familiar, very, very familiar) and in some cases great gain.

      I agree, except that I got the lesson from LISP rather than Haskell.

  65. > Even the most strongly statically typed program ends up running on a typeless interpreter implemented in firmware

    Nonsense. Type constraints are enforced at compile time and provably reflected in the generated firmware machine code, e.g. the function that inputs an Apple will not be operating on an Orange. Whereas with run-time typing, it could be.

    With OOP classes, there is no limit on the richness of a type. If a function is referentially transparent (no side effects), then it is possible to prove it operates within the constraints of the input and output types. Thus it follows that there is no limit to the richness that can be automatically verified by the compiler.

  66. > > (separate types for your apple and orange inventory, for example),
    > > but doing so is very brittle and still tends to miss important things.

    > Can you give me an example? Seems to me it is more robust, as long as virtual inheritance is not used.

    I gave you an example – in some contexts it is appropriate to add apples to oranges but it isn’t in other contexts.

    How does one define the apple and orange types so static type checking does the correct thing in both contexts?

  67. > > Even the most strongly statically typed program ends up running on a typeless interpreter implemented in firmware

    > Nonsense. Type constraints are enforced at compile time and provably reflected in the generated firmware machine code,

    Provably reflected implies that there is some artifact of those type constraints in the generated firmware machine code.

    Since one of the supposed benefits of static type checking is that such checks are NOT in the generated code, that is a curious claim.

    And no, the fact that a given function is only called in certain circumstances is not a “reflection” of type constraints.

  68. > the fact that a given function is only called in certain circumstances is not a “reflection” of type constraints

    It is when the function is referentially transparent (which btw means it can’t change state in the instance of the input type, but it can return a modified copy), because it is provable that there can be no side-effects reflected into the HLL layer other than the outputs of the function, including effects originating from the firmware implementation layer. *

    > in some contexts it is appropriate to add apples to oranges but it isn’t in other contexts

    AppleCanAddFruit : Apple {}
    OrangeCanAddFruit : Orange {}
    addFruits( new AppleCanAddFruit( Apple ), new OrangeCanAddFruit( Orange ) )

    Or if you prefer something stricter:

    AppleCanAddOrange : Apple {}
    OrangeCanAddApple : Orange {}
    addAppleToOrange( new AppleCanAddOrange( Apple ), new OrangeCanAddApple( Orange ) )

    * Caveat that lazy evaluation results in garbage collection indeterminacy, which is an external side-effect. I proposed a solution to Simon, the creator of Haskell. This is not relevant to my point though.

  69. Jocelyn: This solution breaks the subtyping/Liskov substitutability principle. Making ApplesCanAddFruit a subtype of Apples (and similarly for Oranges) implies that ApplesCanAddFruit can be used in any context in which Apples can be used. But if the AddFruits method returns an ApplesCanAddFruit object, then an object which has had Oranges added to it could be used in a context where a pure Apples object is expected.

    The best solution, as far as I can see, is to make both Apples and Oranges subtypes of a Fruits type, by adding type coercion functions from Apples and Oranges to Fruits. Then the AddFruits method takes and returns Fruits, and since Apples and Oranges can always be treated as fruits, no subtyping/LSP violation occurs.

  70. # Andy Freeman Says:
    > I gave you an example – in some contexts it is appropriate to add apples to oranges but it isn’t in other contexts.

    Really? When was the last time you actually had some apple objects and added some orange objects? Object-oriented design is based on real-world examples, not theoretical strawmen. If your question is over a system for the management of actual fruit, and there are different ways of handling apples and oranges in the processing plant, then it is a reasonable example. If that is what you want to go with, can you offer examples of where you might want to add them or not add them depending on the context? If you can do that, then I can make a reasonable attempt to give you an answer as to what the appropriate approach is. If you just want to talk in artifacts of theory, then OOD doesn’t apply, and I don’t have an answer for you.

  71. John Cowan Says:
    > In dynamic typing, a type is a class of (run-time) values; in static typing, it is a class of program expressions.

    Right, and these facts have consequences in the capabilities of a programming language. Isn’t that what we are talking about?

  72. > Jocelyn: This solution breaks the subtyping/Liskov substitutability principle

    Incorrect, LSP is only a problem when using virtual inheritance (run-time typing), for which I provided a reference in a prior comment.

    > But if the AddFruits method returns an ApplesCanAddFruit object

    I didn’t declare the output type, but it would necessarily be Fruit or ApplesAndOranges.

    > ApplesCanAddFruit can be used in any context in which Apples can be used

    So can ApplesAndOranges:

    class ApplesAndOranges : Apple, Orange {}

    > an object which has had Oranges added to it could be used in a context where a pure Apples object is expected

    Not a problem if properly using data encapsulation (calling the class methods, which btw will be overridden by ApplesAndOranges).

    > The best solution, as far as I can see, is to make both Apples and Oranges subtypes of a Fruits type, by adding type coercion functions

    Not necessary to conflate like that, as explained above.

  73. Since one of the supposed benefits of static type checking is that such checks are NOT in the generated code, that is a curious claim.

    I think you and the post you are replying to are both conflating the advantages of weak typing vs. strong typing with the advantages of static typing vs. dynamic typing.

    An example is Pascal. Borland-compatible Pascal dialects[1] are statically typed, but they are also strongly typed, so they do perform such checks at runtime.

    With weakly-typed languages like C, no such checking is performed.

    [1] Borland Pascal, Turbo Pascal, Delphi’s Object Pascal, and Free Pascal

  74. > With weakly-typed languages like C, no such checking is performed

    I think you mean checks that are implied by the type, e.g. checking for buffer overruns. Thus I think strong typing is synonymous with static typing, the distinction only applying to languages that were only partially typed (e.g. C where you can access memory directly).

  75. “I have three decades of experience with statically-typed languages, going back to the pre-1985 era before C stomped all its competitors”

    But these were the old kind of type systems, of strings and integers, weren’t they? And not these modern types like “gimme any animal that happens to implement the IAvian and ICarnivore interfaces”? These checks operate at a much higher level.

    Of course, the problem with this approach is that single inheritance or single interface implementation is not enough; we want multiples of at least one of these. Then the next logical step is that when specifying what interfaces a parameter should implement, a simple AND relationship between them is insufficient; we want OR and NOT, and at that point we are approaching the “yo dawg, I’ve put a programming language into your programming language” point, which means the direction might not be a good one.

    1. >But these were the old kind of type systems, of strings and integers, weren’t they?

      See my response to Roger Phillips. If you say “static typing” to a programmer (even one like me, with a strong background in formal logic and programming-language design), what you’ll evoke is don’t-mix-ints-and-strings, not a really advanced type system that nibbles at the edges of the Curry-Howard isomorphism. It’s a kind of sophistry to defend “static typing” by pointing at Haskell or Coq or ML, because what it means in the real world is (at best) the kind of tsuris Java inflicts.

      So, my position unpacks to: dynamic typing and explicit contracts are better than the “static typing” you can now get in any production language, but the combination of really strong typing with type inference might be better than that. Someday. Haskell is, if nothing else, interesting food for thought on this score.

  76. Incorrect, LSP is only a problem when using virtual inheritance (run-time typing), for which I provided a reference in a prior comment.

    AFAICT, this is not correct – by definition, a subtype should be substitutable for the supertype. In the non-virtual case, the fact that a subtype is being substituted for the supertype can be detected at compile time, but this shouldn’t make any difference from a semantic POV.

    Not a problem if properly using data encapsulation (calling the class methods, which btw will be overridden by ApplesAndOranges).

    This is handwaving. My interpretation of the above is that you are proposing to make Fruits (ApplesAndOranges) a subtype of Apples and Oranges by throwing away all the useful invariants of the supertypes. (For instance, suppose that ApplesAndOranges is a class of objects which contain a number of apples and oranges. Should the ApplesAndOranges::length() method return different results from Apples::length() or Oranges::length() ? Should there be separate methods for length(), numApples() and numOranges()? Similar considerations apply to iterate(), etc.)

    The way I see it, your solution basically flattens the type hierarchy in return for a dubious documentation benefit (labeling functions which work with the “apples” and “oranges” from a Fruits object), but with no useful safety guarantees.

  77. Thus I think strong typing is synonymous with static typing, the distinction only applying to languages that were only partially typed (e.g. C where you can access memory directly).

    No. C is a weakly-typed, statically-typed language. In C, types can be explicitly or implicitly cast to another type. Pascal, OTOH, supports relatively few implicit type casts; you must specifically convert a value from one type to the other. Also, pointers can be explicitly cast in C, while in Pascal they cannot be. (In the preceding paragraph, you could “s/Pascal/Java/” and it would still be true, BTW.)

    To understand the difference, consider the following two simple examples:

    Pascal example:

    program test;

    var
      i, n : integer;
      f, r : real;

    begin
      i := 10;
      f := 20.0;
      r := f * i;
      writeln(r);
      r := i;
      writeln(r);
      n := i * f;
      writeln(n);
    end.

    C example:

    #include <stdio.h>

    int main(void) {
        float f;
        float r;
        int i;
        int n;

        i = 10;
        f = 20.0;
        r = f * i;
        printf("%2f\n", r);
        r = i;
        printf("%2f\n", r);
        n = i * f;
        printf("%i\n", n);
        return 0;
    }

    The main difference between the two is that the Pascal compiler (I used Free Pascal) will fail compilation at the third-to-last line of the first example, while gcc will happily compile the second example. Note that the Pascal compiler has no trouble at all with ‘r:=f*i;’ because integer values cast to reals without any loss of information.

    The C compiler, on the other hand, will perform the calculation specified by the expression ‘i*f’ as a float and then implicitly cast the result to an integer value when assigning it to n.

    That’s at least part of what I mean by “strongly-” vs. “weakly-” typed.

  78. “No. C is a weakly-typed, statically-typed language. In C, types can be explicitly or implicitly cast to another type.”

    I prefer to use ‘weakly typed’ for languages in which “holes” in the type system (such as type coercions and explicit type conversions) meaningfully impact type safety guarantees. After all, type-conversion functions can be written in any language, and some type-safe languages such as Coq support implicit type coercions. It would be unhelpful to describe such languages as “weakly typed”.

  79. > inheritance…multiple…a simple AND relationship between them is insufficient, we want OR and NOT

    I had proposed an ordered, pattern-matching (anonymous interface wildcards) solution:
    http://copute.com/dev/docs/Copute/ref/class.html#Wildcard_Interface
    Could be further generalized with regex and/or any Turing-INcomplete construct.

    > defend “static typing” by pointing at Haskell…or ML…means in the real world is (at best) the kind of tsuris Java inflicts
    > combination of really strong typing with type inference might be better than that. Someday

    Agreed. My discussion of the ‘const’ keyword from C++ is especially relevant, as it contrasts the tsuris with the liberation of a correct model:
    http://code.google.com/p/copute/issues/detail?id=8#c31

    IMHO, the problem Haskell has is that its type system and syntax are too different. And the Monad model for mixing state is aloof and confusing. What the proposed Copute does is build off HaXe’s Javascript-like, strongly-typed syntax, improving the rough edges and injecting all of what is important about Haskell at an improved granularity, making it more intuitive for existing programmers. My plan was initially to compile to HaXe and leverage the HaXe compiler (because I don’t think I could accomplish more than that by myself in one step, and HaXe targets several popular languages and VMs).

  80. guest wrote:
    > subtype should be substitutable for the supertype, In the non-virtual case, the fact that a subtype is being substituted
    > for the supertype can be detected at compile time, but this shouldn’t make any difference from a semantic POV

    So we agree that the Liskov Substitution Principle says a property of an instance of a subtype should be substitutable for the corresponding property of the supertype. You are arguing that I presented a violation. First, Wikipedia gives a misleading violation example:
    http://en.wikipedia.org/wiki/Liskov_substitution_principle#A_typical_violation
    That example violation is using virtual inheritance, because without virtual inheritance, calling setWidthHeight() on a Rectangle interface for an instance of Square actually calls Rectangle::setWidthHeight(), not Square::setWidthHeight(). Thus, we can conclude that without virtual inheritance it is impossible to make Square a subtype of Rectangle, which I would argue is how it should be. Correct models encourage correct semantics (contrasted with an incorrect model, see ‘const’ in my prior comment), eliminating the tsuris domino effect that Eric (and I, at least) love almost more than chewing sand.
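
    For concreteness, here is that Wikipedia-style violation rendered in Python, where every method call is dynamically dispatched, i.e. the “virtual” case I am describing (a toy sketch):

    class Rectangle(object):
        def __init__(self, w, h):
            self.w, self.h = w, h
        def set_size(self, w, h):
            self.w, self.h = w, h

    class Square(Rectangle):
        def set_size(self, w, h):
            # Preserving the Square invariant breaks the caller's
            # expectation that width and height vary independently.
            self.w = self.h = max(w, h)

    def stretch(r):
        r.set_size(4, 2)
        assert (r.w, r.h) == (4, 2)  # holds for Rectangle, fails for Square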

    > Should the ApplesAndOranges::length() method return different results from Apples::length() or Oranges::length()

    Depends on whether your function inputs an Apple or an Orange. In the example I provided, ApplesAndOranges inherited from both (i.e. their collections were never conflated); it was not a collection of both mixed together. Apple was semantically Apples, and Orange semantically Oranges (each was already a collection).

    > your solution basically flattens the type hierarchy in return for a dubious documentation benefit

    ApplesAndOranges may contain both Apples and Oranges. The ApplesAndOranges interface may add methods and properties. How is that only a documentation benefit?

  81. > AppleCanAddFruit : Apple {}
    > OrangeCanAddFruit : Orange {}

    In other words, you’re using both a type and instance explosion. Reminder – defining types is overhead.

    > addFruits( new AppleCanAddFruit( Apple ), new OrangeCanAddFruit( Orange ) )

    You were complaining about the overhead of dynamic checks but think nothing of creating instances that serve no purpose other than to keep the type system happy. Note that said instance creation may be very expensive.

  82. esr Says:
    > dynamic typing and explicit contracts

    It isn’t clear to me why exactly that is much better than the static typed equivalent, which is to say interface programming. There is a trade off for sure: multiply the number of interfaces or contract definitions to minimize the requirements at each point in the code, or reduce the number of interfaces or contract definitions but get less minimalistic contract requirements.

    But it isn’t obvious to me that the trade off is better with or without dynamic typing. I suppose you could define the contracts actually at the point of use, but that seems to me to clutter the code with things that should be packaged and abstracted. But maybe that is what you are thinking.

    I might add that I think most of the languages I have used do a very poor job of allowing the programmer to specify expectations and assumptions outside of the type system. Assert works, but is not widely used. Heck, I had to write my own set of functions in JavaScript to provide that functionality. Which is crazy, because I never met a language that needs it more than JavaScript. I had heard, as you mentioned, that Eiffel is an exception, but have never really looked at it personally.

    1. >It isn’t clear to me why exactly that is much better than the static typed equivalent, which is to say interface programming.

      Part of the reason is an accidental cohesion between dynamic typing and having a rich type ontology, including variable-extent types and first-class containers like lists and hashes. Writing entry/exit contracts is a huge pain in a fixed-extent language with a poor type ontology (i.e., C); it’s much easier in a language like Python.

      There’s also a lot to be said for languages like Python in which there’s no such thing as an interface declaration. This means there’s a single point of truth (SPOT) about what the class does, which is the class body. In Java or C++ there’s a whole class of bugs with interfaces falling out of sync with implementations that are just pointless struggles with the language itself. Not all languages with the SPOT property are dynamically typed, but all production languages with that property are.

      A significant thing about the dynamic- vs. static-typing holy war is that what disputants are often most attached to is not actually a property of the type system at all but rather of a feature that fellow-travels with it, like SPOTness or variable-extent types with garbage collection.
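
      To show how cheap an entry contract gets with first-class containers, a one-off sketch (invented names):

      def merge_prefs(defaults, overrides):
          # The entire entry contract is one line, because dicts are
          # first-class values we can talk about directly.
          assert isinstance(defaults, dict) and isinstance(overrides, dict)
          merged = dict(defaults)
          merged.update(overrides)
          return merged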

  83. > Really? When was the last time you actually had some apple objects and added some orange objects? Object oriented design is based on real world examples, no theoretical strawmen.

    In my “real world”, items of different types, apples and oranges if you will, can be combined in some circumstances but not others. Is that “really” not true of your “real world”?

    For example, if I’m making a fruit salad, both my apple and orange inventories are relevant. If I’m making an apple pie, my orange inventory isn’t as relevant (except for its effect on my storage and bank account).

    > If that is what you want to go with, can you offer examples of where you might want to add them or not add them depending on the context?

    If the type system contortions depend on such details, then my claim about brittleness is true, because the circumstances will change over time.

  84. >> AppleCanAddFruit : Apple {}
    >> OrangeCanAddFruit : Orange {}
    >
    af> In other words, you’re using both a type and instance explosion. Reminder – defining types is overhead.

    jb> multiply the number of interfaces or contract definitions to minimize the requirements at each point in the code

    It is up to the programmer to choose what is optimal in each case, as it depends on what is being abstracted and how many times that interface will be reused. Copute proposes to allow mixing of static, dynamic, explicit, and implicit (inference) typing, as well as fine-grained declaration of referential transparency invariants, all on top of a Java/Javascript-like familiar syntax. In short, nirvana in theory ;) I am an anarchist (perhaps more so than Eric; I understand the guaranteed mathematical failure of futures contracts or surety). I don’t want to force you to do anything; I want to give you tools to mix different combinations and compete with those that prefer other gradients (all within the same language).

    >> addFruits( new AppleCanAddFruit( Apple ), new OrangeCanAddFruit( Orange ) )
    >
    af> You were complaining about the overhead of dynamic checks but think nothing of creating instances that serve
    af> no purpose other than to keep the type system happy. Note that said instance creation may be very expensive.

    That is not conceptually more hassle than addFruits() inputting dynamic types and necessarily asserting they are Apple and Orange, except the benefit goes to the static typing done at _compile time_. And if there will be many instances of re-use of new AppleCanAddFruit( Apple ), then we can do it once, versus in every function.

    Actually the above example was unnecessary, because addFruits() should input Apple and Orange and then return them in ApplesAndOranges. There is no need to place a restriction on the input for the case of merging collection types into one interface. Nevertheless the discussion above is relevant to our overall discussion.

    jb> it isn’t obvious to me that the trade off is better with or without dynamic typing

    The tradeoff that matters for provability (automated compile-time verification) is referential transparency. The ability to mix stateful and stateless code at a fine granularity is IMO what really matters, but those compile-time contracts cannot be checked without static typing (explicit or implicit is okay). Except in minimalist pure-functional dynamically typed languages such as LISP, dynamic typing apparently degenerates to stateful (unprovable) code everywhere:
    http://code.google.com/p/copute/issues/detail?id=8#c24

  85. In my “real world”, items of different types, apples and oranges if you will, can be combined in some circumstances but not others. Is that “really” not true of your “real world”?

    For example, if I’m making a fruit salad, both my apple and orange inventories are relevant. If I’m making an apple pie, my orange inventory isn’t as relevant (except for its effect on my storage and bank account).

    IOW, Apples and Oranges are subclasses of Fruit. Your fruitSalad.make() method accepts instances of Fruit, but your applePie.make() method only accepts instances of Apple.

  86. An example is that many languages don’t have a built-in type for the natural numbers, i.e. excluding 0, which is applicable for example to an auto-incremented record id in SQL. If a function that inputs a dynamically typed record id (say from a REST API) asserts it is not 0, it forces the caller to check for the assert-fail case. Thus dynamic typing forces checks both inside and outside the function, and those checks propagate hierarchically outwards like dominoes, analogous to the problem with ‘const’, meaning a potentially infinite proliferation of obfuscated regression (verification) branches:
    http://code.google.com/p/copute/issues/detail?id=8#c31

    Superior methodology is to create a natural number static type.
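
    A sketch of the enforce-on-instantiation idea in Python (invented names; in a statically typed language the compiler would additionally track the refined type for you):

    class RecordId(object):
        # A positive integer, validated once at the boundary.
        def __init__(self, value):
            if not isinstance(value, int) or value < 1:
                raise ValueError("record id must be a positive integer")
            self.value = value

    def fetch(table, record_id):
        # No re-check needed: holding a RecordId certifies the invariant.
        return table[record_id.value]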

  87. Superior methodology is to create a natural number static type.

    How is this “superior methodology”? The function which imports the record ID from REST will still have to typecast the ID to, say, PositiveInteger. Since this typecast can fail, it is logically equivalent to an assertion that the ID is greater than zero. It is true that assertion failures can propagate outwards, but this “propagation” can be formalized through exception handling and related constructs, such as the error monad.

    A more concrete problem is that all functions which return the more refined type (be it called PositiveInteger or “NaturalNumber plus the assertion that it is greater than 0”) have to maintain its invariant, which is harder than it sounds. Theorem provers and related tools can ease this burden; naïve type checkers cannot.

  88. # Andy Freeman Says:
    > If the type system contortions depend on such details
    > then my claim about brittleness is true because the
    > circumstances will change over time.

    Any system bends well in some directions and is brittle in others. Type systems, especially ones that correspond with object-oriented design, bend well when applied to real-world programming, and can be brittle when questioned in theoretical abstractions.

    So, no, I don’t accept your characterization at all. These systems work well with real programming problems. Apples, oranges, fruit salads? Not so much.

  89. esr Says:
    > Not all languages with the SPOT property are dynamically typed, but all production languages with that property are.

    What about C#?

    1. >What about C#?

      Er, is that an exception? I know very little about C#, it’s not interesting to me because it has no open-source implementation AFAIK. That basically removes it from the category of “production languages” for anything I or anyone I work with would do.

  90. guest> The function which imports the record ID from REST will still have to typecast

    Not if the web is eventually typed too. But even the way it is now, we only have to typecast once, in one re-usable class PositiveInteger for all cases, instead of an assert in every place we expect a non-zero positive integer.

    esr> cohesion between dynamic typing and having a rich type ontology, including variable-extent types and first-class containers like lists

    This mashing of several types into collection type buckets is solved in Copute with a “Union on steroids” named ‘enum’ which I think addresses your entry/exit to/from collection hassle:
    http://copute.com/dev/docs/Copute/ref/class.html#Enumerated_Inheritance
    And collections use parameterized types:
    http://copute.com/dev/docs/Copute/ref/class.html#Parameterized_Types

    esr> interfaces falling out of sync with implementations that are just pointless struggles with the language itself

    I assert this is poor language design, can you give me an example that I can’t fix with language design?

    esr> dynamic- vs. static-typing holy war is that what disputants are often most attached to is not actually a
    esr> property of the type system at all but rather of a feature that fellow-travels

    I assert this is conflation in the mind of the programmer, because they have not yet seen the correctly designed strongly-typed language of the future.

    guest> failures … “propagation” can be formalized through exception handling and related constructs, such as the error monad

    Exception handling propagates too, unless it just exits the program, because it is unsafe (unprovable state) to catch the exception higher up unless you’ve got referential transparency in the function hierarchy between the catch and the throw. The error monad is okay because it propagates back up through a referentially transparent function hierarchy, but in that case you’ve got static typing available, so use it instead (unless there is an overriding reason not to).

    guest> functions which return the more refined type … have to maintain its invariant

    The class methods (methods can always be used as operators in Haskell and Copute) of the refined type maintain its invariant.

  91. jb> These systems work well with real programming problems. Apples, oranges, fruit salads? Not so much.

    Morgan and I addressed all his points; the static typing worked with apples, oranges, and fruit salads too.

  92. @esr: There’s also a lot to be said for languages like Python in which there’s no such thing as an interface declaration. This means there’s a single point of truth (SPOT) about what the class does, which is the class body.

    Yeah, but don’t forget what I call the “extreme dynamicalness” of Python: classes can be modified or even totally redefined on the fly, at runtime. While this is a very powerful aspect of the language (and one that probably doesn’t get used enough), it means that the class body isn’t always the SPOT. Good thing Python has introspection. ;)

    I’d give an example of this, but WordPress breaks indents badly and non-indented Python isn’t readable.
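
    WordPress notwithstanding, the sort of thing meant here is roughly this (a minimal sketch):

    class Greeter(object):
        def greet(self):
            return "hello"

    # Rebind a method at runtime; the class body is no longer the
    # whole story about what the class does.
    Greeter.greet = lambda self: "goodbye"
    print(Greeter().greet())  # -> goodbye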

  93. Morgan and I addressed all his points; the static typing worked with apples, oranges, and fruit salads too.

    Or even dynamic typing. There’s nothing in my post to necessarily indicate language or typing discipline; they each work equally well in that instance.

    My attitude is this: each language has its own way(s) of doing things. Each has advantages and disadvantages. I suppose holy wars over typing discipline will be waged as much as holy wars over endianness, paradigms, or anything else up for debate. The reality is that each language the programmer can learn is simply another tool in his toolbox. So why not learn as many different ones as you can, rather than being a True Believer in any one of them?

  94. ESR, you’re out of date: C# has had the GPLed Mono implementation for six years now, and it is essentially at parity with Microsoft’s, except for a few components like the Windows Presentation Foundation (they have GTK+ support instead). Mono also contains a Visual Basic .NET compiler, a Java compiler, and a workalike of Silverlight called Moonlight.

    Of course, it’s possible that one or more MS patents (or anyone else’s patents) covers Mono, but that can be said of any open-source project. The FSF is rumbling, but Ubuntu has said they don’t think there’s anything to worry about, and Novell has bought indemnity for SuSE users. Red Hat desupported Mono after Novell’s announcement, though I don’t know what they think now.

    A particularly interesting feature of C# 3.0 and later is LINQ, which is what happened after Erik Meijer “sold himself to Microsoft” (as he said in one talk) in order to get functional programming features into the hands of vanilla programmers.

  95. esr> single point of truth (SPOT) about what the class does, which is the class body

    It seems you are concerned about a plurality of implementations of an interface implementing inconsistent semantics? My answer would be that the semantics are dictated by the interface’s compiler errors (not the documentation); thus any compilable semantics are allowed. To the extent that is a problem, the cause is defining interfaces and types that do not constrain the invariants of your desired semantics.

    If you mean that you can’t modify an interface in one place after it has been implemented elsewhere, my answer is that it is a desired feature. You can inherit that interface and modify it.

    Is the problem virtual inheritance and thus the obfuscation of re-implementation at run-time? I have already explained that virtual inheritance breaks static typing.

    One example of the problem you experienced or envision, would help me understand.

    1. >It seems you are concerned about a plurality of implementations of an interface, implementing inconsistent semantics?

      No, I’m concerned about all the dumb friction errors we make keeping interface declarations and the implementations they point to in sync. C++ class interface declarations are probably the most notorious examples of brittleness in this regard, but Java has the same general issue. So does any other language (hello, entire Pascal/Modula family) in which classes and modules have to be imported by including a declaration of the interface that has to be maintained separately from the implementation.

  96. Imnsho, C# is heavyweight, burdened by being too conflated with the CLR, and it is trending toward feature-bloat complexity, cobbled together as afterthoughts.

  97. Eric, I understand and I agree. That is solved in Copute. You will never have that friction.

    In Copute the implementation can never be separate from the interface declaration:
    http://copute.com/dev/docs/Copute/ref/class.html#Inheritance

    There is no separate ‘interface’ and ‘class’, it is always ‘class’. An incomplete implementation of ‘class’ can not be instantiated, and is an inheritable partial implementation and interface declaration. In short, every class is closed.

    I agree that the concept of interfaces which can have multiple implementations is a fundamentally flawed paradigm. I should have recognized that in my prior comment, because obviously I made a conscious effort to remove it from HaXe (from which Copute is derived). I have not had my mind in Copute for several months.

    Tangentially, there is a way to achieve virtual inheritance in Copute, when static typing isn’t desired:
    http://copute.com/dev/docs/Copute/ref/class.html#Virtual_Method

  98. > So does any other language (hello, entire Pascal/Modula family) in which classes and modules have to be imported by including a declaration of the interface that has to be maintained separately from the implementation.

    Like C/C++ headers or Pascal’s interface section. Now that I understand what you’re driving at, I totally agree. That’s always been painful. OTOH, the advantage is that you don’t need the entire source to compile code against a library. Why this matters in open source isn’t immediately obvious until you consider compiling, say, a Linux kernel driver. The kernel source is huge these days, and bandwidth and storage aren’t always available in abundance.

  99. > OTOH, the advantage is that you don’t need the entire source to compile code against a library

    With Copute, the plan is that you may distribute interfaces (i.e. ‘class’ without implementation), and upon linking (either at compile time or run time) with the SPOT implementation, errors will be generated if these are out of sync. Thus afaics it is impossible to have divergence tsuris, because the consumers of the interface have to link against the SPOT implementation to run their code. A minor caveat could be if we enable JIT function-granularity linking, because divergence could go unnoticed for a while.

    In case my prior main point was obfuscated, I reiterate that without static typing, it is impossible to mix referentially transparent (stateless, provable, automatically verified, concurrent, infinitely composable+re-usable w/o spaghetti traps+obfuscation) code with stateful (all the opposite bad properties, but necessary to make useful programs) code in the same language. And the goal of Copute is to optimize the deployable granularity of that mix, which seems in theory (and some limited practice thus far) to enable+illuminate more cases where referential transparency can be achieved.

    Tangentially, Eric, if you have already created (or will create) a blog entry on security and/or the end-to-end principle, I am very interested in correlating the mathematical lesson of your Cathedral and Bazaar model – which is of Herculean importance – with some dangerous problems I see gaining momentum, problems with interminably infectious negative ramifications for software freedom.

    Hope the links appear correctly in this post; I am just now learning that this blog allows some HTML?

  100. > I agree that the concept of interfaces which can have multiple implementations is a fundamentally flawed paradigm.

    Wha? What is an abstract data type (ADT) if not an “interface which can have multiple implementations” ?

    Also, what about type classes in languages such as Haskell and Coq? Aren’t Monoid, Order, EquivalenceRelation etc. “interfaces which can have multiple implementations”?
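
    To make that concrete, a toy Haskell sketch (the class is renamed so it doesn’t clash with the library Monoid): one interface declaration, several implementations, and generic code written against the interface alone:

    class Monoid' a where
      mempty'  :: a
      mappend' :: a -> a -> a

    instance Monoid' [b] where     -- lists under concatenation
      mempty'  = []
      mappend' = (++)

    instance Monoid' Int where     -- Ints under addition
      mempty'  = 0
      mappend' = (+)

    -- Generic code that knows only the interface.
    mconcat' :: Monoid' a => [a] -> a
    mconcat' = foldr mappend' mempty'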

  101. # Morgan Greywolf Says:
    > Like C/C++ headers or Pascal’s interface section. Now
    > that I understand what you’re driving at, I totally agree.

    Let me speak up for the separation of interface and implementation. In truth I think on balance we are better off not separating, but it is worth considering the other side.

    There are two reasons I can think of to split the declaration and the definition, to use C++ terminology. The first reason is pretty simple. An interface specification provides documentation on the class’s functionality. It does this without all the clutter of implementation bodies. This is very useful. Of course, better still is to have the coder write documentation inline (like javadoc or the equivalent) and have it extracted out, and of course tools can solve this problem — more on that in a moment.

    The second reason is abstraction and data hiding. By hiding the implementation from the user you reduce their capability to code against particular details of the implementation that are not part of the contract defined in the class definition, or even contrary to the class definition. This is significant if I want to change the details of the implementation later, as is pretty common.

    As I say, on balance I think that with the present model the cons outweigh the pros. However, let me just point out that the sync issues Eric is talking about, which I agree are common, are really a tools issue rather than a programmer issue. In fact this whole discussion is based on a historical model of program text where we save program text in a collection of text files and then define in other text files how they should fit together.

    I tend to see a computer program more as a database of components, entities, functions, libraries, connection strings and so forth, with the ability to compile them all together into a program that is executable. The fact that .h and .c files are separate is a historical artifact more than anything else. If a class is an entity in a database, all we are talking about is two slightly different views of the same data.

    So, FWIW, I think this is really more a tools issue than a fundamental one. And, FWIW, Microsoft tools have a lot of really cool ways to treat your program as a database rather than a collection of text files, especially so with highly reflective languages like C#.

  102. In Ada, interfaces and implementations are maintained separately, but the compiler insists that they are consistent when it compiles the implementation. In the GNAT implementation, if module foo depends on bar and baz, then compiling foo’s implementation generates not only foo.o but foo.ali, which is a text file containing the last-modified timestamp of the sources of bar and baz. (You can compile an interface, but it’s just a syntax check.) The GNAT pre-linker uses the .ali files to verify that you are not trying to link foo with out-of-date bar and baz sources, thus preventing you from screwing yourself. Make and makedepend do this too, but incompletely and kludgily.

    In any case, C#/Java interfaces have nothing to do with this issue: they are all about separation of concerns, since it is routine to have an interface with many implementations simultaneously, and an implementation that supports many interfaces simultaneously. Interfaces allow you to view an object through a stopped-down lens that hides exactly what its class is, and are closely related to duck typing. Unfortunately, it is still necessary to construct an object through its implementation — which is why injection frameworks are so popular in these languages.

    1. >Interfaces allow you to view an object through a stopped-down lens that hides exactly what its class is, and are closely related to duck typing.

      Sure, that’s the useful part of C#/Java interfaces. But you still have the separate declarations and the friction that implies. Having used Python, I say to hell with it; the pain is not worth the gain.

  103. I read it!

    “development strategies that lean more on automated verification”
    — 1) compiler is faster than you.
    (Genesis, part I)

    Why trust when you can verify? :-)
    Why distrust when you can verify?
    — The good points

    But what we could be doing is figuring out how to design for testability…
    — 80%:20% (good enough) is a good practical choice

    …and descendants of these with capabilities we can barely imagine today.
    — “codered” is on my tongue for 3 months or so.

    “mental habits are more important than tools”
    — :) had That experience This year

    …habit of asking questions like “What’s the coverage percentage of your test suite?”…
    — ;) is 99 better than 98?

    PS. The INTERCAL Reconstruction Anti-Massacree movement. No, wait…

    http://www-cs-faculty.stanford.edu/~knuth/fg.html
    Fix it or not, you’ll be in… (because Oct 15 is release date, enter title into amazon)
    Chapter 7, in INTERCAL

    TPK = Total Party Kill (IMHO)
    http://en.wikipedia.org/wiki/Total_Party_Kill
    (http://acronyms.thefreedictionary.com/TPK)

    & after all… did DK write TPK games in INTERCAL?

  104. >>> addFruits( new AppleCanAddFruit( Apple ), new OrangeCanAddFruit( Orange ) )

    > af> You were complaining about the overhead of dynamic checks but think nothing of creating instances that serve no purpose other than to keep the type system happy. Note that said instance creation may be very expensive.

    > That is not conceptually more hassle than addFruits() inputting dynamic types and necessarily asserting they are Apple and Orange, except the benefit goes to the static typing done at _compile time_.

    There’s a benefit only if I go to significant trouble at programming time. Typing the dynamic checks is no more work than declaring the types.

    And you still haven’t dealt with the fact that statically-typed languages won’t let me run my program until the whole thing passes the static checks, even parts that aren’t relevant to what I’m working on.

    > And if there will be many instances of re-use of new AppleCanAddFruit( Apple ), then we can do it once, versus in every function.

    You don’t get to ignore a cost that might be amortized, especially since you’re not admitting that a similar amortization can be used by implementations of many dynamically-typed languages.

  105. > > Any system bends well in some directions and is brittle in other directions. Type systems, especially ones that correspond with object oriented design, bend well when applied to real world programming, and can be brittle when questioned in theoretical abstractions.

    > So, no, I don’t accept your characterization at all. These systems work well with real programming problems. Apples, oranges, fruit salads? Not so much.

    You justified your characterization of OOD as being good for real world problems. Now you’re claiming that it’s good for “real programming problems” which isn’t defined except that it excludes my real world problem….

    One problem is that inheritance is a very leaky abstraction. In the real world, a wrench can be used to hit things and a hammer can be used to turn things.

  106. jb> a tools issue than a fundamental one

    That is a slam dunk, thanks! The reference to the SPOT should be a URL (possibly REST), end of story. And it almost is already in HaXe:
    http://haxe.org/ref/packages
    And thus there is no need for the programmer to split the implementation and the interface, because the reference will be an ‘import’ URL and the tools should extract the interface and present it in code completion.

    guest> What is an abstract data type (ADT) if not an “interface which can have multiple implementations” ?

    Let’s not conflate interface with instances. The SPOT can certainly provide callback hooks so that instances may override implementation of the SPOT interface, without burdening the SPOT implementation to sync with distributed programming. Tangentially, that would not violate Liskov Substitution Principle because it is not run-time (virtual) inheritance.

    If I may draw a contrast, hoping to incite some future interest among readers here: “more eyeballs = shallower bugs” also cuts against security models that presume a non-evolving network. I will add that SPOT is the antithesis of a centralized, non-evolutionary model. SPOT means each entity has atomic control over its interaction with the (programming) network, so the overall network is more scalable and evolvable (especially to dynamic change).

    We can see that Eric’s Cathedral and Bazaar is about the way evolution works, and there are more places where it needs to be adopted, if our internet is to continue to evolve and scale rapidly (the antithesis of which is degeneration because nature is always changing exponentially in some direction…that entropy thang…).

    esr> I say to hell with it; the pain is not worth the gain

    Evolution agrees.

  107. Andy Freeman: With a “type safe” language, I have to put enough fluff into that other half to ensure that the program as a whole is type safe and type inference doesn’t help at all here.

    Andy Freeman: And you still haven’t dealt with the fact that statically-typed languages won’t let me run my program until the whole thing passes the static checks, even parts that aren’t relevant to what I’m working on.

    This is incorrect; Haskell has the trap-door “error” for this purpose, the result of which inhabits every type. Type inference does help here, since it means less typing in this instance.
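
    Concretely (the function here is hypothetical): because error has type String -> a, polymorphic in its result, an unwritten case typechecks in whatever context it appears:

    -- error :: String -> a inhabits every type, so the unfinished
    -- branch typechecks no matter what the caller expects.
    priceOf :: String -> Double
    priceOf "apple" = 0.50
    priceOf item    = error ("priceOf: not yet implemented for " ++ item)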

  108. # Andy Freeman Says:
    > You justified your characterization of OOD as being
    > good for real world problems. Now you’re claiming
    > that it’s good for “real programming problems”

    You’ll have to excuse my sloppy typing. I assumed you understood that programming languages are best applied to programming problems.

    > which isn’t defined except that it excludes my real world problem….

    Not at all. Computers don’t make fruit salad. In the rare cases they do (in a fruit salad factory, for example) they don’t exhibit the problems you outlined. So your problem is just theoretical fluff, without any correspondence with the real problems that programmers deal with day to day. I’d be happy to address those sorts of problems, but I’m going to take a pass on fruit salad.

    > One problem is that inheritance is a very leaky abstraction.

    All abstractions leak; it is an intrinsic property of abstraction. I agree that single inheritance is one of the more common leaky abstractions in computer programming, but I don’t agree it happens all that much in real world practice, and as I am sure you are aware, there are many powerful tools to deal with this particular situation within the static typing domain.

  109. @Andy Freeman
    > Typing the dynamic checks is no more work than declaring the types

    I already explained it is N times more work.

    > statically-typed languages won’t let me run my program until the whole thing passes the static checks

    Copute allows you to mix dynamic (run-time) and static (compile-time) typing. As Roger Phillips says, type inference helps a lot, because later you can change a dynamic type to a static one when you decide it is worth it for your case. Andy, I am not against dynamic typing; we need it to maximize our productivity. I am only arguing that we need static typing too: use the best paradigm for the case at hand. And we need these features in the same language with fine-grained capability to interleave them. Ditto referential transparency and statefulness. Programmers benefit from having this control in their hands, rather than an “all or nothing” choice.

    > You don’t get to ignore a cost that might be amortized, especially since you’re not admitting that
    > a similar amortization can be used by implementations of many dynamically-typed languages

    I do because the dynamic case necessarily must be checked at every function input.

    > One problem is that inheritance is a very leaky abstraction. In the real world,
    > a wrench can be used to hit things and a hammer can be used to turn things

    Problem solved: aggregation is not leaky, which is what Copute’s super enum does:

    class Hammer( Tool ) {}
    class Wrench( Tool ) {}
    enum Tools
    {
        Hammer
        Wrench
    }

    It is borrowed from HaXe, but adds some capabilities.
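
    I can’t show runnable Copute here, but assuming the enum behaves like a sum type, a rough Haskell analogue of the aggregation idea: Tools groups Hammer and Wrench without asserting that either one inherits the other’s behavior, so nothing leaks:

    data Hammer = Hammer
    data Wrench = Wrench

    -- Aggregation, not inheritance: a Tools value contains a Hammer
    -- or a Wrench; neither “is-a” the other.
    data Tools = THammer Hammer | TWrench Wrench

    describe :: Tools -> String
    describe (THammer _) = "hits things"
    describe (TWrench _) = "turns things"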

  110. guest> What is an abstract data type (ADT) if not an “interface which can have multiple implementations” ?

    me> Let’s not conflate interface with instances. The SPOT can certainly provide callback hooks so that instances may override
    me> implementation of the SPOT interface, without burdening the SPOT implementation to sync with distributed programming.
    me> that would not violate Liskov Substitution Principle because it is not run-time (virtual) inheritance

    Alternatively re-implement via inheritance:

    class Sub( Super ) {}

    But in Copute, that won’t cause Sub’s re-implementation to be called when Super is the data type (e.g. input to a function), unless virtual inheritance is employed, which then breaks the Liskov Substitution Principle.

    Thus I conclude the fundamental law applicable to all languages: if we allow implementation to be separate from interface declaration, then we are faced with only two options:

    1) Loss-of-SPOT tsuris, or
    2) Virtual inheritance.

    Thus to summarize, Copute allows two options. Implementation is never separate from interface, thus you can choose either no re-implementation or re-implementation via virtual inheritance. Both of these options are SPOT with no tsuris. Can’t do better than that, because it is a fundamental law.

  111. > > Andy Freeman: And you still haven’t dealt with the fact that statically-typed languages won’t let me run my program until the whole thing passes the static checks, even parts that aren’t relevant to what I’m working on.

    > This is incorrect; Haskell has the trap-door “error” for this purpose, the result of which inhabits every type. Type inference does help here, since it means less typing in this instance.

    Let’s see if I understand. Can I write
    if (condition)
        stuff I’m working on
    else
        error

    and I just have to keep the Haskell happy with “condition” and “stuff I’m working on”?

    Hint: if the answer isn’t “basically”, where the stuff that I missed consists of less than 10 characters that are the same every time, it’s not good enough.

    Note that that’s just one piece. The other piece is that “stuff I’m working on” is often incomplete but there are no conditionals. In other words, I want to end it with “error”.

    If Haskell has this, great.

  112. # Andy Freeman Says:
    > You justified your characterization of OOD as being
    > good for real world problems. Now you’re claiming
    > that it’s good for “real programming problems”

    > You’ll have to excuse my sloppy typing. I assumed you understood that programming languages are best applied to programming problems.

    I was unaware that there were large classes of manipulating real world objects for which programming was inapplicable.

    >> which isn’t defined except that it excludes my real world problem….

    > Not at all. Computers don’t make fruit salad. In the rare cases they do (in a fruit salad factory for example) they don’t exhibit the problems you outlined.

    Oh really? I’m describing a system that helps manage a kitchen inventory. However, the same issues come up at any factory that has multiple outputs where some of the inputs are used for some outputs and not others.

    > > One problem is that inheritance is a very leaky abstraction.

    > All abstractions leak, it is an intrinsic property of abstraction. I agree that single inheritance is one of the more common leaky abstractions in computer programming,

    Who said anything about single inheritance?

  113. # Andy Freeman Says:
    > I was unaware that there were large classes of manipulating real world objects for which programming was inapplicable.

    You can’t be serious.

    > Oh really? I’m describing a system that helps manage a kitchen inventory.

    Oh super! So what specifically is the problem with your kitchen inventory software design? In all the hand wavy theoretic fluff, I forgot what your actual point was.

    > Who said anything about single inheritance?

    I did. My experience is that most of the leaks you refer to occur in SI; MI is much less prone to it. So what exactly is your point again?

  114. >> You don’t get to ignore a cost that might be amortized, especially since you’re not admitting that
    >> a similar amortization can be used by implementations of many dynamically-typed languages

    >I do because the dynamic case necessarily must be checked at every function input.

    That’s simply false.

    Once a type is known, and there are lots of ways to know, that knowledge can be used to avoid subsequent checks.

    In the case of passing a checked value to another function, one way to avoid subsequent checks is to call a version of said function that assumes the relevant type for the appropriate input. One can do similar things with return values. (After all, return is actually a procedure call because calls are really passing closures as the return context – those closures are just functions with context and we’ve already dealt with functions.)

  115. > one way to avoid subsequent checks is to call a version of said function that assumes the relevant type for the appropriate input

    You essentially described type inference. Read below…

    > If Haskell has this, great.

    Thanks. Languages that allow both dynamic and static typing typically assume dynamic except where a static type propagates via type inference; you can always declare a type dynamic explicitly to escape any undesired propagation of a static type. So most of your program is type-inferred, with only a few key places having an explicitly declared static or dynamic type. It is then very easy to start without the static-typing tsuris and gradually insert static types where it makes sense for your workflow, design complexity, and desired re-usability (composability).

  116. > Let’s see if I understand. Can I write
    >
    >     if (condition)
    >         stuff I’m working on
    >     else
    >         error
    >
    > and I just have to keep the Haskell happy with “condition” and “stuff I’m working on”?
    >
    > Hint: if the answer isn’t “basically”, where the stuff that I missed consists of less than 10 characters that are the same every time, it’s not good enough.

    Yes, you can do precisely this in Haskell using “error”.

    > Note that that’s just one piece. The other piece is that “stuff I’m working on” is often incomplete but there are no conditionals. In other words, I want to end it with “error”.

    Provided what you enter into that part of the program is sound, the compiler will simply infer its type. You can create any stubs you need using error.
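
    Concretely, a minimal sketch of exactly the shape you asked for (the condition and the live branch are placeholders):

    -- Only the condition and the branch under construction need to
    -- typecheck; the unfinished branch is stubbed out with error.
    process :: Int -> Int
    process x =
      if x > 0                -- “condition”
        then x * 2            -- stuff I’m working on
        else error "TODO: unwritten case"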

    > If Haskell has this, great.

    I have never encountered problems producing stubs in statically typed languages.

  117. > Oh really? I’m describing a system that helps manage a kitchen inventory. However, the same issues come up at any factory that has multiple outputs where some of the inputs are used for some outputs and not others.

    If I were writing an inventory system I would not model each kind of item as a class, nor would I encode particular business rules relating to kinds of items into the type system. What is there to be gained by such a design? It seems more sensible to define some generic classes to represent metadata (e.g. business rules) regarding particular item kinds and simply compute over them, using the type system to help make sure this generic scheme works as intended. The potential for error relating to your business rules is then isolated to your metadata, which is going to be the easiest thing to debug.
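
    A rough Haskell sketch of the sort of design I mean (all the names here are hypothetical): item kinds are ordinary data, and the business rule about what may be combined lives in that data rather than in a class hierarchy:

    -- One generic record describes any kind of item; the combining
    -- rule is metadata, easy to audit and debug.
    data ItemKind = ItemKind
      { kindName     :: String
      , combinesWith :: [String]
      }

    canCombine :: ItemKind -> ItemKind -> Bool
    canCombine a b = kindName b `elem` combinesWith a

    apple, orange :: ItemKind
    apple  = ItemKind "apple"  ["orange"]   -- fine in a fruit salad
    orange = ItemKind "orange" ["apple"]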

  118. > Hint: if the answer isn’t “basically”, where the stuff that I missed consists of less than 10 characters that are the same every time, it’s not good enough.

    In addition to ‘error’ there are other possibilities: unsafeCoerce (from Unsafe.Coerce) and Data.Dynamic. These are more than 10 characters, though :(
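
    A small sketch of the Data.Dynamic escape hatch (unsafeCoerce, by contrast, skips the check entirely): a boxed value carries its type at run time, and extraction is a checked, Maybe-returning operation:

    import Data.Dynamic (Dynamic, toDyn, fromDynamic)

    -- toDyn boxes a value together with its type; fromDynamic
    -- re-extracts it only at the matching type.
    box :: Dynamic
    box = toDyn (42 :: Int)

    asInt :: Maybe Int
    asInt = fromDynamic box      -- Just 42

    asString :: Maybe String
    asString = fromDynamic box   -- Nothing: wrong type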

  119. @JessicaBoxer:

    > > I was unaware that there were large classes of manipulating real world objects for which programming was inapplicable.

    > You can’t be serious.

    Sure he can. Ever work with a product data management (PDM/PLM) system? The database objects represent real-world objects — product components and assemblies. Most of the attributes contain real-world data about the part: part numbers, details about weight, size, density, materials and so forth. Often a detailed bill of materials (BOM) is generated from such data.

    You should try not to live your life looking at the world through only your own small peephole of awareness.

  120. > Sure he can. Ever work with a product data management (PDM/PLM) system? The database objects represent real-world objects — product components and assemblies. Most of the attributes contain real-world data about the part: part numbers, details about weight, size, density, materials and so forth. Often a detailed bill of materials (BOM) is generated from such data.
    >
    > You should try not to live your life looking at the world through only your own small peephole of awareness.

    In this case, as in the fruit salad example, there doesn’t seem to be any compelling reason to have a 1:1 mapping between language classes and real-world classes.

  121. > In this case, as in the fruit salad example, there doesn’t seem to be any compelling reason to have a 1:1 mapping between language classes and real-world classes.

    That depends on what you mean by a 1:1 mapping. In the case of some kinds of business logic, there very well could be an object representing a part or a component or an assembly.

  122. “You should try not to live your life looking at the world through only your own small peephole of awareness.”

    That’s actually the No. 1 problem with blogs about programming in the non-technical sense – methodological, project management, etc. – from Joel on Software to anything about TDD. The authors are convinced that there is such a thing as “programming” and that this thing is the thing they themselves do – they assume that everybody has the same circumstances, challenges and goals, and that programming is ONE profession. This is the same fallacy as assuming that one can advise about such a general thing as “driving” – as if driving a taxi in the rush hour in NY were the same thing as driving a tank under enemy fire in a swamp, or driving a 16-wheeler through half of a continent.

    At the very least the industry should be separated conceptually into three distinct categories: technical programming, mathematical programming, and business logic programming. This is the bare minimum, corresponding to three basic realms of tasks and challenges and three basic personality archetypes (the technology-loving geek, the Dijkstra-type algorithmician, the guy with the business administration degree expressing business rules in code), and an ever finer and more detailed categorization would be in order. The last time programming as such was one discipline, one profession and one mentality was around 1980, or 1990 at most.

  123. > > > I do because the dynamic case necessarily must be checked at every function input.

    > > That’s simply false.

    > > one way to avoid subsequent checks is to call a version of said function that assumes the relevant type for the appropriate input

    > You essentially described type inference. Read below…

    You missed that we’re talking about run-time checks in dynamic languages. You claimed, incorrectly, that they were necessarily expensive and pervasive.

    It’s curious that you didn’t realize that techniques for dealing with dynamic types can be used by implementations of dynamically typed languages….

  124. > # Andy Freeman Says:
    >> I was unaware that there were large classes of manipulating real world objects for which programming was inapplicable.

    > You can’t be serious.

    I’m quite serious. Let’s review.

    # Andy Freeman Says:
    > You justified your characterization of OOD as being
    > good for real world problems. Now you’re claiming
    > that it’s good for “real programming problems”

    > You’ll have to excuse my sloppy typing. I assumed you understood that programming languages are best applied to programming problems.

    >> which isn’t defined except that it excludes my real world problem….

    > Not at all. Computers don’t make fruit salad. In the rare cases they do (in a fruit salad factory for example) they don’t exhibit the problems you outlined.

    Wouldn’t it have been easier to simply admit that that was a dumb thing to say instead of trying to defend it?

    > Oh super! So what specifically is the problem with your kitchen inventory software design? In all the hand wavy theoretic fluff, I forgot what your actual point was.

    As I wrote at the beginning, the issue is that the numbers of apples and oranges can be added in some circumstances but not in others. Getting that wrong is the kind of programming error that I make. (I agree that static typing is good at telling me when I’m trying to add a string to an int, but since that’s not a big problem for me, I’m not willing to pay much to solve it.)

    Static typing advocates were claiming in this thread that static type systems could effectively deal with “real” programming errors, so I asked about how wrt that issue. Note that I didn’t place any limits on the types.

    The solution provided required type proliferation and significant run-time overhead. Maybe there are better solutions, but if someone is claiming that a given tool can solve a given problem, it seems reasonable to assume that said someone can show how to use that tool effectively for that problem.

  125. > In this case, as in the fruit salad example, there doesn’t seem to be any compelling reason to have a 1:1 mapping between language classes and real-world classes.

    I agree, but our resident OOD advocate comes close to disagreeing until we get into details, at which point she’ll say that a given problem isn’t real world, so the issue is moot.

    In any event, I didn’t propose how one would define types to handle my problem; I asked how to use static typing to solve it, and 1:1 was what the static typing advocates came up with.

  126. >>>> I do because the dynamic case necessarily must be checked at every function input.
    >>> one way to avoid subsequent checks is to call a version of said function that assumes the relevant type for the appropriate input
    >> You essentially described type inference. Read below…
    > we’re talking about run-time checks in dynamic languages. You claimed, incorrectly, that they were necessarily expensive and pervasive.
    > you didn’t realize that techniques for dealing with dynamic types can be used by implementations of dynamically typed languages…

    Er, “assumes the relevant type” is type inference in your mind. Saving costs by moving them into your mind is hardly verification – the topic of this blog thread. Type inference is almost free verification, as I explained. Sorry you forced me to be more blunt.

    > The [static type] solution provided required type proliferation and significant run-time overhead

    Incorrect. There was no proliferation of types that wouldn’t also need to be verified for the dynamic case (if you want the same degree of verification), and with less manual coding of checks, as explained above; and you were wrong to assume that new Sub( Super ) necessarily incurs any run-time overhead (if the constructor does nothing; and if the constructor does something, you would have needed the same in your dynamic type conversion).

    > Static typing advocates were claiming in this thread that static type systems could effectively deal with “real” programming errors

    And we showed you it can, with optimum efficiency, except where you don’t really want maximum verification, e.g. you are in a hurry, you are prototyping an idea, or it is a low economy-of-scale project. Btw, I agree that your apples and oranges challenge was sufficiently relevant. Thank you for contributing it.

  127. > In any event, I didn’t propose how one would define types to handle my problem; I asked how to use static typing to solve it, and 1:1 was what the static typing advocates came up with.

    I’m merely pointing out that these scenarios are not counter-examples to static typing; there is nothing stopping you from implementing your items/ingredients as data classes that are as strictly or loosely typed as you please. The more loosely typed, the more run-time assertions will be required. Unless you’re happy to spend hours inside a theorem prover there are always going to be run-time assertions.

  128. > That depends on what you mean by a 1:1 mapping. In the case of some kinds of business logic, there very well could be an object representing a part or a component or an assembly.

    The presence of values that are analogues of real world objects is not particular to object oriented programming. Furthermore, when such values occur it is a mere coincidence of expression. There is no reason to deliberately try to reflect the structure of the real world into a program. The function of a program is to perform a task that is useful, not obtain a pleasant correspondence with reality in its syntax.

  129. # Morgan Greywolf Says:
    > Sure he can. Ever work with a product data management (PDM/PLM) system?

    And now it is you who isn’t being serious. You really don’t think there is a difference between manipulating database representations of objects and manipulating actual objects?

    >You should try not to live your life looking at the world through only your own small peephole of awareness.

    Afraid it is you who is looking through a peephole. Not everything is about computers.

  130. # Andy Freeman Says:
    >Wouldn’t it have been easier to simply admit that that was a dumb thing to say instead of trying to defend it?

    I assure you I say plenty of dumb things; however, I haven’t seen any evidence that this is one of them. It is you who is rambling on with pointless theoretical things and avoiding anything real. Of course that is always a good strategy. The thing about the unfalsifiable is that you can’t falsify it.

    > As I wrote at the beginning, the issue is that the number of apples and oranges can be added in some circumstances but not in others.

    Handwavy. Specifics please.

    > Getting that wrong is the kind of programming error that I make.

    Yet you can’t concretize your claims by providing, say, three specific examples?

    > (I agree that static typing is good at telling me when I’m trying to add a string to an int,

    I refer you back to my comment in response to Eric on this claim. Type systems aren’t much about that at all.

    > Static typing advocates were claiming in this thread
    > that static type systems could effectively deal with
    > “real” programming errors, so I asked about how
    > wrt that issue. Note that I didn’t place any limits on the types.

    I’ll grant you that some such advocates got sucked into your vortex of handwavy theoretical fluff. Frankly I got a headache reading some of that stuff. However, foolish advocacy does not indicate foolishness in that which is advocated.

  131. # Roger Phillips Says:
    > Unless you’re happy to spend hours inside a theorem prover there are always going to be run-time assertions.

    Roger is right. However, perhaps we can agree on this principle at least: it is better to check your assertions as early as possible, and certainly it is better to check your assertions at compile time than at run time. (That being a disadvantage of non-compiled languages.)

  132. > foolish advocacy does not indicate foolishness in that which is advocated

    Specifically which static typing advocacy on this page was foolish? Quote?

    > Frankly I got a headache reading some of that stuff

    What I wrote was factual and correct.

    > You really don’t think there is a difference between manipulating database representations of objects and manipulating actual objects

    Andy’s theoretical case was sufficient for analyzing some of the issues of static typing versus dynamic typing. Some here are apparently into the theoretical concepts. I thought you had a great point about the separation of interface and implementation being a tools issue only.

  133. Jocelyn Says:
    > Specifically which static typing advocacy on this page was foolish? Quote?

    Actually, and ironically, I was speaking theoretically too. To be honest, I didn’t follow your thread very carefully. My point was that, assuming Andy’s contention was correct (that your argument was bogus), that doesn’t disprove the original point advocated. I have no real opinion as to the quality of your argument, because, as I said, I didn’t follow all that closely. So I apologize for my rather careless words, and for my lack of time to follow your arguments that, from 50,000 feet, looked interesting.

  134. “Perhaps the signature tools of the next fifteen years will be test engines – coverage analyzers, scriptable emulation boxes, unit-test frameworks, code-auditing tools, and descendants of these with capabilities we can barely imagine today.”
    While re-reading my way through the archives I come across this post — and am thoroughly struck by its prescience. For I, a programming generalist by inclination and experience, currently work as a test automation engineer; and what have I been doing for the past fortnight? Writing Python scripts that tie an automated test system to two disjoint emulations of our forthcoming new hardware, in turn automatically kicked off by cron to produce nightly tests of a driver for a device that doesn’t even exist yet. I think back to even two years ago, and such a thing would have been uncommon despite the fact that all the tools needed for it were already in place. Developers just didn’t think in that mould.
    I wonder how many more years it will be before the Linux kernel has a thorough pre-checkin test suite; with the ever-busier -rcs, how long can Linus continue to rely on trust without verification? (Note that this isn’t an ‘Imminent Death of Linux Predicted!’; once the merge rate can’t accelerate any further, it will just continue at its limit. But maybe there are ways to raise that limit.)
