Automatons, judgment amplifiers, and DSLs

Do we make too many of our software tools automatons when they should be judgment amplifiers? And why don’t we write more DSLs?

Back in the Renaissance there was a literary tradition of explaining natural philosophy via conversations among imaginary characters. I’m going to revive that this evening because I had an IRC conversation this afternoon, about the design insights behind reposurgeon, that pretty much begs to be presented this way.

The person of “Simplicio” was Galileo’s invention in his Dialogue Concerning the Two Chief World Systems. Here he represents four different people, but almost everything he says is something one of them in fact said or very plausibly might have. I’ve cleaned it up, edited, and amplified only a little.

For those of you coming in late, reposurgeon is a tool I wrote for editing version-control histories. It has many applications, including highest-quality repository conversions. Simplicio needed to excise some security-sensitive credentials from a DHS code repository – not just from the tip version but from the entire history. Reposurgeon is pretty much the only practical way to do this.

So, without further ado…

SIMPLICIO pounce-hugs ESR.

SIMPLICIO: I have to run back to work, but I just wanted to say…reposurgeon is FREAKING AWESOME.

ESR: I take it you figured out how to do the necessary.

SIMPLICIO: Yep. How did you *imagine* that?

ESR: I love designing DSLs (domain-specific languages). What you are seeing as “awesome” is the result of proper attention to keeping the language primitives in the DSL mutually orthogonal, so they can have fruitful combinations I did not anticipate. This is a design style that is difficult to do well, but when you pull it off the payoff is huge.

SIMPLICIO nods.

ESR: You might be entertained to know what the model for reposurgeon’s DSL was. Brace for it: … ed(1).

SIMPLICIO: LOL

ESR: I’m not joking. Think about how reposurgeon selections interact with the command verbs. Pick a collection of records, do something to it – possibly with auxiliary following arguments.

SIMPLICIO: I know you’re not joking, it’s still amusing. Heh. The original Patriarchs of Unix were truly worthy of their mantles.

ESR: There were two big insights behind the design of reposurgeon:

(1) Attempts to fully automate repo-conversion tools are doomed by ontological mismatches between VCSes. Bridging those requires case-by-case human judgment; therefore, the best tool should seek to amplify human judgment rather than futilely attempting to remove it from the process.

(2) The structure implied by a deserialized git-fast-import stream resembles a line sequence in an editor just enough that the orthogonal ed model of “apply a command verb to a selection” is applicable.

SIMPLICIO: Everything old is new again.

ESR: Everything since those insights has been in some sense mere details. In particular, mating premise (2) to the properties of the Python cmd.Cmd library class implies quite a lot of the reposurgeon implementation.
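
To make that concrete, here’s a toy sketch of the ed-style “selection, then verb” pattern built on cmd.Cmd. This is not reposurgeon’s actual code; the SurgeonShell class, the “2..4” selection syntax, and the list/delete verbs are invented for illustration:

```python
import cmd
import re

class SurgeonShell(cmd.Cmd):
    """Toy ed-style shell: an optional leading selection, then a command verb."""
    prompt = "toy> "

    def __init__(self, events):
        super().__init__()
        self.events = events                     # the "history": just a list of records
        self.selection = range(len(events))      # default selection is everything

    def precmd(self, line):
        # Peel a leading selection like "2..4" off the line before dispatch,
        # so every do_* verb sees only its own arguments.
        m = re.match(r"\s*(\d+)\.\.(\d+)\s+(.*)", line)
        if m:
            lo, hi = int(m.group(1)), int(m.group(2))
            self.selection = range(lo, min(hi + 1, len(self.events)))
            return m.group(3)
        self.selection = range(len(self.events))
        return line

    def do_list(self, arg):
        """List the selected events."""
        for i in self.selection:
            print(i, self.events[i])

    def do_delete(self, arg):
        """Delete the selected events."""
        self.events[:] = [e for i, e in enumerate(self.events)
                          if i not in self.selection]

    def do_quit(self, arg):
        """Leave the shell."""
        return True

if __name__ == "__main__":
    SurgeonShell(["root commit", "fix typo", "add feature", "oops, secrets"]).cmdloop()
```

cmd.Cmd gives you the REPL and the verb dispatch for free; all the real design effort goes into the selection machinery and the verbs themselves.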

But premise (1) suggests a larger question: where else are we making the same mistake? Are there other domains where we should be trying to write judgment amplifiers rather than automatons?

If I ever again write a DSL as effective as reposurgeon it will be because I found a specific answer to that question. I would love to do this again and again and again.

SIMPLICIO: Blog post, or a new chapter for The Art of Unix Programming?

ESR: Hm. Blog post for sure. Not sure premise (2) is Unix-specific enough to deserve a chapter in TAoUP.

SIMPLICIO: I was thinking of “judgment amplifiers vs. automatons”. That demands to be a chapter title. :-)

ESR: It’s a good design question to notice. Whether it’s a Unix design question is another matter.

SIMPLICIO: Seriously, I don’t understand how to know when a DSL is necessary.

ESR: I’m pretty sure “when is it necessary” is the wrong way to frame the question. “When is it possible” would be a better one.

SIMPLICIO: That may be my problem then.

ESR: If you can figure out a proper set of orthogonal primitives to build it around, a DSL is always better than a more rigid design. At worst, it becomes one of the soft layers in an alternating hard and soft stack.

If I have a DSL, I can front-end it as a GUI or some other kind of more rigid interface. But the reverse is not true; if you don’t design in DSL-like flexibility to begin with, it’s almost impossible to retrofit.

SIMPLICIO: That does make sense. In the past I’ve compared DSLs to more general-purpose programming languages and mainly seen their limitations. Now…I’m intrigued.

ESR: A good example is basically any modern E-CAD package. Look past the GUI and you’re going to find some kind of DSL for hardware descriptions underneath. Going directly from the GUI’s data representation to silicon would be doomed, but the soft layer in the middle gives it a way to chunk the design process that captures domain primitives like logic gates, vias, or entire functional blocks.

SIMPLICIO: Oh. Oh! I bet you’re going to bring up SQL next.

ESR: I certainly could. Mathematica, there’s another one.

Yet another example is Emacs. You have to sort of forget that Lisp is theoretically general-purpose for a moment; if you do you’ll see the same hard-over-soft pattern, DSL underpinning something that doesn’t look like one.

This is an extremely powerful way to design. You’d see it more often, but there’s no tradition of teaching the practice. So programmers have to re-invent it nearly from scratch almost every time they do it.

SIMPLICIO: If you know of any good teaching materials, I’d be very grateful. If not, I’ll go googling at some point.

ESR: I wish there had been teaching materials when I was a noob – I had to spend a quarter-century learning how to do it right. Sadly, there still aren’t; all we have is a handful of people who autodidacticated themselves. RMS. Stephen Wolfram. Me. A few others.

SIMPLICIO: There are a bunch of things I wish there were teaching materials for. I’ve noticed that if they’re engineering-useful but not interesting to academics they tend not to get written.

ESR: The really hard part is carving the domain operations into orthogonal primitives. Then you sort of clothe them in semi-generic DSL machinery.

So, listen to this carefully: the reason git fast-import streams were essential to the design of reposurgeon is because they concretized the problem. They reduced the abstract question “what is an orthogonal set of primitive operations on repository histories” to a more concrete one: “What is an orthogonal set of primitives for editing the attributed graph implied by a fast-import stream?”

The first question, the abstract one, is fundamentally difficult because it’s ill-defined. The second question, the concrete one, has an answer that may be somewhat complex but is well-defined and not fundamentally difficult. You can get it by staring at diagrams of nodes and links, and thinking up every possible way to screw with them.

SIMPLICIO nods.

ESR: But for a great many interesting cases, any answer to the second question implies an answer to the first. You write your attributed-graph editor and you have a repository editor.
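
To give a flavor of what those primitives look like, here is an illustrative toy (not reposurgeon’s internals) with an attributed commit node and two of the obvious mutations:

```python
from dataclasses import dataclass, field

@dataclass
class Commit:
    """One node in the attributed graph implied by a fast-import stream."""
    mark: str                                     # e.g. ":2"
    comment: str
    parents: list = field(default_factory=list)   # marks of parent commits

def delete_commit(commits, mark):
    """Primitive: drop a commit, splicing its children onto its parents."""
    victim = commits[mark]
    for c in commits.values():
        if mark in c.parents:
            i = c.parents.index(mark)
            c.parents[i:i + 1] = victim.parents   # reparent past the victim
    del commits[mark]

def reparent(commits, mark, new_parents):
    """Primitive: rewrite a commit's parent list outright."""
    commits[mark].parents = list(new_parents)

# A tiny linear history :1 <- :2 <- :3; deleting :2 splices :3 onto :1.
history = {
    ":1": Commit(":1", "root"),
    ":2": Commit(":2", "oops, committed a password", [":1"]),
    ":3": Commit(":3", "more work", [":2"]),
}
delete_commit(history, ":2")
assert history[":3"].parents == [":1"]
```

Deletion, reparenting, splitting, squashing: enumerate every legal way to mutate nodes and edges and you have your candidate command verbs.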

SIMPLICIO: That makes sense :)

ESR: Export/import to actual repositories is still an issue of course but it’s one you can keep well isolated from the rest of the design.

SIMPLICIO: Is there some way to generalize reposurgeon’s design pattern? I think I get it now, but I don’t see how you map it to other application domains.

ESR: A first thing to notice is how the agenda of amplifying human judgment rather than fully automating fits with writing a DSL. You’re not really writing a judgment amplifier if the tool is incapable of doing things the designer didn’t anticipate. You need the flexibility, the ability to generate and collect options.

A second thing is that you can get a hell of a jump on grasping the problem domain well enough to write a DSL over it if there is some kind of declarative markup that captures all of its entities. Then there’s a mapping – mathematicians would call it a functor – between operations on the markup and operations on the problem domain.

So I’d say a good first question to ask is: is there a declarative markup – an analogue of git fast-import streams – that captures everything in the problem domain I want to hack? And if not, can I invent one?

The process of knowledge capture that needs to happen for such a markup to exist is exactly the one that will tell you, or at least imply, what the primitives for your DSL are.
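
For a concrete taste of what I mean by such a markup, here is a heavily simplified fast-import-style stream and a toy parse of it back into domain entities. The stream is cut down (the real format also requires committer stanzas and usually uses byte-counted data sections) and the parsing code is purely illustrative:

```python
# A simplified fast-import-style stream, embedded as a string. Even this
# cut-down version shows the point: every entity in the domain (blobs,
# commits, file contents) has a declarative, line-oriented representation
# you can parse, edit, and re-emit.
STREAM = """\
blob
mark :1
data <<EOF
hello, world
EOF
commit refs/heads/master
mark :2
data <<EOF
Initial revision
EOF
M 100644 :1 README
"""

def parse(stream):
    """Toy parse: turn the markup back into domain entities, keyed by mark."""
    entities, current = {}, None
    lines = iter(stream.splitlines())
    for line in lines:
        if line == "blob" or line.startswith("commit "):
            current = {"kind": line.split()[0], "files": []}
        elif line.startswith("mark "):
            entities[line.split()[1]] = current
        elif line.startswith("data <<"):
            body = []
            for body_line in lines:
                if body_line == "EOF":
                    break
                body.append(body_line)
            current["data"] = "\n".join(body)
        elif line.startswith("M "):
            _, mode, mark, path = line.split()
            current["files"].append((path, mark))
    return entities

repo = parse(STREAM)
# An edit to the markup (say, dropping the "M" line) is an edit to the domain
# (removing README from that commit); the mapping runs both ways.
assert repo[":2"]["files"] == [("README", ":1")]
```

Once the markup exists, an edit to the text and an edit to the repository are the same operation seen from two sides.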

134 comments

  1. The bit about declarative markup puts me in mind of this Bertrand Russell quote: “A good notation has a subtlety and suggestiveness which at times make it almost seem like a live teacher.”
    Or, if you prefer Rob Pike, “Data structures, not algorithms, are central to programming.”

  2. “Autodidacticated?”

    And I remember sitting and watching you as you took pencil to pad of paper and worked through some of the basic graph operations at a Barnes and Noble while we were waiting to see a preview of Tron: Legacy (that you ultimately wound up skipping because 3D). Do all DSLs of this nature start out as paper-and-pencil exercises? Is there a useful way to do that kind of work on a computer?

    1. >“Autodidacticated?”

      “Autodidact” is a silly word. I could not resist piling on as many syllables as possible.

      > Do all DSLs of this nature start out as paper-and-pencil exercises?

      Dunno. I don’t recall having drawn diagrams before.

      >Is there a useful way to do that kind of work on a computer?

      Doubtful, unless you were just emulating an analog sketchpad.

  3. “the best tool should seek to amplify human judgment rather than futilely attempting to remove it from the process”

    I want that on a fricking sampler. The number of times I see people design systems that intentionally fail at this and then wonder why the system is poorly received makes me sad.

  4. The person of “Simplicio” was Galileo’s invention in his Dialogue Concerning the Two Chief World Systems. Here he represents four different people, but almost everything he says is something one of them in fact said or very plausibly might have. I’ve cleaned it up, edited, and amplified only a little.

    As one of the persons who was live in that particular IRC channel at the time this went down, I’ll attest that it is a quite reasonable summary of what actually was said.

    My only quibble would be that Simplicio originally represented a simpleton, naively opposed to Galileo’s theory. None of the persons who this composite represents would fall in that category.

    ;-}

    1. >None of the persons who this composite represents would fall in that [simpleton] category.

      Most definitely not.

      You do Simplicio a bit of an injustice, however. At that time and place, the fool was also the one who was permitted to see – and say – that the Emperor had no clothes.

  5. I recently used git filter-branch to remove object files that I had accidentally included in the repository from all versions, and it seemed to work well enough. This sounds like a similar type of problem to the “remove credentials” case. Are there any particular advantages of reposurgeon over git-filter-branch for this use case?

    1. >Are there any particular advantages of reposurgeon over git-filter-branch for this use case?

      I didn’t know filter-branch could be used to hack blobs. That’s interesting.

      My one brush with it was building a script to edit comments in committed but un-pushed commits. I still use it to fix typos fairly often.

      Based on that experience I’d say reposurgeon’s advantage is accessibility – a less spiky interface, better documentation, decent embedded help.

  6. Or, if you prefer Rob Pike, “Data structures, not algorithms, are central to programming.”

    Fred Brooks: “Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious.”

  7. > I wish there had been teaching materials when I was a noob – I had to spend a
    > quarter-century learning how to do it right. Sadly, there still aren’t; all we have is a handful
    > of people who autodidacticated themselves.

    The Structure and Interpretation of Computer Programs is pretty good in this regard. See lecture 4B on operators (http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-001-structure-and-interpretation-of-computer-programs-spring-2005/video-lectures/4b-generic-operators/) or 8A on logic programming (http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-001-structure-and-interpretation-of-computer-programs-spring-2005/video-lectures/8a-logic-programming-part-1/), or 3A on Escher pictures (http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-001-structure-and-interpretation-of-computer-programs-spring-2005/video-lectures/3a-henderson-escher-example)

  8. ESR> ESR: I wish there had been teaching materials when I was a noob – I had to spend a quarter-century learning how to do it right. Sadly, there still aren’t; all we have is a handful of people who autodidacticated themselves. RMS. Stephen Wolfram. Me. A few others.

    Out of curiosity: Do you have an opinion about Martin Fowler’s book on the subject? It’s been on my to-read list for years because I think highly of the author, and because your chapter on mini-languages in The Art of Unix Programming convinced me it’s a topic worth learning more about. Somehow I never got around to it, though.

    1. >Out of curiosity: Do you have an opinion about Martin Fowler’s book on the subject

      I haven’t read it. Sounds like I ought to.

      I too have a high opinion of Fowler, by the way. His book Refactoring was very good.

  9. Do we make too many of our software tools automatons when they should be judgment amplifiers? And why don’t we write more DSLs?

    We don’t write more DSLs because modern software development doesn’t work that way. Even open source developers have increasingly embraced the belief system of mass-market software development, which is centered around a fictive construct called the Average User. The Average User is stupid and afraid, and does not care about orthogonal, infinitely composable sets of primitives. It does not want to deal with software unless that software has a finite, circumscribed, readily identifiable set of knobs that allow it to do certain well-defined tasks. (Should the initial system be insufficient, there is always a secondary market in plug-ins, add-ons, and “apps” to be exploited.) Anything else is fundamentally scary.

    Of course real end users don’t behave in such a silly fashion, at least not all the time — but selling software for the Average User has proven much more profitable than selling software that can be easily instructed to solve real users’ specific needs with what amounts to a programming language with domain-relevant intrinsics, except for certain very limited and highly technical domains.

    DSLs are unquestionably the Right Thing for building tractable solutions to messy problems — even if you end up draping a GUI over the whole works. But it’s hard building a case for that that’s palatable to the CADTs building today’s software — both in the proprietary and open-source realms.

    Yet another example is Emacs. You have to sort of forget that Lisp is theoretically general-purpose for a moment; if you do you’ll see the same hard-over-soft pattern, DSL underpinning something that doesn’t look like one.

    Lisp is a DSL for building DSLs! Which makes it quadratically scary. That may account for why it has such difficulty gaining traction even among seasoned, professional developers.

  10. Well, if you want to construct a DSL, you need some kind of lexer+parser combination. For simple languages (and a DSL should in most cases be simple) parsing by hand or using regular expressions is enough (the latter assuming that the DSL is regular), but time consuming.

    I know of two solutions that make creating parsers easier. The first is Marpa (a low-level C library plus a Perl module), implementing table-based parsing; the second is Perl 6 grammars / the Regexp::Grammars Perl module, implementing an augmented PEG parser (think readable, modular, and heavily augmented regexps)… though there is not much theory behind the latter.

    Yacc / Bison… no, thanks: http://jeffreykegler.github.io/Ocean-of-Awareness-blog/individual/2010/12/killing-yacc-1-2-3.html A good parser needs to parse in reasonable time and, most importantly, to give good diagnostics of parse-time errors. Handling ambiguity would be nice.

    Is there something like this for Python?

    1. >Well, if you want to construct a DSL, you need some kind of lexer+parser combination.

      Not necessarily. Reposurgeon gets away without one because it doesn’t need a recursive grammar. It doesn’t need a recursive grammar because it doesn’t have control structures in the normal sense. (It does have a weak form of if-then in the selection-expression sublanguage, but that’s seldom used. There are no loops other than implicitly over selection sets.)

      Don’t get my motivation wrong here; I’m completely comfortable writing lexer/parser combinations, about as expert with lex/yacc as anyone living, and fully capable of using Lisp as a DSL framework. But sometimes you don’t need that generality. Repository surgery does not; linear sequences of commands over complex selection sets are sufficient.

      There is a macro facility. But this doesn’t really change the DSL’s place in the hierarchy of language types, it’s just a convenience for writing repetitive operation sequences more concisely.

  11. > Well, if you want to construct a DSL, you need some kind of lexer+parser combination.

    This is precisely why lisps make the lisp parser available to you at run-time; you can call `read` any time you want in order to read in an s-expression. It parses it for you and gives you back ordinary lisp objects; no extra parsing is required. If you want to support something that looks less like lisp, then you can supply it with syntax macros.

    Reposurgeon uses a Python module called `cmd` (https://wiki.python.org/moin/CmdModule), which provides a REPL and a very simple command syntax but makes you parse the arguments yourself (each command gets a string containing whatever was left of the command line after parsing out the command name from the front). Reposurgeon just splits them on whitespace, as I recall, and treats the resulting tokens as either keywords to be matched exactly, regular expressions to be given to the `re` module, numbers, etc.

    When I’m writing a parser I prefer to use parser combinators, as the resulting code is much, much easier to read and modify than other formalisms. They do have a reputation for being slow, but that hardly matters in most cases.

  12. Bridging those requires case-by-case human judgment; therefore, the best tool should seek to amplify human judgment rather than futilely attempting to remove it from the process.

    This holds not only in ICT, but everywhere. A case in point is (criminal) law, where lawmakers try to design algorithmic laws and regulations that attempt to take human judgement out of the loop. The consequences are disastrous. (see “The Death of Common Sense” for great examples).

    On a deeper, philosophical note, humans tend to reason in language (with verbs, nouns, and modifiers). That makes language metaphors better than graphical metaphors for deeper reasoning. I think that is one of the reasons for the success of math, computer languages, and DSLs in open-ended incremental reasoning.

    And I think I will start rereading “The Art of Unix Programming”.
    (probably just parts of it, 500+ pages is a lot)

  13. Or, if you prefer Rob Pike, “Data structures, not algorithms, are central to programming.”

    Rather OT, but the one data structures course I’ve had was taken in German during an exchange year I spent in Brandenburg in my senior year of college. When the professor mentioned that the English term for Linksbaum was “Leftist tree”, I needed some convincing that that term was actually used in the English literature and was not a shoddy back translation, as it was the first time I had ever heard the term “leftist” used without political meaning.

  14. “the best tool should seek to amplify human judgment rather than futilely attempting to remove it from the process”

    Not necessarily. Some time ago BMW ran some tests that showed conclusively that automatic stability controls were superior to the best human drivers. I believe they buried the story promptly. (The last thing a BMW owner wants to hear is that a robot is a better driver than he is.)

    1. >automatic stability controls were superior to the best human drivers

      Well, duh. I’m not claiming every automaton should be a judgment amplifier instead. What I’m saying is software engineers have a preconception about full automation being best that is sometimes seriously wrong. The case in point is that fully automated repository conversions tend to be greatly inferior in quality to what a human driving reposurgeon can produce.

  15. > When I’m writing a parser I prefer to use parser combinators, as the resulting code is much, much easier to read and modify than other formalisms. They do have a reputation for being slow, but that hardly matters in most cases.

    What are parser combinators? (And should you worry that a combination of well-behaved grammars is not well behaved?)

  16. Parser combinators are functions which combine parsers. A parser in this sense is a function which performs a single step of the parsing process, usually by looking only at the beginning of a string. The return value of a parser function is generally a structure containing both the syntax tree that has been built up so far and the remainder of the string, or if the string doesn’t match then an error value. For example, if I have a function called `digit` which parses any digit character off of the front of a string, then I can use it with the combinator function `many` to get a new parser function which parses any number of digits off of the front of a string: `let digits = many(digit)`. Skipping ahead to a longer example, I might say that `let number = sequence(optional(sign), digits, optional(sequence(radix, optional(digits))), optional(sequence(token("E"), optional(sign), digits)))`.

    This gives me a new parser `number` which parses something fairly complex, but which can be used anywhere a primitive parser can be used: `let math = sequence(number, operator, number)`, etc.

    Of course, like any other parser you can get yourself into trouble by backtracking too much, or by specifying a grammar which is different than you intended, but for readability it’s hard to beat them. This is especially the case when you’re parsing something specified using BNF; they read almost entirely the same way.
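
    If it helps to see that machinery written out, here is a toy version in Python, assuming a parser is any function from a string to either (value, rest-of-string) on success or None on failure:

    ```python
    # Toy parser combinators: a "parser" is any function that takes a string and
    # returns (parsed_value, remaining_string) on success or None on failure.

    def token(t):
        """Parser matching a literal string at the front of the input."""
        def parse(s):
            return (t, s[len(t):]) if s.startswith(t) else None
        return parse

    def digit(s):
        """Parser matching a single decimal digit."""
        return (s[0], s[1:]) if s and s[0].isdigit() else None

    def many(p):
        """Combinator: apply p zero or more times, collecting the results."""
        def parse(s):
            results = []
            while True:
                r = p(s)
                if r is None:
                    return (results, s)
                value, s = r
                results.append(value)
        return parse

    def sequence(*parsers):
        """Combinator: apply each parser in turn; fail if any of them fails."""
        def parse(s):
            results = []
            for p in parsers:
                r = p(s)
                if r is None:
                    return None
                value, s = r
                results.append(value)
            return (results, s)
        return parse

    def optional(p):
        """Combinator: apply p, but succeed (yielding None) even if it fails."""
        def parse(s):
            return p(s) or (None, s)
        return parse

    # Built-up parsers read almost like the grammar they implement:
    digits = many(digit)
    number = sequence(optional(token("-")), digits,
                      optional(sequence(token("."), digits)))

    value, rest = number("-3.14+x")
    assert rest == "+x"
    ```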

  17. @db48x –

    > Parser combinators are functions which combine parsers. A parser in this sense is a function which performs a single step of the parsing process, usually by looking only at the beginning of a string.

    How does this differ from a recursive descent parser?

    (The toy example on Wikipedia doesn’t actually return a syntax tree, but just ‘thumbs up’ for a WFF or ‘fail’. The extension seems obvious to me.)

  18. Parser combinators are a particular implementation technique that implements recursive descent parsers. The Jparsec link in the wikipedia page references the Haskell “parsec” library, which is one of the earliest implementations. One of the particularly nice things about parser combinators in Haskell is that rather than having to implement all the combinators from scratch for repetition, selection, etc., much of the generic functionality can be obtained from the general monad interface. A flavor of recursive descent parsers, if you will.

  19. Well, if you want to construct a DSL, you need some kind of lexer+parser combination. For simple languages (and a DSL should in most cases be simple) parsing by hand or using regular expressions is enough (the latter assuming that the DSL is regular), but time consuming.

    Groovy sort of accidentally fell into being a stellar tool for DSLs because of interactions between its rules for closures (and their “delegates”) and the flexibility of its grammar around method invocations. Writing DSLs requires getting your head around a set of patterns that isn’t simple (and I’m not there yet), but the DSLs produced are expressive and have the advantage of being able to “shell out” to sandboxed general-purpose Groovy code at any point.

  20. > How [do parser combinators] differ from a recursive descent parser?

    As Jeremy said, it’s a way of implementing a recursive descent parser, specifically one which takes advantage of the first-class functions available in all interesting programming languages. Haskell adds a helping of monads on top that is pretty nice, but not strictly necessary. Also, it’s typical for the lexing and parsing to be combined, although that isn’t strictly necessary either.

  21. John D. Bell,

    The output of parser combinators *is* a recursive-descent parser. Parser combinators are higher-order functions — functions that operate on functions (parsers in this case). You use parser combinators to compose, sequence, and transform simple parsers into more complex ones, making building a specific parser faster and easier than if you had to write the whole thing out by hand.

  22. Well, the “amplifying human judgment” idea is definitely “Memex” worthy (i.e. skip to the [first/last] chapter for the obligatory [J.C.R Licklider/Vannevar Bush] citation).

    “Domain specific”, however, reminds me of something linguist Mario Pei wrote. While he was a renowned polyglot by his own, and others’, admission, when once hired to be a translator for a highly technical project between two groups of differing native speakers, he found himself completely overwhelmed and alone in the world of their particular jargon—“trade skill” if you will. The same would seem to be the case here with regard to defining a generalized markup for any number of disciplines which default to their own unique set of idioms. Obviously, this is not a new problem (collegially referred to as “the age-old problem” as relayed to me by a friend currently vexed by the same lack of—what should seemingly be—an off-the-shelf solution for his, or anyone else’s, field). I have had this same conversation with both neuroscientist and metallurgist alike.

    As an aside, when it comes to human judgment vs. automatons, I highly recommend The Glass Cage by Nicholas Carr as a digest for average, perennial (nay, make that terminal) larval wannabees like myself.

  23. ESR, what you refer to as the distinction between “automatons” and “judgment amplifiers” I have also seen referred to as AI vs. IA (Intelligence Amplification[1]). There was academic research on this in ye olden days, Engelbart being a notable figure in the field, although I am not aware of much since then.

    Although other engineers have also hit on the idea that languages[2] and compositional properties[3] are essential to IA, I think what you have here is probably some of the most practical advice I have seen on the matter (admittedly, I haven’t read Martin Fowler’s book on DSLs).
    [1] https://en.wikipedia.org/wiki/Intelligence_amplification
    [2] http://nealford.com/memeagora/2013/01/22/why_everyone_eventually_hates_maven.html
    [3] https://awelonblue.wordpress.com/2015/12/09/out-of-the-tarpit/

  24. > The output of parser combinators *is* a recursive-descent parser. Parser combinators are higher-order functions — functions that operate on functions (parsers in this case). You use parser combinators to compose, sequence, and transform simple parsers into more complex ones, making building a specific parser faster and easier than if you had to write the whole thing out by hand.

    What about a very important part of parsing, namely generating legible and helpful error messages when there are errors in the input?

    TeX’s awful and oft-misleading error messages come to mind…

  25. The complexities of parsers can be avoided by formulating your DSL as an API and using a common, systemwide protocol to invoke the API. Bonus: free type checking! (Repeat after me: all data has a type. ALL data has a type.)

    This is what COM is for under Windows; and honestly, it really is hard to beat Windows from a system integration standpoint. On Linux, D-Bus is a likely default for desktop uses, but protobuf, capnproto, and msgpack are candidates for distributed systems.

  26. “The case in point is that fully automated repository conversions tend to be greatly inferior in quality to what a human driving reposurgeon can produce.”

    That’s certainly the case now, but will not remain so, simply because the computer stores more data than a human, with storage capacity growing with time. Our capacity is already maxed out. “We can’t think of everything”, but the automated converters of the future will be able to ‘think’ of more things than we, as we build more of our experience into the machines.

    1. >the automated converters of the future will be able to ‘think’ of more things than we, as we build more of our experience into the machines.

      I’ll believe it when somebody writes an N-way converter that good. Which is not going to happen soon, because the best-qualified person to do it would be me, and I don’t know how to.

      I came by my skepticism about fully-automated conversions the hard way, by repeatedly tripping over the artifacts and plain crap left in translated repositories by people who thought they understood the problem well enough to fully routinize it. They didn’t.

      I don’t either, and I’m currently the world’s expert on this problem. I think you can take it to the bank that no machine procedure will beat a reposurgeon-assisted human until the strong AI problem has been solved.

  27. For parsing, I use LPEG (Lua Parsing Expression Grammar) primarily because you can compose the parser from smaller bits. For example, the code here:

    https://github.com/spc476/LPeg-talk/blob/master/strftime.lua

    will parse a strftime() format string and return another expression that can then parse date information in that format. So,

    parser = strftime:match("%Y-%m-%dT%H:%M:%S")

    will return a parsing expression that can parse "2016-02-18T22:04:22". It can also be used as part of a larger parsing expression, say, one to parse email headers. The authors of LPEG used the ability to overload regular operators to great effect here ('a + b' means match a OR b, 'a * b' means match a AND b).

  28. >> “I’m currently the world’s expert on this problem. I think you can take it to the bank that no machine procedure will beat a reposurgeon-assisted human until the strong AI problem has been solved.”

    Amen to this!

    With regard to your respondent, I can only say, “The Future is loading.”

  29. > What about a very important part of parsing, namely generating legible and helpful error messages when there are errors in the input?

    It’s mostly orthogonal to the type of parser you use; no matter what type of grammar you write, good error messages take elbow grease and forethought to produce.

    In this case the parser functions need to return enough information to build the error messages (line numbers and character ranges and whatnot), and the combinators mostly don’t care. The combinators usually throw away an error result, or return an error result given to them by a parser function unmodified, or occasionally combine error results into a single result.

    The `many` combinator calls the parser you pass in repeatedly until it fails, collecting the parse results as it goes; when the parser fails it throws away the error and returns the collected results.

    On the other hand, the `alternates` combinator calls all of its arguments in turn, looking for the first one to succeed and returning that success unmodified. Good error messages would generally require that it collect all of the failures and return some combination of them if all of the parsers fail, in order to produce a message to the effect that you were looking for foo, bar, or baz and didn’t find any of them.

  30. > It’s mostly orthogonal to the type of parser you use; no matter what type of grammar you write, good error messages take elbow grease and forethought to produce.

    That is not true. To show a good error message you need information from the parser about what tripped it, and if possible what could have been done for the parsed text to conform to the grammar. And the parser needs to know where the error is, not notice long after the fact that it cannot parse further.

    Perl 6 has excellent error reporting (augmented PEG grammars). TeX has horrible error reporting (LALR, i.e. yacc/bison-like).

    http://jeffreykegler.github.io/Ocean-of-Awareness-blog/individual/2011/04/bovicide-5-parse-time-error-reporting.html

  31. I’m willing to be proven wrong of course, but I’m pretty sure that the quality of your error messages depends much more on the amount of time and care you apply to the task rather than the type of grammar you use. TeX may not be a great example either; it was written so long ago that the run-time costs of the error messages would have been significant. Also would other contemporary parsers (in the Pascal compiler he was using, for instance) have been any better? Good error messages are expected today, but that’s a cultural improvement I’m sure.

    In any case, parser combinators give you a recursive descent parser, not an LALR parser. Even if LALR parsers always give you bad error messages, that’s not a downside for parser combinators.

    I’ll read the article you linked to.

  32. > I didn’t know filter-branch could be used to hack blobs. That’s interesting.

    You can run any shell script on the state of every revision. I thought that was what it was for – what did you think it was for?

    1. >You can run any shell script on the state of every revision. I thought that was what it was for – what did you think it was for?

      The documentation is confusing enough to make the designer’s intent unclear. I did manage to beat some usefulness out of it, but doing so was a struggle.

  33. SICP is a good resource, as mentioned.

    The underlying data model from OData (the EDM) is a good general-purpose modelling language that has the advantage of being very close to the action-verb workflow that you’ll need anyway to enable REST APIs to power web interfaces. Because that tends to be close to “user” interaction, the resulting reasonable designs are de facto DSLs. Exactly according to the pattern you’ve highlighted.

    Another large set of examples is the PowerShell, OData & remoting infrastructure for Windows systems management.

    My first reaction here was to think of the reposurgeon case as an extension of the REST API case to the interior process boundaries. Of course, inside the boundary custom formats beat verbose wire formats for speed but the semantics are the same.

  34. > but selling software for the Average User has proven much more profitable than
    > selling software that can be easily instructed to solve real users’ specific needs with
    > what amounts to a programming language with domain-relevant intrinsics, except for
    > certain very limited and highly technical domains.

    There are probably 3 billion potential customers for “Average Joe” software, because it does not only the things “average joes” need, but also the things Above Average Joe needs to do but doesn’t really care enough about to want to do himself.

    Lots of Math and CS types love TeX/LaTeX, which is allegedly a powerful tool, but frankly I’d rather use Quark XPress, and even LibreOffice lets me get “it” done faster. Now this is likely because I’ve not taken the time to learn it, but it doesn’t offer me a great advantage to do so.

    Heck, PostScript is one heck of a DSL and how many people even *realize* that, much less can be bothered to read it? (Is PostScript even a thing anymore, or am I seriously dating myself–haven’t played in that space since the 90s).

    OTOH, a DSL for managing server configurations greatly interests me (Puppet and Chef are sort of like this if you squint hard), but even your “Average” IT guy would want the GUI on top (i.e. they would recognize the utility but would have no interest in anything deeper).

    I think the reason that DSLs aren’t used more is simply that too many people in the software development world have (like me) very little formal education and have barely learned enough to do our current jobs. There is a C, C++, C# for Dummies. SQL for Dummies. Perl, Python, and Ruby for Dummies, even a LaTeX for dummies.

    And there is a “DSL for Dummies” but it isn’t about programming.

    I think you’d be surprised how many people learned whatever language they got hired for out of one of those books.

    Which is to say that too many of the people in “our” profession are actually “average joes”, and can’t build the sort of mental model of the problem that is necessary to build a DSL. Although I’m about to start thinking hard about how to do just that.

  35. ESR, unrelated question: I wish to educate myself about AGW so I can form an informed opinion. How may I do so in an 80-20 kind of way (least amount of effort for most pay-off)?

    1. >How may I do so in an 80-20 kind of way (least amount of effort for most pay-off)?

      That is a tough one. The subject is complex, and a lot of what passes for ‘fact’ is either out-and-out propaganda or model artifacts presented as though they were measurements of reality. I don’t think there is any easy way through that.

      I guess I’d say, learn about the basics outside of where they’re poisoned by politics, then apply those. Get a grasp on forensic statistics, especially on how to spot overfitting and nonphysical bugger factors. Learn basic radiative physics, the stuff near the Stefan-Boltzmann law, and some physical chemistry. That’ll at least get you started in the right direction.

      EDIT: Oh, and very concrete advice: Read Dr. Judith Curry’s blog.

  36. @ESR:
    “Well, duh. I’m not claiming every automaton should be a judgment amplifier instead. What I’m saying is software engineers have a preconception about full automation being best that is sometimes seriously wrong.”

    How can a software engineer determine *in advance* that the solution should be to amplify human judgment rather than attempt to automate it away? It seems this is an architecture-level (or above) decision, yet I’ve never seen anybody attempt to address it prior to working through the development cycle at least once.

    1. >How can a software engineer determine *in advance* that the solution should be to amplify human judgment rather than attempt to automate it away?

      Well, the way I did it in the case of reposurgeon was by noticing the failure modes of automatons. A particular kind of problem to watch out for is automatons that try to sweep domain complexity under the rug by using kludgy, approximate data transformations that human judgment could better if it had a toolkit handy.

  37. @William:
    Which is to say that too many of the people in “our” profession are actually “average joes”, and can’t build the sort of mental model of the problem that is necessary to build a DSL. Although I’m about to start thinking hard about how to do just that.

    I think another factor in it is that the hackeroid mental build tends to have a good intuitive grasp on programming on a small scale, but is given to laziness and consequent disorganization on a large scale. Even with a formal education, it is tempting to tune out the classes that talk about such boring things as structure and planning and get straight to the coding. I would say that the most important part of formal CS education is probably learning discipline. Everything else the hackeroid is well equipped to learn autodidactically.

    1. >Even with a formal education, it is tempting to tune out the classes that talk about such boring things as structure and planning and get straight to the coding.

      That’s odd. I wouldn’t say most of the hackers I know are like this. Thinking about architecture and elegant solutions is not a chore to be avoided, it’s the fun part.

      I wonder what accounts for the difference in experience? I suppose it’s possible that I differentially attract programmers who think like systems architects into my peer network. Or possibly it’s a years-of-experience thing. I don’t know.

  38. That’s odd. I wouldn’t say most of the hackers I know are like this. Thinking about architecture and elegant solutions is not a chore to be avoided, it’s the fun part.

    This.

    One of the reasons why I’m such a Lisp-head is because there is no quicker path, that I know of, from “system design in head” to “working prototype” than to use Lisp as an implementation language. Writing code is hard and it’s a long slog; failure to think things through up front just makes it harder.

    I wonder what accounts for the difference in experience? I suppose it’s possible that I differentially attract programmers who think like systems architects into my peer network. Or possibly it’s a years-of-experience thing. I don’t know.

    It could be a years-of-experience thing. John Carmack once noted that the way to develop fast algorithms is: if there are two ways to do something, do it both ways and see which one is faster. This could apply to finding maxima in efficiency of human effort as much as efficiency of CPU effort. It could be that you have to have written a few duds before you understand what to think about and how to think about it before you start coding.

    Also, note that a “hackeroid” is not a hacker the way a factoid is not a fact. The character type Jon Brase talks about is highly characteristic of bright high-schoolers and college-age kids with a jones for coding. Raw material for the next gen of hackers, but they don’t stay that way for long if they want to remain in the field.

    I think another factor in it is that the hackeroid mental build tends to have a good intuitive grasp on programming on a small scale, but is given to laziness and consequent disorganization on a large scale.

    What kind of scales are we talking about? The reason why I ask is because all too many programming projects have grown in scale beyond management’s ability to contain them and have turned into bug-ridden, slow-moving, sclerotic shitfests. And I’m not just talking about the latest multimillion dollar failed IRS computer revamp. These days “AAA gaming” is more like “subprime gaming”, as the overwhelming complexity of a typical big-budget game leads to uneven design at best and all too many bugs slipping through the cracks on release day.

    When you’re talking about large enough project scales, “laziness and consequent disorganization” can be easily confused with “refusal to acquiesce to demands of project leadership that doesn’t know what the fuck they’re doing, which demands are making the problem worse instead of better”.

  39. It is extremely common that the error is on line X, but the parser fails to notice any problems until it is on the distant, entirely unrelated, and entirely correct line Y, and then throws up an irrelevant and misleading error message.

    Thus, for example, you pass in an argument that lacks the required property. When, much later, the parser attempts to use the required property, it has a fit and issues an error message about that line where it tried to use the property, rather than the line where the argument was passed.

  40. Once I was looking at it from the other angle. Suppose you have a complicated app, like a mail client. Of course it should be configurable, so the user can set config options in a file or GUI. This basically means the user assigns values to certain variables. Then the mail client has user-configurable IF-THEN options: if this mail is from X and contains Y then put it into folder Z. This is another programming structure – conditions. Then suppose you can apply your new rule to a thousand old e-mails and they all get sorted into folders. This is a third programming structure: a loop.

    Now, I am not a CS type, I don’t really know what makes a language Turing-complete. But once I have variable assignment, conditions and loops, I’d call that thing at least a scripting language, because I can process data with it and that is kind of the whole point. Like in the old times when you fed in all the invoices on punched cards along with the program, and you got out a punched card or ribbon representing sales per month per state, or something.

    So programmers keep inadvertently inventing languages, even on the GUI, just because users want configurable apps and configurability naturally grows into scriptability.

    I am not sure what judgement amplifier means. I suppose this stuff – the program goes through 1000 mails and sorts it into folders according to specified conditions, is one. But then what is a good example of NOT a judgement amplifier?

    Interesting how the old guard says LISP is perfect for creating DSLs and yet they aren’t really doing it.

    One thing that follows from this all is that it is better to design most apps with scriptability in mind, so that e.g. it has a C/C++ core, including the interpreter of something nice, like Lua, and even the original programmer writes the “business logic” in that. This would ensure easy customizability and extendability.

    I don’t know if true DSL’s are really needed or are an advantage. In this case, would an embedded Lua or Python interpreter not be an elegant solution?
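
    To make the embedded-interpreter option concrete, here is a toy sketch in Python; the mail-sorting rule and the move_to_folder() helper are invented for illustration, and a real application would sandbox the user script rather than exec() it directly:

    ```python
    # A toy sketch of the "scriptable core" idea: the host application exposes a
    # few domain functions, and the "business logic" lives in user-editable
    # script text.

    FOLDERS = {}

    def move_to_folder(msg, folder):
        """Domain primitive exposed to user scripts."""
        FOLDERS.setdefault(folder, []).append(msg)

    USER_RULE = """
    for msg in inbox:
        if msg["from"] == "boss@example.com" and "urgent" in msg["subject"]:
            move_to_folder(msg, "Urgent")
    """

    inbox = [
        {"from": "boss@example.com", "subject": "urgent: TPS reports"},
        {"from": "nobody@example.net", "subject": "win a prize"},
    ]

    # The embedded "interpreter" here is just Python's own exec() run against a
    # controlled namespace containing only what the script is meant to see.
    exec(USER_RULE, {"inbox": inbox, "move_to_folder": move_to_folder})
    assert len(FOLDERS["Urgent"]) == 1
    ```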

  41. @Jeff Read

    >One of the reasons why I’m such a Lisp-head is because there is no quicker path, that I know of, from “system design in head” to “working prototype” than to use Lisp as an implementation language

    Yes, but does the fact that Common LISP has nowhere near as many modern libraries as, say, Java lead to reinventing the wheel? More generally, for the sake of other programmers, don’t you think one should “outsource” as much of one’s work to commonly used libraries as possible, ideally every app just a thin layer gluing common libraries in new ways? It would be the same principle as the division of labor in economics or the division of concerns in other aspects of programming.

    This is why I dislike when a new language is introduced with, say, a Fibonacci. It’s not 1960. Computers don’t just compute. Demonstrate something like pulling data from a web service and putting it into a MySQL database, hopefully reusing already existing libs for this and not reinventing it. That is what something useful looks like…

    Then again I appreciate that hacker types are usually attracted to the heavy lifting part. I.e. actually making those libraries.

  42. @ESR

    >>Even with a formal education, it is tempting to tune out the classes that talk about such boring things as structure and planning and get straight to the coding.

    >That’s odd. I wouldn’t say most of the hackers I know are like this. Thinking about architecture and elegant solutions is not a chore to be avoided, it’s the fun part.

    I don’t know hackers, but generally speaking if you teach students theory before practice, they simply don’t pay attention because they don’t know why.

    Good education starts with practice, ideally, even doing things in intuitive ways and failing, then teaching how it is done, and only then the theory why.

    The go-to example is normalization. Boyce-Codd Normal Form and 4NF are simply words to memorize if you are a student and never designed a simple database app. If you design it first, or at least look at a few common ones, then this theory helps understanding the why.

    Civil engineers should ideally lay bricks for a summer job, get in school for a bachelor, exit as someone who can read blueprints and manage bricklayers, work a year, go back for a masters and only then draw blueprints. If you never smelled wet mortar you will not know what exactly many rules for drawing blueprints are for and thus will not really pay attention.

    Similarly, teaching foreign languages should not begin with memorizing grammar tables, it should begin with a list of useful phrases and only then extracting and generalizing the grammar rules behind them.

  43. > I am not sure what judgement amplifier means. I suppose this stuff – the program goes
    > through 1000 mails and sorts it into folders according to specified conditions, is one. But then
    > what is a good example of NOT a judgement amplifier?

    A pure automaton would sort them into folders for you, without asking for your input on how you would like the sorting to be done. Eric mentioned repository-conversion tools that resolve ambiguities in the original repository structure automatically, and therefore incorrectly at least some of the time.

    > One thing that follows from this all is that it is better to design most apps with scriptability in
    > mind, so that e.g. it has a C/C++ core, including the interpreter of something nice, like Lua,
    > and even the original programmer writes the “business logic” in that. This would ensure easy
    > customizability and extendability.

    You’re perfectly correct :)

    Lots of applications are designed this way; emacs is an old favorite, but new favorites like your favorite games are also good examples. They generally have a core of C++ coupled with an interpreter for some other language (lua is quite popular) plus customizations to that language which make it easier to deal with the game domain (which we’ll call a DSL even if you can’t add new syntax to most languages).

    > Yes, but does the fact that Common LISP has nowhere as many modern libraries as say
    > Java lead to reinventing the wheel? More generally, for the sake of other programmers, don’t
    > you think one should “outsource” as much of one’s work to commonly used libraries as
    > possible, ideally every app just a thin layer gluing common libraries in new ways? It would be
    > the same principle as the division of labor in economics or the division of concerns in other
    > aspects of programming.

    Yes and no. When you deploy a program you are responsible for its behavior/performance/etc. That is, if you discover that it has some bug then you can’t just give up and not fix the bug merely because the cause of the bug is a bug in a library that you used (horror stories about dysfunctional organizations notwithstanding). Instead you must either fix the library, replace the library with another, or reimplement the library (or I suppose you could live with the consequences). Use as many libraries as you can get away with, but don’t forget that you’re still responsible for the correct behavior of the program whether you wrote the code yourself or not.

  44. > It is extremely common that the error is on line X, but the parser fails to notice any problems until it is on the distant, entirely unrelated, and entirely correct line Y […]

    That’s what clueful parsers (like Marpa and Perl 6) are about: they always know what to expect at any given time, so they detect errors immediately and show not only the correct line and position within the line where the error is, but also suggestions for correction.

    It is supposedly hard to add good error reporting to a yacc/bison-based parser, because of the clueless property you wrote about…

  45. @esr:
    Thinking about architecture and elegant solutions is not a chore to be avoided, it’s the fun part.

    Indeed it is, but it’s structure-of-process-of-thinking that I’m talking about when I say “structure”, not structure-of-product-of-thinking, if you know what I mean. And by planning, I don’t mean “how should this API look” so much as “what order should I implement things in to best keep myself engaged in the process”. In any case, everything is very fun at small scales, but beyond a few hundred LOC lack of mental discipline tends to do me in.

    Also, see Jeff’s comment about hackeroid not being the same as hacker. From what I can tell, the traits I’ve described are fairly common in pre-hackers, and stem from a laziness that I’m pretty sure I’ve heard you mention as part of the hacker personality, but are grown out of with education and experience.

  46. I’d like to know opinions on creating a DSL vs. bolting in a scripting engine like Lua or TCL.

    One of the systems that we built in the late ’90s had a “power user” mode that exposed a large number of the API points and let you write scripts in TCL. On a regular basis we would get some of these scripts and include them as base functions for other users. The user base was happy with this; they often used the TCL scripting to build out new features.

    For my own stuff I’ve been using Lua the same way; it’s pretty easy to embed it into C programs that do the grungy behind-the-scenes work, and for the more esoteric items I just script them.

    Creating a DSL sounds like fun, but then the grammar design, parser build, interpreter build, oh crap messed up the grammar, fix without breaking old scripts, update parser, interpreter recode sounds like work I can skip by flinging it to Lua or some other scripting tool.

    Thanks for your thoughts.

  47. Common Lisp has enough libraries these days to get by. Installing Quicklisp gives you access to the major ones.

    But I tend to reach for Gambit Scheme, which has an easy FFI that can import functions from any C library. I even used these to create a simple Objective-C binding, which was subsequently used to write health-care apps in Gambit for iOS.

    1. >What makes you feel that way?

      You’d probably have to be a native English speaker to notice. It has the same feel as a lot of half-dead 18th- and 19th-century coinages like “absquatulate” or “borborygmus”, one of reaching for excessive classicism in order to sound high-flown or erudite.

  48. I’d like to know opinions on creation of a DSL vs. bolting in a scripting engine like Lua or TCL.

    How about bolting in a Lisp and then creating a DSL in that Lisp?

    I’ve had good experience integrating TinyScheme (which comes in a single .c file).

  49. >Well, if you want to construct DSL, you need some kind of lexer+parser combination.

    Only if you need an external DSL. This is why internal DSLs are excellent for prototyping.

    > What about a very important part of parsing, namely generating legible and helpful error messages when there are errors in the input?

    This! Yes, this is the right question. The most difficult and value-adding job of a (language) parser is obviously not parsing correct strings, it’s producing the most helpful error messages. Most parsers in the wild will output a useless load of stupid errors even when their input is just one character away from valid.
    Yet everybody keeps claiming that parsing is just a function from string to a data structure. Another gap between theory and practice…

    > all we have is a handful of people who autodidacticated themselves. RMS. Stephen Wolfram. Me. A few others.

    *sigh*
    Now I’m totally ok with admitting that you are smart, or very smart, or smarter than me (assuming that a comparison function is defined), and I’m also kind of ok with reading yourself explaining that you are yourself smart, but if at some point you could just avoid directly pretending that you are one among the top 10 geniuses on Earth, your writing would be less annoying to read, especially when such a claim is not related in any way to an otherwise very interesting topic.

    1. >but if at some point you could just avoid directly pretending that you are one among the top 10 geniuses on Earth

      You read that in, I didn’t put it there. The top 10 geniuses on Earth are doing much more difficult things than software engineering – theoretical physics, higher mathematics, that sort of thing.

      I’ve written before that I don’t think I rate comparison with those people. The gulf between myself and someone like (say) Terence Tao is vast. Compared to him, the main difference between me and someone with an average IQ is that I at least have some dim and approximate idea of what having a mind like Terence Tao’s would be like.

  50. but if at some point you could just avoid directly pretending that you are one among the top 10 geniuses on Earth, your writing would be less annoying to read, especially when such a claim is not related in any way to an otherwise very interesting topic.

    He did not claim extraordinary intelligence, he claimed expertise in a domain that not a lot of people have experience in, and frustration that knowledge of that domain is not part of the typical computer science education.

  51. >*sigh*
    Now I’m totally ok with admitting that you are smart, or very smart, or smarter than me (assuming that a comparison function is defined), and I’m also kind of ok with reading yourself explaining that you are yourself smart, but if at some point you could just avoid directly pretending that you are one among the top 10 geniuses on Earth, your writing would be less annoying to read, especially when such a claim is not related in any way to an otherwise very interesting topic.

    You should try to learn to read. I mean, really read, grasping the meaning, and not just pouring letters into an unused part of your brain.

  52. @ esr

    > You’d probably have to be a native English speaker to notice.

    True, as the Spanish word for it is “autodidacto, -ta”. But I see your point.

    > excessive classicism

    And at the other extreme you’ve got Anglish/Ander-Saxon, which I suspect you don’t like either.

    1. >And at the other extreme you’ve got Anglish/Ander-Saxon, which I suspect you don’t like either.

      “Don’t like” is too simple. Both the Latinate and Ander-Saxon tendencies have their uses. The Greco-Latin word-hoard is an invaluable resource for coining scientific and technical jargon. On the other hand, when I wrote for Battle For Wesnoth and was aiming for vivid, muscular, emotive effect in my prose I wordsmithed in a spare Anglo-Saxon style.

  53. And at the other extreme you’ve got Anglish/Ander-Saxon, which I suspect you don’t like either.

    But the Anglo-Saxon for “autodidact” is “self-taught”, which isn’t too bad. Unless the sight of Anglo-Saxon “-ugh-” in a word makes you go ugh.

    As for me personally, Uncleftish Beholding is very funny.

  54. >EDIT: Oh, and very concrete advice: Read Dr. Judith Curry’s blog.

    I would also recommend ClimateAudit.org. Typical post is an in-depth critique of the statistical techniques, data set selection or research design of a published climate paper.

  55. OT as all hell, but I wanted to get this on your radar. Twitter recently created a thing called “Trust and Safety Council” that includes SJWs like Anita Sarkeesian, and they’ve already started suspending accounts of their enemies like R. S. McCain.

    We need an alternative to Twitter that can operate in a decentralized fashion, making it impossible for Leftist thugs to take it over. And the best person I could think of to lead the project is ESR (where “lead” is implied to include “figure out someone else competent to pawn it off on if that’ll get the job done”).

    1. >Twitter recently created a thing called “Trust and Safety Council”

      Who can hear a name like that without getting an Orwellian chill?

      I think this one will solve itself. Twitter’s losing users and its stock price is tanking. In future years it may become the B-school case for Why You Don’t Let SJWs Take Over.

      There already is a decentralized Twitter-equivalent called “Quitter”. FSF runs it, so there’s dumb anti-capitalist rhetoric on the front page, but it has the right architecture not to be fuckwithable.

  56. @esr:

    Thinking about architecture and elegant solutions is not a chore to be avoided, it’s the fun part.

    I wonder what accounts for the difference in experience? I suppose it’s possible that I differentially attract programmers who think like systems architects into my peer network. Or possibly it’s a years-of-experience thing. I don’t know.

    My history (a combination of self-taught and formal USAF training) leads me to expect that 90% of the answer comes down to the curricula for programming, with the final 10% being “the craftsman’s knack” [a combination of attitude and field-tested intuition].

    That first 90% though? Emphasis on CV-ready skills (like memorizing Java/C#/lang-du-jour library interfaces) over theory—as opposed to theory first, so instructors can wield it as a bludgeon on denser students. Tools and pacing that discourage exploratory programming, a.k.a. “learning by doing”, as if the solutions in CompSci are so obvious or easily derived from pure theory that practice is not required. Separate modules covering algorithms and data structures, as if the characteristics of one were irrelevant to the selection of the other. Treating languages, even in the same general family, as being so alienated from one another that they can’t coexist within one course.

    If you can clear that gauntlet without losing faith that elegant solutions exist, or concluding that thinking about architecture is a chore, it’s almost certainly because you have autodidacticated yourself to above the minimum level of skill the coursework was intended to provide.

  57. @William:

    Lots of Math and CS types love TeX/LaTeX, which is allegedly a powerful tool, but frankly I’d rather use Quark XPress, and even LibreOffice lets me get “it” done faster.

    A TeX backend to LibreOffice would be interesting.

  58. A TeX backend to LibreOffice would be interesting.

    When I was in school (physics and computer science), I did most of my work in LyX, a thin graphical UI over LaTeX. It produces LaTeX output, and you can include inline LaTeX (I astonished one set of lab mates by plugging a serial cable into the digital multimeter and writing a script that polled the values and constructed a data table that went straight into my report). With proper (read: Emacs) keybindings, I could actually take notes in calculus/DE classes faster than most people could take them by hand.
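
    The multimeter trick was nothing fancy; roughly this, in modern Python (pyserial for the port; the device path, baud rate, and the meter’s one-reading-per-line output are guesses you’d adjust for your hardware):

        import serial   # pyserial

        def read_samples(port="/dev/ttyUSB0", count=10):
            # Assumes the meter prints one ASCII reading per line.
            with serial.Serial(port, 9600, timeout=2) as meter:
                return [float(meter.readline().decode().strip())
                        for _ in range(count)]

        def latex_table(samples):
            rows = "\n".join(r"%d & %.3f \\" % (i + 1, v)
                             for i, v in enumerate(samples))
            return ("\\begin{tabular}{rr}\n"
                    "Sample & Reading \\\\\n"
                    "\\hline\n" + rows + "\n\\end{tabular}")

        print(latex_table(read_samples()))   # paste (or \input) into the report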

  59. @ESR

    But you see people getting arrested for opinions expressed on Facebook (in the UK and Europe), and yet Diaspora is not taking off, not even in alt-rightish circles.

    Audience is a heady drug. Even when dangerous.

  60. > Lots of Math and CS types love TeX/LaTeX, which is allegedly a powerful tool, but frankly I’d rather use Quark XPress, and even LibreOffice lets me get “it” done faster. Now this is likely because I’ve not taken the time to learn it, but it doesn’t offer me a great advantage to do so.

    Well, LibreOffice / MS Office can let you get “it” done faster… provided that you don’t need more than a few figures (the newest LibreOffice has the beginnings of floats, so maybe this will improve), don’t need more than a few listings (and don’t need line numbers, and don’t need them “live”), don’t need more than a few dozen equations, and never change your mind about how to name and style things (for example which symbol to use and how to style it).

    The less said about the ability to merge changes in word processors, the better…

  61. I’m late to the discussion, but wanted to make a few comments:

    1) Anything very high level you could say about DSL design would be appreciated. I’ve been toying with an idea for creating a sort of handwritten programming language for dealing with some of my management and logistics duties. I’m just not suited for keeping track of large numbers of small details, so I want to create what amounts to a beefed-up chessic notation for tracking employees, their actions, and their locations. Bonus points if it can support basic information states and one or two levels of conditional nesting.

    This will involve conceptualizing the sorts of ontological primitives that a DSL programmer would deal with.

    2) The whole point of the OP was that in some domains it’s better to use a judgment amplifier instead of an automated solution. I’m unaware of anyone who has tackled head-on the question of which kinds of domains are better suited for automation and which for judgment amplification.

    A related question is which sorts of problem spaces are best explored with decentralized/crowd techniques, like protein folding, and which are best explored with centralized/expert techniques, like cardiac surgery.

    You could even set up a little four-quadrant map with one axis being the crowd/expert axis and the other the judgment/automation axis. The crowd/automation quadrant would have evolved algorithms selected on the basis of some fitness function, the expert/automation quadrant would contain your basic neural-network expert systems, and so on.

    Some obvious answers suggest themselves immediately: crowds are better with problems amenable to brute force search, for example. But I bet there are some interesting tangents to be found in pulling on these threads.

    1. >Anything very high level you could say about DSL design would be appreciated

      One thing that occurs to me is: don’t rush to the conclusion that your DSL needs to have the apparatus of a general-purpose language – Turing-completeness, recursive grammar.

      Sometimes that’s appropriate – Emacs is a case in point – but sometimes it’s just unnecessary complexity overhead. Reposurgeon didn’t even get macros and named variables until fairly late; it still isn’t Turing-complete and probably never will be.

    2. >You could even set up a little four-quadrant map

      That’s an interesting idea. I think you should run with it.

  62. [Latinate v. Anglo-Saxon]

    The combination of the historical power of the Roman Catholic Church and the peculiar circumstances of William the Conqueror importing a bunch of Norman French nobility to run things led to English having sort of a Multiple Personality Disorder. (I usually say it’s the bastard child of the Germanic and Romance families.)

    For many concepts, English has two different words/phrases, one based on Anglo-Saxon roots and another imported from Latin either directly or via French. Without exception, the Latinate form is seen as refined, educated, quite literally “noble” and “upper class” (as the Brits put it, “posh”), while the Anglo-Saxon form is crude, uneducated, lower-class. One’s choice of which word to use is in many ways a form of signalling “I am of your tribe”.

  63. @The Monster –

    > Without exception, …

    Not sure that’s completely true.

    > … the Latinate form is seen as refined, educated, quite literally “noble” and “upper class”
    > (as the Brits put it, “posh”), while the Anglo-Saxon form is crude, uneducated, lower-class.
    > One’s choice of which word to use is in many ways a form of signalling “I am of your tribe”.

    And an excellent way to mess with people’s heads (if you’re familiar with this idea, and have a wide enough vocabulary) is to deliberately switch registers in the middle of a discourse.

  64. @Jeff Read

    Still looks a bit more cumbersome than LyX, especially for formula entry.

  65. @John D. Bell
    I eagerly await a single counterexample.

    Just show me one case where the Germanic term is considered high-class/educated/sophisticated compared to the Romance. If you’re able to find one, it will be something that’s entered English recently enough that it doesn’t carry any historical association with any particular class.

  66. @Monster,

    Another class where the Germanic term would sound higher-class is where it has been mostly forgotten while the Latinate term is in common usage. The esoteric always sounds higher register.

  67. > Just show me one case where the Germanic term is considered high-class/educated/sophisticated compared to the Romance.

    Black (Germanic) vs Nigger (Romance)

  68. @Deep Lurker
    I believe you’ve found the exception. Even the less-offensive “negro” (used even today in such institutions as the United Negro College Fund and the Negro Leagues Museum, although they clearly use the word in a historical sense) does come from Spanish/Portuguese, making it of Latin origin. The -er suffix in your example is pretty strongly Germanic, though. (Ever notice how lower-class people watch movies in theaters, while rich folk attend theatre events, or view fine artwork at cultural centres? [Yes, I know the -re form is more common in the UK.])

    The French “noir(e)” certainly reads as higher-register than Anglo-Saxon “black” (even if “film noir” comes across as a bit cheesy to modern style sensibilities), so perhaps we’d find some other cases where Iberian forms read as lower-class than Gallic. That the Spanish-speaking immigrants to the US tend to be from lower SES might feed such a distinction here. I don’t know whether a similar attitude exists in the UK.

  69. People not wanting to learn computing theory before learning the practical bits remind me of people in high school math class demanding to know when they will use the quadratic formula.

    1. >People not wanting to learn computing theory before learning the practical bits remind me of people in high school math class demanding to know when they will use the quadratic formula.

      I have a complicated relationship with computing theory that comes of being a self-taught programmer with a mathematical turn of mind.

      Computing theory doesn’t scare me – in fact, I rather enjoy various kinds – but I almost never go out of my way to learn any of it. I pick it up as needed for the job. I don’t know whether, in your terms, I am in the “not wanting” category, but I do know that I find my learning process is far more effective when I have a practical motive rather than simply a vague sense that I ought to know something about X.

      (Thus, for example, what I know about parsing and formal automata and finite-state machines I learned specifically because I enjoy writing compilers and interpreters.)

      Therefore I say, don’t be too hard on the kids who don’t want to learn theory. I think the fault lies in their pedagogues’ failure to connect theory to anything the kids might want to actually do. I’m certain, for example, that if I needed to teach linear algebra to a CS student, I could improve his or her retention rate a lot by using the motion of sprites in video games as a motivating example.

  70. “That’s an interesting idea. I think you should run with it.”

    Okay, what follows is my first pass at the idea. It should not be taken as a strong assertion of any particular aspect of the analysis, nor should it be taken as implying that I’m an expert on the topic.

    Imagine a horizontal axis which runs from centralization/expertise to decentralization/crowdsourcing, and a vertical axis which runs from judgment to automation, yielding a four-quadrant graph.

    A representative entry in quadrant one (Q1) would be a quintessential judgment amplifier. Examples that come to mind are the kinds of programs talked about by Shyam Sankar in his TED talk or the fascinating-but-as-yet-unproven “Chernoff faces”.

    In Q2 we have mechanisms for improving the judgments of crowds. The only thing I could really think of was prediction markets, though I bet you could make a Libertarian case for market prices working as exactly this sort of mechanism.

    In Q3 we have automated experts, the obvious example of which would be an expert system or possibly a strong artificial general intelligence.

    And in Q4 we have something like a swarm of algorithms evolved by a different meta-algorithm making random or pseudo-random changes to a seed code, all of which are then judged by some kind of fitness function.

    (Contributions of more/better examples would be appreciated).

    Now the question becomes: where should we try to deploy these sorts of systems?

    It seems to me like Q1 systems would be better at solving problems that either a) have finite amounts of information that can be gathered by a single computer-aided human or b) are problems that humans are uniquely suited to solve, like intuiting and interpreting the emotional states of other humans.

    The previously mentioned Chernoff faces, if we ever get them working right, are an especially interesting Q1 system because what they do is take statistical information, which humans are notoriously dreadful at working with, and transform it into a “facial” format, which humans have enormously powerful built-in software for working with.

    Q2 systems should be used to solve problems that require more information than a human can work with. In something like a prediction market the point is to have a profit motive incentivize human experts to incorporate as much information as they can in as honest a way as they can, and over a span of time there are enough rounds of updates that the system as a whole produces a price which contains the aggregate wisdom of the individuals making the system up.

    At least I think that’s how they work.

    Why can’t we have a prediction market that performs heart surgery? Because a huge amount of the relevant information is “organic”, i.e. muscle memory built up over dozens and eventually hundreds of similar procedures. This information isn’t written down anywhere and thus can’t be aggregated and incorporated into a “bet” by a human non-surgeon.

    Based on some cursory research, my example of a Q3 system, i.e. expert systems, appears to subdivide into knowledge bases and inference engines (a toy sketch of what I mean appears at the end of this comment). I’d venture to guess that they are suitable wherever knowledge can be gathered and encoded in a way that lets computers perform inferences and logical calculations on it. Wikipedia’s article contains a chart detailing some areas where expert systems have been used, and also points out that one drawback to expert systems is that they are unable to acquire new knowledge.

    That’s a pretty serious handicap, and places further limits on what types of problem a Q3 system could solve.

    Finally, Q4 systems are probably the strangest entities we’ve discussed so far, and the only examples I’m familiar with are from the field of evolvable hardware. IIRC using evolutionary algorithms to evolve circuits yields workable results which no human engineer would’ve thought of. That has to be useful somewhere, if only when trying to solve an exotic problem that’s stymied every attempt at a solution, right?
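
    (To make the Q3 case concrete, here is the toy “knowledge base plus inference engine” I alluded to above. It’s invented for illustration, not any real expert-system shell, but it shows the shape: facts, rules, and forward chaining until nothing new can be derived.)

        RULES = [                     # knowledge base: (premises, conclusion)
            ({"has_fever", "has_cough"}, "possible_flu"),
            ({"possible_flu", "short_of_breath"}, "see_doctor"),
        ]

        def forward_chain(facts, rules):
            """Inference engine: keep firing rules until a fixed point."""
            facts = set(facts)
            changed = True
            while changed:
                changed = False
                for premises, conclusion in rules:
                    if premises <= facts and conclusion not in facts:
                        facts.add(conclusion)
                        changed = True
            return facts

        print(forward_chain({"has_fever", "has_cough", "short_of_breath"}, RULES))
        # derives 'possible_flu' and then 'see_doctor'

    And the handicap mentioned above is visible right in the sketch: the engine can only ever work with whatever rules somebody hand-wrote into RULES.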

    1. >Okay, what follows is my first pass at the idea.

      It’s a good start; with some research and thinking, I think you could develop it into a useful taxonomy. A few comments:

      >I bet you could make a Libertarian case for market prices working as exactly this sort of mechanism.

      Yes, and I can tell you exactly how they do it, too: by directing investment to where it generates most value.

      >And in Q4 we have something like a swarm of algorithms evolved by a different meta-algorithm making random or pseudo-random changes to a seed code, all of which are then judged by some kind of fitness function.

      The term you’re looking for is “genetic algorithms”. This is an actual thing.
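
      A minimal sketch of the idea, in Python (toy fitness function and invented parameters; real GAs differ mainly in the encoding and the selection scheme):

          import random

          LENGTH, POP, GENS = 20, 30, 40

          def fitness(genome):               # toy objective: count the 1 bits
              return sum(genome)

          def mutate(genome, rate=0.05):
              return [1 - g if random.random() < rate else g for g in genome]

          def crossover(a, b):
              cut = random.randrange(1, LENGTH)
              return a[:cut] + b[cut:]

          population = [[random.randint(0, 1) for _ in range(LENGTH)]
                        for _ in range(POP)]
          for _ in range(GENS):
              population.sort(key=fitness, reverse=True)
              parents = population[:POP // 2]        # selection: keep the fitter half
              children = [mutate(crossover(random.choice(parents),
                                           random.choice(parents)))
                          for _ in range(POP - len(parents))]
              population = parents + children

          print(max(fitness(g) for g in population))   # approaches LENGTH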

  71. @esr

    I would be interested in knowing about your prior exposure to the various theoretical domains. While I agree on the practicality and value of lazily learning, there’s a certain level of awareness that A Thing exists without which someone will blindly flail about. As an example, many integrals that are trivial in polar coordinates are difficult or intractable in Cartesian; it’s perfectly sensible not to learn how to use polar until it’s necessary, but someone who doesn’t even know that polar coordinate systems exist will spin wheels.

    A book of a hundred pages or so could identify and briefly describe a large variety of technical topics (e.g., the various sorts of trees) in a way that someone could quickly absorb a sense of the various tools that are available in the catalog, but I see lots of programmers who take astonishingly convoluted approaches simply out of complete ignorance of the solutions already available.

    1. >I would be interested in knowing about your prior exposure to the various theoretical domains.

      Um…prior to what?

      I told you I was self-taught. Starting in the 1970s when there was barely any CS instruction at all.

  72. @Trent Fowler
    “Now the question becomes: where should we try to deploy these sorts of systems?”

    What you are describing are optimization problems. The aim is to find the “best” solution (judgment) to a problem given a set of parameters and constraints (cost function).

    This is a search problem.

    The easy case is when you can iterate all possibilities and then pick the best. Next come problems with strong “simple” symmetries. Here you can set up analytical functions and solve mathematically (at least in principle).

    More complex situations are solved using some kind of numerical simulation, a.k.a. algorithms. This is where machine learning/data mining/AI is used. See the recent solution to the game of Go (the link to the original is inside the article).
    http://blogs.discovermagazine.com/crux/2016/01/27/artificial-intelligence-go-game/

    The most complex problem spaces are in the realm of NP-complete problems: you know you are good when you have found a solution, but there is no efficient way to find the best solution. This is the problem solved by life itself with Darwinian evolution. Here you can apply genetic algorithms that sample the solution space and use various forms of gradient descent to find local optima.
    https://en.wikipedia.org/wiki/Evolutionary_algorithm

    Genetic algorithms tackle the most complex of problems where you really have neither a clue nor any data about what a solution should look like. If you have existing solutions, better to use machine learning. If you have a clue, use simulations.
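
    A toy contrast between the easy case and local search (the objective is invented, just to show why local optima matter):

        def score(x):            # two humps: local peak at x=2, global peak at x=8
            return -(x - 2) ** 2 if x < 5 else 10 - (x - 8) ** 2

        CANDIDATES = range(0, 11)

        best_brute = max(CANDIDATES, key=score)      # the easy case: try everything

        def hill_climb(x):                           # local search, "gradient descent" style
            while True:
                better = [n for n in (x - 1, x + 1)
                          if 0 <= n <= 10 and score(n) > score(x)]
                if not better:
                    return x
                x = max(better, key=score)

        print(best_brute)                        # 8, the global optimum
        print(hill_climb(0), hill_climb(10))     # 2 vs 8: one start gets stuck locally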

    “I’ve written before that I don’t think I rate comparison with those people. The gulf between myself and someone like (say) Terence Tao is vast.”

    Heh. He and Von Neumann (and sometimes Yudkowsky) are my go-to examples when I’m trying to explain to someone that yeah, I’m in the top 1% of intelligence, but there are still people walking the Earth who make me look like a chimpanzee.

    “I pick [computer theory] up as needed for the job. I don’t know whether in your terms, I am in the “not wanting” category, but I do know that I find my learning process is far more effective when I have a practical motive rather than simply a vague sense that I ought to know something about X.”

    The proper role of theory vs. practice is an issue I’m butting up against as I prepare to start a long-term learning endeavor I’m calling “The STEMpunk project”. Basically, I plan on learning as much computing, electronics, mechanics, and robotics as I can in 10 weeks, 8 weeks, 6 weeks, and 8 weeks, respectively. Given my proclivities it’s just plain embarrassing that I don’t know more about this stuff than I do now, and I’m going to finish 2016 having fixed that.

    As of now I have it structured iteratively, where I’ll be completing some simple hands-on projects (usually kits meant for kids or complete n00bs), then a theory section, then some more serious projects (like wiring an electrical panel on a house).

    I simply couldn’t think of a better way to go about it. I considered just starting with theory and picking up projects as I go along, but I have a tendency to get bogged down in stuff like that, and I want to improve my hands-on skills.

    1. >Heh. He and Von Neumann (and sometimes Yudkowsky) are my go-to examples

      I think I know Eliezer well enough to predict that he would be amused and flattered if you put him in the Tao/Von-Neumann class, but he’d set you straight pretty fast. He’s as realistic as I am about this sort of thing. We’re simply not that bright, alas. We’re just barely bright enough not to bore the crap out of a Tao/von-Neumann-class intellect.

  74. “A book of a hundred pages or so could identify and briefly describe a large variety of technical topics (e.g., the various sorts of trees) in a way that someone could quickly absorb a sense of the various tools that are available in the catalog, but I see lots of programmers who take astonishingly convoluted approaches simply out of complete ignorance of the solutions already available.”

    This reminds me of The Simple Math of Everything, and I agree that it’d be invaluable and someone should write the damn thing.

    On the other side of the STEMpunk project I plan on doing a leaned-down AI math curriculum a friend of mine has been developing, and maybe doing a project where I self-train in the manner described in Generative Science. (I may call that one “The Carnival of Isomorphisms” :D)

    If, once all that is done, someone were to plop a book like the one you’re describing into my lap, why, that’d make me very happy indeed.

  75. “What you are describing are optimization problems. The aim is to find the “best” solution (judgment) to a problem given a set of parameters and constraints (cost function).

    This is a search problem.”

    Could you say a little bit more about the difference, or maybe relate it to the quadrant I’ve laid out (assuming you think it has any validity)?

  76. @Trent Fowler
    “Could you say a little bit more about the difference, or maybe relate it to the quadrant I’ve laid out (assuming you think it has any validity)?”

    If I understand you well, I think your crowdsourcing dimension is very much like a genetic algorithm where a lot of solutions are tried and optimized to find a more or less local optimum. Automation is where you try to run algorithms on defined problems.

    So, Q1 would be people thinking up solutions, which would be a lot like going through all the known options and picking the best. Q2 would be like a purely genetic algorithm, but with humans picking the best solution (which could easily be automated). Q3 would be mathematical & algorithmic solutions to well-defined problems. Q4 would be machine learning and deep learning (neural nets), i.e. using many, many known good solutions to learn how to get a very good solution.

    But I am not sure whether my view maps well into your quadrants now I look at it.

    “I think I know Eliezer well enough to predict that he would be amused and flattered if you put him in the Tao/Von-Neumann class, but he’d set you straight pretty fast. He’s as realistic as I am about this sort of thing. We’re simply not that bright, alas. We’re just barely bright enough not to bore the crap out of a Tao/von-Neumann-class intellect.”

    Correct, I don’t think Eliezer is as bright as those two, but he is sufficiently beyond me to make the point, I think.

    And though I doubt you care all that much, by my own estimation Yudkowsky edges you out by a small but non-trivial margin. I’d say I’m near the top of my own stratum, you’re at the middle of the one above me, Yudkowsky is at the top of the same one you’re in, and there’s at least one more above that, but I can’t really distinguish what’s going on because it’s all kind of fuzzy.

    Which gives rise to an interesting question.

    I doubt you’d have been able to produce The Sequences before you were thirty, but we’d never have gotten Dancing With the Gods out of Yudkowsky.

    So, what exactly is going on here? A naive model of intelligence would posit that pretty much any insight available to a person of a given level of intelligence should be available to a person of higher intelligence, and that seems to be true in nearly every case.

    But then we have these weird, orthogonal insights that are semi-closed off to certain kinds of minds.

    My best guess is that the mental channels through which insights move exert some influence on the end result of a person’s cognitive output, in at least two respects.

    One, people who are ever so slightly dimmer than someone else might be able to access insights they can’t, via something like synesthesia.

    Two, people of roughly equivalent intelligence can get the same levels of output through wildly different means. I’m not sure what Tao’s mind is like, but as far as I can tell he’s like me but way, way smarter. On the other hand, motherfuckin’ Srinivasa Ramanujan claimed to have these insane quasi-religious dreams in which advanced mathematics was revealed to him on scrolls by Hindu deities.

    How do you compete with that?

    1. >And though I doubt you care all that much, by my own estimation Yudkowsky edges you out by a small but non-trivial margin. I’d say I’m near the top of my own stratum, you’re at the middle of the one above me, Yudkowsky is at the top of the same one you’re in, and there’s at least one more above that, but I can’t really distinguish what’s going on because it’s all kind of fuzzy.

      Hm. Today Eliezer and I treat each other as intellectual peers, and I got a hint from his behavior the one time we met FTF that I may have been a role model he looked up to when he was younger. But you might be right that he’s somewhat brighter than me; that’s quite plausible, though it’s not a question I’ve considered seriously before because we’re both operating at a level where such comparisons are really difficult.

      I mean, what are you going to use as a common metric among people who have constellations of very particular capabilities that to Joe Average look near unique? How do you compare The Twelve Virtues of Rationality with The Cathedral and the Bazaar? Don’t get me wrong, I believe there is a common factor of general intelligence, Spearman’s g just won’t go away, but measuring it in the middle- and high-genius ranges is a stone bitch. We just don’t know how to do that very well yet.

      >I doubt you’d have been able to produce The Sequences before you were thirty, but we’d never have gotten Dancing With the Gods out of Yudkowsky.

      You raise a fascinating question. My first reaction was “Of course we wouldn’t have; Eliezer’s knowledge base isn’t quite broad enough.” The more I think about it, though, the more doubtful I get about both assertions.

      Here is a thing you don’t know: I’ve been thinking for decades about writing a book updating the ideas of General Semantics for a modern audience. I probably won’t do it now, because the Sequences exist…but if I had started on that project when I was 22? I won’t say I would definitely have produced an equivalent of the Sequences, but it wouldn’t have been an implausible outcome either. I think you’ve seen enough of my writing on mental hygiene and analytic philosophy to apprehend the possibility.

      On the other hand, Eliezer is certainly better at producing respectful pastiches of Zen Buddhist rhetoric than anyone I know but me, and maybe better than me. He is so damned good at it that he has got to have some kind of generative insight into the mysticism. Given that, the notion that he might have written Dancing With the Gods in a history close to this one starts to seem plausible.

      So…is it an essential difference in capability, or is it a contingent difference in life experience and where we chose to direct our attention? I don’t think the second theory can be ruled out, and I doubt Eliezer would think so either.

      I think Eliezer and I have in common that we’re bright enough to get good at almost anything you do with an intellect if we decide to pay close attention to it, the exceptions being a small class of really hard subjects for which you need a Tao/von-Neumann-class intellect. But ordinarily “difficult” skills like (to name one we have in common) really master-class English prose composition, those just…aren’t, really. If we want them, we’ll get them without fuss.

      This is another reason why I haven’t thought much about which one of us is brighter before – even if I knew, the information would be basically useless except for a silly dick-measuring contest neither of us cares to be in. Comparisons like that only make sense when they point at a capability difference that’s fairly large.

      >But then we have these weird, orthogonal insights that are semi-closed off to certain kinds of minds.

      Are they? I used to believe this, but am now doubtful. I’ve observed a threshold around IQ 150 above which you get serious polymathy. I don’t know for sure, but it may well be that around that level of general cognitive ability more specialized ways of gaining insight lose comparative importance – and that’s precisely why you start getting polymathy.

      >I’m not sure what Tao’s mind is like, but as far as I can tell he’s like me but way, way smarter.

      Indeed. Same assessment here.

      >On the other hand, motherfuckin’ Srinivasa Ramanujan claimed to have these insane quasi-religious dreams in which advanced mathematics was revealed to him on scrolls by Hindu deities.

      That doesn’t seem in the least bit odd to me. Not that I’ll ever have his talent, but I know from experience that when your unconscious mind does a complex cognitive task out of your sight its way of presenting you with the result can be, oh, a Pink Floyd lyric.

      So…was Ramanujan brighter than me? Hell to the fuck yeah. Were his mental processes really fundamentally different from mine? When I was younger I would have said “Of course!” without really thinking it all the way through. Now I find a doubt.

      But, of course, it’s possible that I don’t understand the mental lives of the stratum above me as well as I think I do.

      Anecdote: Just once I’ve had the experience of sitting down to breakfast with an extended family and feeling like the slow kid at the table. The family was Freeman Dyson’s. :-)

  78. “orthogonal language primitives”???

    I’m trying to figure out what right angles implies here.

    My best guess is that the substance of this criterion is that each primitive (irreducible) element of the language does something different from any other element, and that nothing any element does is in any way done by any other element.

    1. >My best guess is that the substance of this criterion is that each primitive (irreducible) element of the language does something different from any other element, and that nothing any element does is in any way done by any other element.

      That’s exactly right. That’s what a language designer means by orthogonality. The analogy to an orthogonal basis in a vector space is direct and intentional.
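
      A toy illustration (invented on the spot, not reposurgeon code) of what that buys you: when selections and verbs are independent primitives, every selection composes with every verb for free.

          import re

          events = [{"n": i, "text": t} for i, t in enumerate(
              ["fix typo", "add feature", "fix crash", "bump version"])]

          # Selection primitives: each one maps the event list to a subset.
          def sel_all():        return lambda evs: evs
          def sel_range(a, b):  return lambda evs: [e for e in evs if a <= e["n"] <= b]
          def sel_match(pat):   return lambda evs: [e for e in evs if re.search(pat, e["text"])]

          # Command verbs: each one acts on whatever selection it is handed.
          def verb_list(selected):
              for e in selected:
                  print(e["n"], e["text"])

          def verb_tag(selected, tag="reviewed"):
              for e in selected:
                  e.setdefault("tags", []).append(tag)

          # 3 selections x 2 verbs = 6 behaviors from 5 primitives.
          verb_list(sel_match("fix")(events))
          verb_tag(sel_range(1, 2)(events))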

  79. @ esr

    Trent Fowler mentioned “Generative science”. You didn’t include cybernetics or systems theory there; are they not generative?

    > I’ve been thinking for decades about writing a book updating the ideas of General Semantics for a modern audience. I probably won’t do it now, because the Sequences exist…

    That’s a shame, especially since it was going to feature koans as chapter-header quotes. Would you please name a few – say, five – articles within the Sequences you consider essential reading?

    BTW: from my humble, average-IQ perspective, there are no noticeable intellectual inequalities between you, your blog’s regulars (excluding yours truly), Yudkowsky, Von Neumann, etc. :-)

    1. >You didn’t include cybernetics or systems theory there; are they not generative?

      I’m not sure “cybernetics” is a live category any more; it seems to have split into optimal-control theory for analog systems on the one hand and computer science on the other. And I know systems theory desperately wants to be generative, but I’m not sure there’s enough there to actually achieve that yet.

      >Would you please name a few – say, five – articles within the Sequences you consider essential reading?

      That is a difficult question and I don’t have a ready answer, sorry. Won’t until I go back and finish reading the Sequences myself.

      >BTW: from my humble, average-IQ perspective, there are no noticeable intellectual inequalities between you, your blog’s regulars (excluding yours truly), Yudkowsky, Von Neumann, etc. :-)

      Unsurprising. I too can only perceive one undifferentiated stratum above the one I’m in; it’s entirely possible that this is because I’m not noticing distinctions that are functionally important and would be apparent to me if I weren’t so dimwitted.

      That said, I think Trent Fowler is basically right to identify three strata of low-grade, middle-grade and high-grade geniuses – we can dub the low grade the Fowler class; the middle grade is already the Yudkowsky class and the high grade is already the Tao/von-Neumann class.

      I think the lower boundary of the middle grade corresponds to the IQ 150 polymathy threshold I’ve mentioned before. I’ve written before that my best guess at my IQ from proxies is about 166 (this and all figures Stanford-Binet, not Wechsler). Allowing for the large measurement uncertainty, I think the upper IQ boundary of the middle stratum is roughly 170.

      I don’t really know what happens at that upper boundary – something subtle but important, analogous to polymathy but different. If I were a high-grade, Tao/von-Neumann-class genius I might be able to pin it down and explain it; as it is, I’m nearly as much at a loss as you are. All I actually know is that there are people whose mental processes I cannot follow, and they produce work of a brilliance I will never match.

      A fair number of the regulars here seem to be Fowler-class geniuses in the 140-150 range; a few are middle-grade (150-170). The differences seem pretty obvious to me. However, if we have any Tao/von-Neumann-grade intellects here they’re doing a better job of self-concealment than I think is possible. No blame if you can’t spot them, but I’ve met a few before and I think I could no more fail to notice them than if someone shined a klieg light in my face.

      And as for you, you are way too fluent in a second language and generate questions too interesting to be dead average. I don’t think you quite make it to Fowler-class genius, but there’s a lot of gifted territory between that and average. Remember that only 0.63% of the population are Stanford-Binet 140 or above and you’ll see that you could be brighter than 99% of the people on the planet and still be just as unable to model the thought process of a Fowler-class genius as I am to comprehend Terence Tao’s.

  80. > Spearman’s g just won’t go away, but measuring it in the middle- and high-genius ranges is a stone bitch. We just don’t know how to do that very well yet.

    Inherent in that problem is the fact that, in much of how we go about measuring IQ, the person designing the test generally has to be intelligent enough to understand what he’s measuring.

    We have questions on IQ tests that present a sequence of items and ask what the next item in the sequence will be. If you’re not capable of discerning the rule for generating the sequence, you won’t be able to devise a group of five things, exactly one of which could follow that rule. You can’t write a question to test what you don’t know.

    Worse, you might have come up with one rule that adequately fits the sequence as presented, with one right answer of the five, but there is no limit to the number of rules that might produce that sequence. A person much more intelligent than you might conceive of one of them, for which another of your five answers happens to fit. Maybe he’ll decide to give the answer that matches the rule you were thinking of when you wrote the question, in which case you won’t capture the fact that he’s more intelligent than you are. But maybe he’ll give the answer you didn’t think of, and your test will decide he’s less intelligent, when what it’s really measuring is that you’re less intelligent than he.
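
    A concrete example of that underdetermination: these two rules agree on 1, 2, 4, 8, 16 and then part ways, so “what comes next?” has more than one defensible answer (the second rule is the classic circle-regions sequence).

        from math import comb

        def rule_a(n):        # plain doubling
            return 2 ** (n - 1)

        def rule_b(n):        # max regions of a circle cut by chords through n boundary points
            return comb(n, 4) + comb(n, 2) + 1

        print([rule_a(n) for n in range(1, 7)])   # [1, 2, 4, 8, 16, 32]
        print([rule_b(n) for n in range(1, 7)])   # [1, 2, 4, 8, 16, 31]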

    1. >But maybe he’ll give the answer you didn’t think of, and your test will decide he’s less intelligent, when what it’s really measuring is that you’re less intelligent than he.

      This can bite closer to the real world. There’s a game called Zendo: read this description. It gamifies a problem strongly resembling guessing the generative rule for a sequence.

      To all appearances, I’m terrible at Zendo. When I’m the master, I tend to make rules too complex for others to guess. When I’m a Student, I constantly try to fit the data to excessively intricate rules and am commonly surprised to discover that the Master had a much simpler one in mind.

      OK, you can probably see the reversal coming…

      I think I’m not actually terrible at Zendo at all. I think I’ve always played with people whose feeling for “just hard enough” in a rule is very different from mine. If I’m going to enjoy this game I probably need to select players with an IQ no more than 10 points distant from me at the outside.

      The problem is, the brighter you are, the smaller the number of people in that band gets. For me it’s about 0.023% of the general population. Tough odds, even given that I hang out in social environments like SF conventions that concentrate brights.

  81. > And as for you, you are way too fluent in a second language and generate questions too interesting to be dead average. I don’t think you quite make it to Fowler-class genius

    I would be very careful about trying to estimate someone’s intelligence based solely on how he expresses himself in writing, in a second language. There are subtleties of meaning that are very difficult to appreciate without a lifetime of exposure to the broader culture of which the written word is but a part. Even within a language, when it comes to different sub-cultures, there are metaphors imbued with special meanings not easily grasped by outsiders. Those outsiders may be quite intelligent, but lacking the cultural context inherently seem less intelligent to the insiders.

    (I shouldn’t [have to] be reminding the keeper of the Jargon File of this.)

    1. >Those outsiders may be quite intelligent, but lacking the cultural context inherently seem less intelligent to the insiders.

      A reasonable point, but my estimate for Jorge is based not on shared references but on the combination of correctness and fluidity with which he uses English and his breadth of vocabulary. His English is very good – it would be impressive in a native, let alone an ESL speaker.

      Relevant fact: If you rank candidate intelligence tests by how well they correlate with a basket of other g estimates, simple vocabulary tests come out extremely well. So you can dump all the nuances and just estimate on working vocabulary and be pretty accurate.

  82. “We just don’t know how to do that very well yet.”

    I think “The Monster” is right on this one. Most people interested in the question of psychometrically measuring supreme giftedness aren’t themselves supremely gifted; they’re probably Fowler- or Yudkowsky-class geniuses. When a T/VN-class genius turns their attention to the task we’ll probably have a better idea of what’s going on.

    “Here is a thing you don’t know: I’ve been thinking for decades about writing a book updating the ideas of General Semantics for a modern audience. I probably won’t do it now, because the Sequences exist…but if I had started on that project when I was 22? I won’t say I would definitely have produced an equivalent of the Sequences, but it wouldn’t have been an implausible outcome either. I think you’ve seen enough of my writing on mental hygiene and analytic philosophy to apprehend the possibility.

    On the other hand, Eliezer is certainly better at producing respectful pastiches of Zen Buddhist rhetoric than anyone I know but me, and maybe better than me. He is so damned good at it that he has got to have some kind of generative insight into the mysticism. Given that, the notion that he might have written Dancing With the Gods in a history close to this one starts to seem plausible.”

    On the basis of that I’m willing to raise my probability estimate that you could’ve produced an approximation of The Sequences, and also that EY could’ve grokked the central insights of Dancing With The Gods, though not by as much.

    I’m just not convinced that these sorts of mystical experiences vary that much as a function of IQ. Part of the reason for this is that, while I’m dumber than EY, I’m pretty sure I have a superior grasp of the reasons that certain states of mind must be accessed via the mytho-poetic command line interface, I’ve got an unusual mind (synesthesia, etc.), and I’ve tried to cultivate such states of mind through meditation and neopagan ritual apparatuses, but have had only modest successes. I can see there’s something there, but haven’t yet broken through, and I just don’t see any reason to think another 15 IQ points would be enough to get me there.

    One alternative explanation for EY’s facility with Zen phrasing could just be a stupendous linguistic pattern-matching ability. I too am a natural poet with a gift for liturgical writing, but I haven’t summoned the Horned God, either.

    “we can dub the low grade as the Fowler class”

    Hahahahaha, what a dubious honor! It may interest you to know that I was tested as part of a gifted program in high school and clocked at about 153. That may be a tad generous; the test was skewed towards verbal reasoning, at which I am simply off the charts. If there had been a heavier spatial reasoning component I wouldn’t have made out as well, I don’t think.

    “That’s a shame, especially since it was going to feature koans as chapter-header quotes. Would you please name a few – say, five – articles within the Sequences you consider essential reading?”

    No need; there’s now a book out.

    “Remember that only 0.63% of the population are Stanford-Binet 140 or above and you’ll see that you could be brighter than 99% of the people on the planet and still be just as unable to model the thought process of a Fowler-class genius as I am to comprehend Terence Tao’s.”

    A simple, probably not technically accurate way of explaining this to someone that thinks you’re a peerless genius is to say that the gap between you and the average person is probably not as big as the gap between you and the smartest person.

    1. >Hahahahaha, what a dubious honor!

      Well, you put yourself in that stratum, I didn’t. :-)

      I think I actually believe 153 for you after reading your writing and hearing you play music. I was thinking when I attached the “Fowler-class” label that you may actually show a bit too much creative polymathy for a sub-150 IQ.

      My wife Cathy brackets the other side of that one nicely. Various lines of evidence suggest about Stanford-Binet 145 for her, and she shows the beginnings of creative polymathy – in addition to being an expert legal researcher she is musically talented, a skilled martial artist, and creates museum-quality reproductions of Viking and Iron Age costume.

      But then the polymathy-at-150 thing is just from me observing a (necessarily) small sample. I could be wrong about that corresponding to the boundary you observe between yourself and Yudkowsky-class geniuses.

  83. > Relevant fact: If you rank candidate intelligence tests by how well they correlate with a basket of other g estimates, simple vocabulary tests come out extremely well. So you can dump all the nuances and just estimate on working vocabulary and be pretty accurate.

    For a first language, yes. For a second language, you’re trying to measure intelligence filtered through the person’s ability to express themselves in that particular language, which is heavily dependent upon the amount of effort spent learning it in addition to the first language, and a general talent at acquiring new languages to mitigate that effort to some extent. We can easily tell from how Jorge expresses himself in English that he is at least at a certain level, but what we cannot know is whether he’s really at a higher level, but just hasn’t learned the English to express it well yet. A native Spanish reader might be able to detect something we can’t. (Just as a smarter test-writer might be able to suss out some sophistication the existing tests can’t detect.)

    Someone reading my writing in German might think I need to watch Sesamstraße for a few years, so that I can be ready for school. And they’d be right.

  84. “Well, you put yourself in that stratum, I didn’t. :-)”

    Indeed. There’s a lot of power in knowing where you stand, and my experience matches yours in terms of high-achieving types not being all that territorial and confrontational. Most of the people in the strata above me aren’t interested in making me feel like an idiot.

    At 27 I’m at a bit of a crossroads. On the one hand, I figure that if I try as absolutely fucking hard as I possibly can I could probably do useful AI-related research, maybe start a company, a hedge fund, or a thinktank, that kind of thing, but the chances of failure are high because I may simply be too dumb or too old or both.

    On the other, I know I could crank out world-class nonfiction, maybe even some passable fiction, while giving talks, networking, and running a successful business without killing myself.

    But I can’t do both.

    Zendo looks like the game EY (writing as LW in HPMoR) had Harry create to teach the scientific method, which was in the back of my mind as I was describing how such IQ tests can give screwy answers.

    Another thing I was thinking about was the universal ridicule heaped upon Sarah Palin by the Left when she made a comment about Fannie Mae and Freddie Mac being in trouble that would cost taxpayers a bundle. As if possessed of a single voice, they mocked her for thinking they were government, rather than private, entities. (She didn’t say they were, but that was the interpretation of her remarks they’d all somehow decided to run with.)

    It didn’t take long after that to find out that we taxpayers had to bail Fan and Fred out to the tune of over a hundred billion dollars (I’m not sure we even know the true cost yet). Because Palin knew something they didn’t (yet) know, their “test” determined she was less intelligent than they are. That event happened to resonate with me because I’m used to being the first person in the room to connect a particular set of dots. I’m sure most of us have had that experience.

  86. @Monster:
    Inherent in that problem is the fact that, in much of how we go about measuring IQ, the person designing the test generally has to be intelligent enough to understand what he’s measuring.

    Would it be possible to design an IQ test in such a way that any moron could answer the questions given enough time, but time taken to complete the test correlated inversely to IQ?

  87. @ esr

    I’m deeply grateful for your kind words. I thought my questions were rather inane, and feared my English might come off as contrived because I don’t know many idioms. To be sure, my composing these comments is not as effortless as you think: I tend to consult Google Translate and/or Wiktionary when doing so.

    > I don’t think you quite make it to Fowler-class genius

    I don’t think so, either. If I were that smart, I wouldn’t be afraid of math and would perform better at strategy games. (In my defense, I’m reading Mathematics and the Imagination – with some difficulty, natch – in the hopes of overcoming said fear.)

    > My wife Cathy … in addition to being an expert legal researcher she is musically talented, a skilled martial artist, and creates museum-quality reproductions of Viking and Iron Age costume.

    I’m a bit surprised you omitted her being “a very capable gamer“. After all, it seems germane to the list. :-)

  88. > Would it be possible to design an IQ test in such a way that any moron could answer the questions given enough time, but time taken to complete the test correlated inversely to IQ?

    Would such a test accurately measure g, or would it just be a rough approximation, valid only for IQs under some threshold?

  89. @TheMonster

    Amen on language working as an IQ filter. I tested this with people whose native language is A but who, at around age 10, went to live in a different country with their parents, with language B, and didn’t use A after that, so it got rusty. One such guy talked to me in his native language and sounded like a 10-year-old, and not a good one; he came across as impolite and whiny, kinda bratty. I was just about to break contact when he switched languages and, lo and behold, a generous, intelligent, likeable gentleman. After that I was aware of it, and every time long-time expats came across as bratty idiots I switched languages, and everything went far better.

    (I highly suspect this is behind the stereotype that Russian programmers are rude. It is not actually a rude culture; they tend to assign very low status to nekulturniy behavior. But with poor English skills a nuanced counter-argument gets transformed into “no your client want wrong thing, we refuse”.)

    Also, this is counter-intuitive. I never assumed language would work like an IQ filter. I assumed high IQ with poor foreign-language skills would make one a Yoda: cryptic, short, insightful. “Do. Or do not. There is no try.” In real life, for some reason unknown to me, this is extremely rare.

    Also, this is gameable, but that is a long story.

  90. >So, listen to this carefully: the reason git fast-import streams were essential to the design of reposurgeon is that the concretized the problem.

    Is there a typo here, or am I too dumb to parse it?

    1. >Is there a typo here, or am I too dumb to parse it?

      “in that they concretized the problem”

      I’ll fix the OP.

      UPDATE: I came up with a better way to phrase it.

  91. >>but if at some point you could just avoid directly pretending that you are one among the top 10 geniuses on Earth
    >You read that in, I didn’t put it there.

    Sure, not “on Earth”, it was an overstatement, a figure of speech that I thought was obvious. And I certainly don’t have any comparison function to offer for intelligence.

    But what I understand from what you have written is that only a handful of people know when to invent a DSL, and there is no way that this is true.

    1. >But what I understand from what you have written is that only a handful of people know when to invent a DSL, and there is no way that this is true.

      If it isn’t true, why aren’t there more of them?

      That’s not a rhetorical question. The success of reposurgeon is direct witness to how effective this design style is. I can tell there are few people who know how to write DSLs, because such DSLs aren’t everywhere.

      The hard part isn’t writing the language engine, not really. It’s the set of habits, the mental stance, required to carve the domain operations into orthogonal primitives.

    I think there already are many DSLs. Most of them are hidden from view, though, because they are so domain-specific. I mean, how many digits does the number of programmers who have heard of reposurgeon’s DSL have? If one has never used reposurgeon, one can’t know about it, and repo conversion is a niche domain.

    Among the tens of millions of programmers in the world, my conservative guesstimate would be that thousands of them know when to create a DSL. And they know how, too, but as you said that’s not the difficult part.

    There is a correlation between knowing when and knowing how. Programmers interested in programming languages and compilers have often implemented toy languages for fun, and that knowledge may lead them to create DSLs when they find domains that seem to lend themselves well to a judgment-amplifier-style tool. A lot of maybes, but the initial pool is very large. Your Fermi estimate may vary.

    I also agree that a number of those DSLs are trash, with badly designed primitives, or just a bad match with the domain in the first place. But after failing and learning, one ends up succeeding, and I think the good ones are out there. They’re just not as popular as regexes.

  93. > Unsurprising. I too can only perceive one undifferentiated stratum above the one I’m in

    Could that be an aliasing phenomenon? Analogous to your example of constructing more complex generative models matching subsets while playing the game Zendo, isn’t it possible that a higher-capacity thinker may construct more complex relationships which are invisible to someone who can’t sample the model space at the Nyquist limit of the higher genius stratum’s frequency of model instantiation? You’ll agree on the conclusion (subset) with the higher genius stratum, but this doesn’t detect whether the higher genius employed a more complex model to arrive at the same conclusion.

    To test this hypothesis, I considered the scenario where the higher genius’ model generates an answer contrary to your conclusion. I’ve experienced this where my conclusions are labeled wrong because the others are unable to load the model that is in my head, and I am unable to communicate the model to them. The only way I am able to get them to acquiesce is when I can show an example failure scenario for their conclusions. Invariably, in a group politics setting, they will dismiss this case as insignificant, because they can’t understand its frequency of occurrence in my more complex model. So I am again unable to convince them, and they learn only by failure in the market; and by then they blame the failure on other incidental issues, because again they could not model the phenomenon in the higher-complexity model.

    The only instances where I succeed in winning the argument are when I can convey a slam-dunk failure mode of their conclusions (a point sample in the Shannon-Nyquist space) which was illuminated to me by my model. Yet they still haven’t grasped the model, so they go flailing around looking for other conclusions than the one I provided. That is indeed aliasing error. So the higher genius is likely to say nothing instead, because he knows it is useless to speak. Note I doubt I am even a Fowler-class genius, because of my underdeveloped language capabilities. Go figure?

    Listening to Freeman Dyson, he seems mostly undifferentiated to me. And interestingly (as noted in the Charlie Rose interview), he is somewhat reserved when he speaks, as if I can sense he is carefully choosing what the audience could appreciate. Except, for example, I can detect he is thinking about much more complex relationships when he says, “I don’t have much faith in predictions. Science is organized on predictability…scientists arrange things in an experiment to be unpredictable as possible and then do the experiment to see what will happen. You might say that if something is predictable, then it is not science.” Note the logical implication of the long-tail undecidability of science in the juxtaposition of his opening phrase with the ending one. Whoever didn’t detect that is likely not a genius. :)
