Against modesty, and for the Fischer set

Over at Slate Star Codex, I learned that Eliezer Yudkowsky is writing a book on, as Scott puts it, “low-hanging fruit vs. the argument from humility”. He’s examining the question of when we are, or can be, justified in believing we have spotted something important that the experts have missed.

I read Eliezer’s first chapter, and I read two responses to it, and I was gobsmacked. Not so much by Eliezer’s take; I think his microeconomic analysis looks pretty promising, though incomplete. But the first response, by one Thrasymachus, felt to me like dangerous nonsense: “This piece defends a strong form of epistemic modesty: that, in most cases, one should pay scarcely any attention to what you find the most persuasive view on an issue, hewing instead to an idealized consensus of experts.”

Motherfucker. If that’s what we think is right conduct, how in hell are we (in the most general sense, our civilization and species) going to unlearn our most sophisticated and dangerous mistakes, the ones that damage us all the more because they carry the weight of expert consensus?

Somebody has to be “immodest”, and to believe they’re justified in immodesty. It’s necessary. But Eliezer only provides very weak guidance towards that justification; he says, in effect, that you’d better be modest when there are large rewards for someone else to have spotted the obvious before you. He implies that immodesty might be a better stance when incentives are weak.

I believe I have something more positive to contribute. I’m going to tell some stories about when I have spotted the obvious that the experts have missed. Then I’m going to point out a commonality in these occurrences that suggests an exploitable pattern – in effect, a method for successful immodesty.

Our first exhibit is Eric and the Quantum Experts: A Cautionary Tale wherein I explain how at one point in the 1970s I spotted something simple and obviously wrong about the premises of the Schrödinger’s Box thought experiment. For years I tried to get physicists to explain to me why the hole I thought I was seeing wasn’t there. None of them could or would. I gave up in frustration, only to learn a quarter-century later of “decoherence theory”, which essentially said my skepticism had been right all along.

Our second exhibit is Eminent Domains: The First Time I Changed History, in which I said “What happens when people move?”, blew up the Network Working Group’s original static-geographical DNS naming plan, and inadvertently created today’s domain-name anarchy/gold-rush conditions.

Our third exhibit is the big insight that I’m best known for, which is that while generation of software does not parallelize well, auditing it for bugs does. Thus, while we can’t hope to swarm-attack design, we can swarm-attack debugging, and that works pretty well. Given a sufficiently large number of eyeballs, all bugs are shallow.

I’m going to stop here, because these are sufficient to illustrate the common pattern I want to talk about and the exploitation strategy for that pattern. But I could give other examples. This kind of thing happens to me a lot. And, damn you, Thrasymachus, where would we be if I’d been “modest”? If those second and third times I had bowed to the “idealized consensus of experts” – failed in courage the way I did that first time…would the world be better for it, or worse? I think the answer is pretty clear.

Now to the common pattern. In all three cases, I saw into a blind spot in conventional thinking. The experts around me had an incorrect premise that was limiting them; what I did was simply notice that the premise was there, and that it could be negated. Once I’d done that, the consequences – even rather large ones – were easy for me to reason out and use generatively.

This is perhaps not obvious in the third case. The incorrect premise – the blind spot – around that one was that software projects necessarily have to pay the full O(n**2) Brooks’s Law complexity costs for n programmers, because that counts the links in their communications graph (and thus the points of potential process friction). What I noticed was that this was just a never-questioned assumption that did not correspond to the observed behavior of open-source projects – the graph could be starlike, with a much lower cost function!
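
To make the arithmetic concrete, here is a toy sketch (my illustration, not part of the original argument) of how fast the two communication topologies diverge:

    # Toy illustration: potential communication links (friction points)
    # under the Brooks's-Law assumption that every programmer must talk
    # to every other, versus a starlike topology centered on a maintainer.

    def complete_links(n):
        """Every pair of n programmers communicates: n*(n-1)/2 links."""
        return n * (n - 1) // 2

    def star_links(n):
        """Contributors communicate only with a central maintainer: n-1 links."""
        return n - 1

    for n in (10, 100, 1000):
        print(n, complete_links(n), star_links(n))
    # 10 45 9
    # 100 4950 99
    # 1000 499500 999

At a thousand contributors the complete graph carries half a million links against the star’s 999; that gap is the whole ballgame.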

Seeing into a blind spot is interesting because it is a different and much simpler task than what people think you have to do to compete with expert theorists. You don’t have to build a generative theory as complex as theirs. You don’t have to know as much as they do. All you have to do is notice a thing they “know” that ain’t necessarily so, like “all computers will be stationary throughout their lifetimes”.

Then, you have to have the mental and moral courage to follow negating that premise to conclusion – which you will not do if you take Thrasymachus’s transcendently shitty advice about deferring to an idealized consensus of experts. No; if that’s your attitude, you’ll self-censor and strangle your creativity in its cradle.

Seeing into blind spots has less to do with reasoning in the normal sense than it does with a certain mental stance, a kind of flexibility, an openness to the way things actually are that resembles what you’re supposed to do when you sit zazen.

I do know some tactics and strategies that I think are helpful for this. An important one is contrarian mental habits. You have to reflexively question premises as often as possible – that’s like panning for blind-spot gold. The more widely held and ingrained and expert-approved the premises are, the more important it is that you negate them and see what happens.

This is a major reason that one of my early blog entries described Kill the Buddha as a constant exercise.

There is something more specific you can do, as well. I call it “looking for the Fischer set”, after an idea in chess theory due to the grandmaster Bobby Fischer. It’s a neat way of turning the expertise of others to your advantage.

Fischer reported that he had a meta-strategy for beating grandmaster opponents. He would study them, mentally model the lines of play they favored. Then he would accept making technically suboptimal moves in order to take the game far out of those lines of play.

In any given chess position, the “Fischer set” is the moves that are short-term pessimal but long-term optimal because you take the opponent outside the game he knows, partly or wholly neutralizing his expertise.

If you have a tough problem, and it’s just you against the world’s experts, find their Fischer set. Model the kinds of analytical moves that will be natural to them, and then stay the hell away from those lines of play. Because if they worked, your problem would be solved already.

I did this. When, in 1994-1996, I needed to form a generative theory of how the Linux development swarm was getting away with breaking the negative scaling laws of large-scale software engineering as they were then understood, the first filter I applied was to discard any guess that I judged would occur naturally to the experts of the day.

That meant: away with any guesses directly based on changes in technology, or the falling cost of computing, or particular languages or tools or operating systems. I was looking for the Fischer set, for the sheaf of possible theories that a computer scientist thinking about computer-sciencey things would overlook. And I found it.

Notice that this kind of move requires anti-modesty. Far from believing in your own inadequacy, you have to believe in the inadequacy of experts. You have to seek it out and exploit it by modeling it.

Returning to the original question: when can you feel confident that you’re ahead of the experts? There may be other answers, but mine is this: when you have identified a false premise that they don’t know they rely on.

Notice that both parts of this are important. If they know they rely on a particular premise, and an argument for the premise is part of the standard discourse, then it is much more likely that you are wrong, the premise is correct, and there’s no there there.

But when you have both pieces – an unexamined premise that you can show is wrong? Well…then, “modesty” is the mind-killer. It’s a crime against the future.

124 comments

  1. Much of Peter Thiel’s Zero to One hinges on the question, “What important truth do very few people agree with you on?” Yours seems like a related approach.

  2. Thought provoking. Even some good “how-to” advice.

    “… hewing instead to an idealized consensus of experts.”

    This is just a sweet-sounding rehash of one of the Marxist axioms, variously stated:
    – Rule by bureaucracy
    – Rule by our betters
    – Credentialism (as opposed to favoring actual results)
    – Rule by a lab-coated priesthood
    – Ideology over truth

    As a young engineer I happened to be in a meeting with a manager many levels above me. Afterwards he privately told me he was impressed that I was “not caught up in groupthink”. Prolly one of the more useful compliments I’ve ever received.

  3. Do you have a source for Bobby Fischer’s meta-strategy? When did he report it? Was it still valid when he went into orbit in 1972? If there are two things to be said about Fischer’s chess, they are that he kept his opening repertoire narrow, and that he was extremely good at calculation, almost never allowing cheap tricks.

    1. >Do you have a source for Bobby Fischers meta-strategy?

      I don’t remember where or when I read about it. I do remember that the term “Fischer set” was explicit, with the implication that it had become a term of art in chess theory circles.

      1. I’m a decently strong chess player (1700ish) and I’ve never come across this term before. The concept of deliberately playing a theoretically suboptimal move to get out of your opponent’s opening preparation is a perfectly banal one, though, practiced even at pretty low levels. It long predates Fischer.

      2. The term “Fischer set” sounds like it has been made up by chess players who don’t really understand top level chess.

        Ranking players by analyzing their moves with a computer program and trying to assess the quality of those moves:

        https://content.iospress.com/download/icga-journal/icg0012?id=icga-journal%2Ficg0012

        Not much pessimal or sub-optimal play by Fischer there, or by any other of the great chess players. In reality there does not seem to be any Fischer Set in top level chess. Perhaps he thought that he sometimes played inferior moves that were in reality just as good. And then he went on to defeat the Russians because he was simply a fantastic chess player.

        Outside chess he was an independent thinker. Perhaps we can find the true Fischer set there. Not realising the holocaust happened for real. Dying from failed kidneys since you don’t believe in dialysis. Also being absolutely right about communism. Sometimes right, sometimes wrong.
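
        For what it’s worth, the engine-based move-quality measurement that paper uses is easy to approximate today. Here is a minimal sketch with the python-chess library, assuming a UCI engine binary such as Stockfish is installed and reachable on your PATH (the engine path, search depth, and sample moves here are my assumptions, not the paper’s):

          # Sketch: score each move of a game by its centipawn loss relative
          # to the engine's preferred continuation, the rough idea behind
          # engine-based rankings of players.
          # Requires: pip install python-chess, plus a Stockfish binary.
          import chess
          import chess.engine

          def centipawn_losses(san_moves, engine_path="stockfish", depth=18):
              """Centipawn loss for each move (SAN strings) of one game."""
              board = chess.Board()
              limit = chess.engine.Limit(depth=depth)
              engine = chess.engine.SimpleEngine.popen_uci(engine_path)
              losses = []
              try:
                  for san in san_moves:
                      # Evaluation before the move, from the mover's point of view.
                      best = engine.analyse(board, limit)["score"].relative.score(mate_score=100000)
                      board.push_san(san)  # the move actually played
                      # After the move the score is from the opponent's side; negate it.
                      got = -engine.analyse(board, limit)["score"].relative.score(mate_score=100000)
                      losses.append(max(0, best - got))
              finally:
                  engine.quit()
              return losses

          print(centipawn_losses(["e4", "e5", "Nf3", "Nc6"]))

        Averaging these losses over a player’s games gives the kind of per-player quality number the linked study reports.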

  4. Reading this piece and knowing your ideas and beliefs on global warming brings a smile to my face.

    There is no set of “experts” out there so wrong, and so adamant that their expertise rules all, as the climate scientists. Their coming catastrophic fall to a much better understanding of climate is going to be from the greatest heights…

    1. Oh, I don’t know. You have the economists with a hard-on for central planning, plus a catechism of pseudo-scientific notions about how silly race realism etc. is chanted by basically everyone on the planet.

      1. Well said…. I listened to an interview in November of 2016 given by the current Fed chair Janet Yellen, explaining that one of the problems then (and now) facing the Federal Reserve is that the models they use are not falling in line with traditional expectations of supply/demand and labor. When asked why…. she said they didn’t have an answer at that time.

    1. Except that many-worlds and decoherence theory are not, AFAICT, the same. Decoherence theory, as I understand it, basically says that the classical universe we observe is not one branch of a multiverse of different classical histories that could result from the same initial quantum state of the universe; rather, it is, so to speak, the whole multiverse. At small scales and short time periods there is no one classical history of the universe, but at larger scales and longer times, the various superpositions of classical states of different parts of the universe interfere with each other in such a way that uncertainties cancel and the classical picture becomes more and more solid, rather than the uncertainties blowing up as we might expect (and as the Schroedinger’s cat thought experiment suggests). For example, the atom controlling the trigger for the gun that kills the cat may not have decayed even if we find the cat dead: an atom in the cat may have decayed and triggered the detector, causing the gun to fire. The exact course of classical microscopic events that caused the gun to fire may not be well defined (though it may be a superposition dominated by “the atom decayed”), but the fact that the cat’s brain is splattered over the inside of the box *is*.

      1. >Except that many-worlds and decoherence theory are not, AFAICT, the same.

        No, they aren’t. I wondered if anyone would bring this up.

        1. You may be confusing popular explanations of many-worlds, or other later ideas, with Everett’s approach. Here are a few quotes from Chapter 1. Note especially the second excerpt, which directly addresses your objection to the standard account of observation in QM:

          “Alternative 5: To assume the universal validity of the quantum description, by the complete abandonment of Process 1 [wave function collapse]. The general validity of pure wave mechanics, without any statistical assertions, is assumed for all physical systems, including observers and measuring apparata. Observation processes are to be described completely by the state function of the composite system which includes the observer and his object-system, and which at all times obeys the wave equation (Process 2).”

          “…All processes are considered equally (there are no ‘measurement processes’ which play any preferred role)…”

          “We shall be able to introduce into the theory systems which represent observers. Such systems can be conceived as automatically functioning machines (servomechanisms) possessing recording devices (memory) and which are capable of responding to their environment. The behavior of these observers shall always be treated within the framework of wave mechanics. Furthermore, we shall deduce the probabilistic assertions of Process 1 as subjective appearances to such observers, thus placing the theory in correspondence with experience.”

          1. >You may be confusing popular explanations of many-worlds, or other later ideas, with Everett’s approach.

            I may well be. Everett’s paper looks a lot saner and more like decoherence theory than most of what I’ve heard called “many worlds”. The problem is, from what I’ve seen, even if Everett used “many worlds” to describe his theory, the name has been used often enough to describe hogwash as to be unusable as an unambiguous name for anything but said hogwash.

      2. Except that yes, decoherence *is* many-worlds if you’re looking at Schrödinger’s cat right, because the whole point of the experiment is that the firing of the gun depends on a superposition. So all the decoherence does is split the entire box-cat-universe system into exactly the same alive + dead superposition that the radiation source was originally in. If in the experiment the universal wavefunction ends up in a single classical state, then the radiation source *can’t have been* in a superposition to begin with, since the whole point of the setup is to amplify that superposition to macroscale.

        At least that’s my understanding as a layman.

        1. And my understanding, as a layman, of decoherence theory, is that the math of QM works in such a way that it is impossible, or at least so obscenely entropically disfavored as to be within epsilon of impossible, for the setup to actually manage to amplify the superposition to macroscale, at least unless you chill the whole setup to a hair above absolute zero (in which case the cat freezes to death and is all dead) or perform the experiment in an environment with densities on the order of the core of a neutron star (in which case the cat becomes an undifferentiated blob of neutrons, and is all dead). At livable temperatures and densities, the superpositions that exist at macroscale are so dominated by a single classical state as to be indistinguishable from that state.

          I think the root of the problem is that we naively expect, since the space of possible states of a simple quantum system is so much bigger than that of a similar classical system, that as we add particles and make the system more complex, the size of the resulting space of states will be the result of multiplying the sizes of the state spaces of the individual components, and so we naively expect uncertainties in the classical state of a quantum system to blow up as the system becomes more complex. But the reality, to my understanding, is that for larger and larger systems, any given classical state has a greater and greater number of quantum states that are very close to it in state space, and an even greater and greater fraction of quantum states are very close to some classical state or other, so the probability of a given superposition being distinguishable from a classical state starts out high for few particles, but as you add particles and multiply probabilities, that probability becomes lower and lower. Macroscopic superpositions that are distinguishable from pure classical macroscopic states require the phases of states for a huge number of individual particles to be very finely tuned.

          1. And yet, you can look at a double-slit interference pattern and clearly see a quantum superposition amplified to macroscale. Measure whether a particle hits a sensor behind the two slits and what exactly do you think the non-classical probability of getting a hit *means*? Which slit did the particle actually go through, if all quantum states become classical?

            1. With the double-slit experiment you are using *one* particle, not many. It is with many particles that, due to interaction with the environment, we see classical-like quantum states.

          2. > Macroscopic superpositions that are distinguishable from pure classical macroscopic states require the phases of states for a huge number of individual particles to be very finely tuned.

            Only because the distinguisher himself winds up in superposition with the states he’s observing. At least that’s what happens if one takes the relevant math literally.

  5. I have always thought of modesty as being an evolved social behavior (opposite of arrogance), in that it signals group deference and conformity, and thereby implies a lack of threat by the individual. This could be an aid to survival if you lack self-reliance and require a lot of group resources in order to sustain yourself. In the extremis, it promotes dependence as a species trait and may lead to hive-mindedness.

    As to the OP, I think contrarianism is a vital necessity in social evolution and serves the same function as mutation in biological evolution. It feeds and seeds the variability in the cauldron of fitness competition.

    1. That’s the spirit. I have seldom seen Xkcd have it wrong.
      (And no, I won’t bet $200 on that)

    2. >Relevant XKCD:

      In case it’s not obvious, I agree with xkcd’s skepticism. His character is making bets that are sound because he’s betting against people who have not identified an incorrect premise that the experts are relying on.

      1. This is exactly the thought I had as I was reading this article.

        I believe that when people first rationally argued that one should default to expert consensus, they were relying on the probability that the dissenter had not actually identified a mistaken premise.

        Anyone who dissents and claims such a premise is worth a look, even if brief. Some experts are good at finding what a dissenter overlooked; it’s part of the benefit we enjoy from experts, and it is a reason to be respectful when dissenting – if your argument is flawed, you want to find the flaw fast, and a good expert will do that.

        Anyone who dissents on general principle simply because experts have been wrong before is a fool – even if the dissenter turns out to be correct later.

        1. Handy example is Thrasymachus himself. If Eliezer is the expert in this case, what is Thrasymachus if not dissenting?

          So, what incorrect premise is he questioning? There’s much to his post. I skimmed it, and found no claim that wasn’t either refuted in Eliezer’s post or refutable after a little thought. I’m tempted to say there was no claim and therefore Thrasymachus is a fool as advertised, but I’m skeptical of my ability to prove a negative. I’ll rather just repeat that I didn’t find one.

  6. Maybe it’s just that Eli is smart (if nuts), and Thrasymachus is mediocre. There’s no percentage for him in being immodest; he’ll just look stupid. Eli might do something useful with his immodesty (probably just look stupid, but: who dares, wins).

    We shouldn’t tell anyone about this, or they’ll be unnecessarily immodest to fake intelligence to other mediocre people.

    1. >(probably just look stupid, but: who dares, wins).

      That’s important. You have to dare, and you have to expect that some of the time you’re going to faceplant and look stupid. If you can’t handle that action, get back in line with the rest of the people who will never make a big difference, because you almost certainly won’t either.

      1. I usually try to blame my faceplants on other people. Ideally, my enemies. This is known as “Antifragility.”

    2. >Maybe it’s just Eli is smart (if nuts)

      I’m curious about your warrant for the “nuts” part. He seems extremely sane to me.

      I sort of grant you that he seems a bit obsessive and cranky sometimes, but I’m also pretty sure that if I actually held his priors (as opposed to just being able to model them) I’d look obsessive and cranky about the same things in the same way.

      1. There’s a certain connection between being nuts and holding certain priors. It’s not just about updating incorrectly.

        As for being nuts, I dunno if you’ve heard, but he started an organization dedicated to mitigating the doomsday scenario from rampant AGI. No, really!

        He didn’t strike me as obsessive or cranky in person, but this was over a decade ago; maybe he’s gotten worse. But then, I don’t actually care about his personality traits that much (e.g. not interested in dating him).

        1. >As for being nuts, I dunno if you’ve heard, but he started an organization dedicated to mitigating the doomsday scenario from rampant AGI. No, really!

          Um…and this is crazy how? I’m not convinced enough by the existential-risk arguments to feel I need to work the problem myself (especially since I know people as bright as Eliezer are on it) but I judge worrying about a runaway paperclip maximizer is not insane.

          1. The discussion is beyond the scope of a comment thread on a blog. Basically, nobody can build a paperclip maximizer, and I’m not worried about one being built in the lifetime of the youngest person now living (plus some extra time as a kind of “cash-out” of the possibility that cryonics works).

            More generally, I have grave doubts about people being able to build de-novo AGI that works, ever (i.e. fuck off, Marcus Hutter). Eli’s suck-ups tend to say “Well, what about mind-uploading?” I’m not sure this will work either (killing a brain and slicing it up with one of those circular deli meat cutters sometime later might turn out to affect whether you can read anything like a mind off it), but the suck-ups also think that making AGI by starting with an upload and basically refactoring a lot is too dangerous, so I just ignore it. We’re basically left with Marcus Hutter on one side and a bunch of people doing big data inference on the other side, separated by the Crack of Doom.

            So basically, MIRI is doing a bunch of math (which, by itself, I approve of) on the theory that if anybody ever programs the AGI, this will be useful for preventing rampancy. Imagine if Carl Gauss had invented game theory on the grounds that someday somebody might make nuclear weapons. Except that you have grave doubts that nuclear weapons are possible, and the people inventing them aren’t nearly as smart as Gauss.

            Anyway, I’m guessing MIRI’s math won’t be very useful for preventing rampancy. It’ll turn out to be a design thing, not a math thing. Think about the kind of formalism you’d use to demonstrate that lisp is better than c++. You can do it, but it’s not a practical guide for designing programming languages. Similarly, I’m betting that lots of the people who worked on gdb never even heard of Solomonoff Induction or Chaitin’s constant. It’s pure math about programs that do or don’t go wrong, but gdb devs correctly don’t give a shit. My money’s on MIRI theory being as useful to the people actually responsible for keeping us from getting clipped as algorithmic information theory and the halting probability are to the gdb devs.

            If it’s a design thing, you have to just do it and be careful, or don’t do it and try to stop anyone else from ever trying (it’s been suggested this is what MIRI is actually for). But if you try to do it, you keep running up against the fact that you can’t and you’d have to be nuts to think you could. So I just regard it as a problem for my grandchildren. I’ll try to teach them good CS fundamentals, but there’s really not much else to do.

            1. “More generally, I have grave doubts about people being able to build de-novo AGI that works”

              I don’t think that’s MIRI’s aim. As I understand it, they want to design a process to suck the meta-moral content from the brain (CEV), then feed that to a seed AI that begins recursive self-improvement. The goal is to have this result in an extremely powerful AI with a human-compatible ethical architecture that remains stable over billions of rewrites.

              Last I talked to Eli I asked him how worried we should be about a recursive scenario given that we have so few examples of them to reason about. His reply (which I endorse) was that there may be a very small chance that such a process could gain traction and get very smart very quickly, but even with a small chance it’s worth having a few people doing the thinking.

              We have slightly more cause to worry if you take the Orthogonality Thesis and the AI Drives Thesis seriously, which I do.

              1. > I don’t think that’s MIRI’s aim

                That’s why I said “people.”

                > that remains stable over billions of rewrites.

                Which is where all the math comes in (except it probably doesn’t).

                > His reply

                sounds like he wasn’t trying very hard. To anyone unfamiliar with Eli who’s just learning about him now: he’s not that lame, trust me.

                > Orthogonality Thesis

                I took this as a given

                > AI Drives Thesis

                Should include “for some possibly trivial definition of maximizer.” I have a chess engine installed on my laptop. Should I delete the package before it kills me?

            2. >More generally, I have grave doubts about people being able to build de-novo AGI that works, ever

              So do I. But I don’t think MIRI’s budget or Eliezer’s time is a crazy overinvestment in the possibility that we’re wrong. It might not be rational to put ten Eliezer-class geniuses on the problem, but one does not seem excessive to me.

              1. > It might not be rational to put ten Eliezer-class geniuses on the problem, but one does not seem excessive to me.

                That’s like saying “there’s only a 1 in 10 chance of rain so I’ll bring a 10th of an umbrella”.

              1. Your reading comprehension skills aren’t great either. My point is there is no other strategy. Hence the joke about MIRI being a way to keep people interested in AGI from actually trying to build one.

      2. Eli seems to have written a great many sane things that I appreciate the use of. But of late, after writing at length on rationality at Less Wrong and how politics brings out the screaming chimp in us, Eli decided to move to Facebook and start chimp-screaming at length about Trump, for example this.
        That seems like a specific example of at least a little nuttiness to me. I especially liked this bit:

        “Perhaps there are dozens of other cases where a country elected an impulsive, chaotic, populist leader and nothing whatsoever went wrong.”

        Yes. They’re called democratic elections. They elect populists pretty much by design, and the other two adjectives come off as little more than dysphemisms where corresponding euphemisms might be “bold, innovative”. If you think Trump makes chaotic decisions in the sense of random, bad decisions, I think you owe an explanation of how the hell he managed to become and stay a billionaire.
        Particularly bizarre when you look at Eli’s followup post:

        “Venezuela used to be an up-and-coming country with one of the fastest-growing economies in South America. And then they elected an impulsive populist leader who made a few decisions he probably didn’t think were that bad at the time, and now Venezuela is on the verge of being a failed state.”

        It seems to me there is a relevant adjective missing from the description of the Venezuelan leader in question and his decisions.

        1. If I wanted to put my criticism of Eliezer in a pithy phrase, it would go something like “He wrote ‘Politics is the Mind-Killer’, and seeing that it failed to educate the people, he went on to make himself an object lesson in it”.

        2. He was also arguing (on Facebook) prior to the election that one should vote for Hillary over Trump because (simplifying here) Trump was saying things about foreign powers that you Just Don’t Say because it Sends The Wrong Signal, and Serious People Don’t Do That.

          He said this with (apparently) a straight face, while utterly ignoring the actual track record of one of the candidates on foreign policy.

          Of course, as Feynman once noted: “The universe doesn’t care how smart you are, you can still be wrong”.

          But it did suggest that maybe rational thinking isn’t all it’s cracked up to be. https://www.youtube.com/watch?v=duVdErdw5Ps

          1. >But it did suggest that maybe rational thinking isn’t all it’s cracked up to be.

            I prefer the alternative: politics makes you irrational, and even more rational thinkers tend not to notice when this is happening. It’s sad that Eliezer turns out to be vulnerable.

            Still not seeing crazy here, though. Lots of nominally sane people act out worse.

  7. I know you’re not into sports, but you should read some vintage Bill James, going back to the 1980s Baseball Abstract annuals in which he essentially invented the science of sports analytics by applying scientific and academic methods to the statistical data, thus breaking the nearly century-long stranglehold that oral tradition and conventional wisdom had maintained over the sport.

    “It is a wonderful thing to know that you are right and the world is wrong; would God that I might have that feeling again before I die.” (from “Breakin’ the Wand,” the essay he wrote to end his final Abstract, the ’88 edition)

  8. FWIW, I don’t think there’s *anybody* who thinks you’re unsuccessful in your immodesty.

    :-)

    Otherwise, I think *both* views are important, as illustrated by RMS and Linus; they *need* to be outliers, and they also *need* to exist.

    1. >Separately: “Brooks’ Law”

      That exception to regular possessive formation is, I judge, archaic and passing out of use. I won’t miss it.

      1. I think Baylink was pointing out your typo by offering a correction. You have “Brook’s Law” in the post.

            1. >It’s still there. Unless there’s some guy named Brook with a Law I don’t know about, I consider it an error too.

              Found it. Chip Salzenberg’s spelling, not mine.

  9. “Fischer Set”, what a brilliant articulation of Trump’s exact electoral strategy: sub-optimal opener that draws your opponent into a fight she’s not prepared for.

    The anti-modesty you’re talking about requires not just self-confidence, but nerve. There are a lot of people who just know something that runs against common wisdom; there are many fewer who are willing to stand up, shout it to the world, and take what abuse may come. After all, one can be confident in a conclusion, but make the rational choice that something is just not worth the effort.

  10. To maybe generalize the point a bit, I think you could say it’s justified to go against expert consensus when you have a plausible idea of why the experts might have gone systematically wrong in a particular case.

    Over the past while I’ve somewhat enriched myself by making what amounts to bets against the current broad world system continuing along its widely-predicted trajectory. When I go against consensus on matters like these, it’s not so much that I’ve identified a particular false premise that the experts all hold, but that I’m acting on hypotheses that they emotionally can’t accept due to the logical implications. One class of example would be the climate change debate, where if the null hypothesis is true, the plurality of “experts” who form the consensus end up losing their jobs. Here it’s not that there’s a particular premise that’s wrong, but that there’s a process in action that systematically generates wrong conclusions, and if you can characterize the process you can often spot the mistakes.

    1. Are you assuming that climate scientists didn’t have jobs before one of them predicted global warming?

      1. Certainly the field was much smaller. In fact, I think “climate science” as a separate field (as distinct from meteorology, physical geography, &c.) may actually postdate the AGW meme.

        There is an enormous resource flow currently available to climate scientists for AGW-related topics, that will disappear or at least shrink enormously should the consensus become that AGW is not a problem as currently understood. The incentive issues caused by this are obvious.

      2. They did, or at least some of them (James Hansen). He was the Global Cooling Because Of Pollution guy at NASA.

        1. That’s not correct. Hansen wrote a computer program to study the optical properties of the clouds of Venus. This program was used by doctors Rasool and Schneider to study the behavior of Earth’s atmosphere. This resulted in Rasool and Schneider’s prediction of global cooling. (The problems of using a computer program meant to study Venus to predict changes in Earth’s atmosphere are obvious, which is probably why Schneider published a correction a few years later.)

          http://scottishsceptic.co.uk/2015/03/25/hansen-was-part-of-the-global-cooling-consensus/

          You’ll note that, despite the insistence of the person who posted the newspaper report, Hansen is not in fact listed (in the reporting) as one of the paper’s authors. Unless someone can actually provide a link which very clearly demonstrates that Hansen was listed as an author of a “global cooling” paper, I don’t think there’s any reason to believe he was ever a “global cooling” advocate.

  11. A pity you and I could not have corresponded in 1979. Early that year I began a master’s project in particle physics, following up work initiated by a young Fulvio Melia (the astrophysicist).
    In response to a critique by Richard Dalitz, my supervisor Shui-Yin Lo asked me to make some refinements.

    But I soon recognised a fundamental flaw in the solution: it was not self-consistent.

    Normally this “doesn’t matter” because in nonrelativistic, weakly-coupled systems, either the interaction field (eg a magnetic field) or the particle field (charge-current) dominates and is scarcely affected by the other.

    However, the project concerned strongly-interacting quarks, which is the antithesis of the above assumptions.

    Before long, I found a self-consistent ansatz, but my supervisor wasn’t having it: he insisted that I keep trying to justify his preferred class of “solutions”.

    I realise now that there were two factors affecting Yin’s outlook: one was an older mind used to particular ways; the other will become apparent once you do a search on his name.

  12. I think this is probably chapter 3 or 4 of the Autodidact’s Toolkit!

    Backing up a little bit, it helps to get better at thinking in terms of premises and conclusions, which many humans don’t consciously do. An obvious place to start is with logic, and I’d add that anyone wanting to cultivate this skill should study objectivism. I’m unaware of another modern philosopher who has a more deliberate, consistently-logical thinking style.

    Even if one rejects part or all of her ideas, the feel is worth grokking.

    1. Plenty of philosophers in the analytic tradition are as logical. Wittgenstein or Russell come to mind.

      I admit I don’t particularly like objectivism and have serious doubts one can learn much from it.

      1. But do they take the same methodical, deductive approach? Do they say: “I proceed from the following three axioms. From this I derive six lemmas…” all the way up to “and therefore blue is the best color”?

        (NOTE: I’m seriously asking)

    2. >I think this is probably chapter 3 or 4 of the Autodidact’s Toolkit!

      A large portion of it, yes. And I was rather expecting you to notice.

      > I’m unaware of another modern philosopher [besides Rand] who has a more deliberate, consistently-logical thinking style.

      Aaaaarrrrggghh! NOOOOOOOO!

      Absolutely do not study Rand without Korzybski or the equivalent parts of Yudkowsky’s Sequences first.

      Without consciousness of abstracting and the map-territory disjunction, Rand is a very dangerous trap. She will teach you how to be logical within premises that are deadly to your ability to reason about actual reality. You will become blind, stupid, and utterly self-confident.

      Randian moral critique is always worth reading, but for the sake of sanity don’t go fscking near Randian epistemology until you already understand why A=A is vacuous.

      1. “Absolutely do not study Rand without Korzybski or the equivalent parts of Yudkowsky’s Sequences first.”

        Calm down, calm down :)

        I was familiar with your rejection of Objectivist epistemology (because of its endorsement of the Aristotelian law of the Excluded Middle) long before I’d studied Rand’s theory of knowledge directly; I’ve read the Sequences, and I had puzzled out a good deal of GS by my early 20s.

        “but for the sake of sanity don’t go fscking near Randian epistemology until you already understand why A=A is vacuous.”

        Do you mean that it’s trivially true? Given how much postmodernism has rotted people’s brains, it might be worth stating at the outset ‘by the way, reality exists independently of your mind or your linguistic constructs’.

        1. >Do you mean that it’s trivially true?

          That’s a gentle way of putting it. The real problem is that Rand relies on it being nontrivially true.

            1. >I haven’t heard this critique before. What do you mean?

              The point of Rand’s “Law of Identity” is to impose a decree on all categories that they have to be Aristotelian. At her worst, Rand treats this as a kind of voodoo that imposes order on the territory, not just her maps. At her best, it requires elaborate circumlocutions and epicycles before you can talk about graded membership or probabilistic inference.

              A related issue is that no Randian is allowed to understand that there is never any such thing as identity in the real world at all. There is only indistinguishability by limit of measurement or (more usually) by choice – we can decide we don’t care about the difference in time between two observations, or we can decide to ignore features of two events we don’t care about, or we can agree that we don’t care about differences below a certain level of precision.

              1. The way I’m parsing this is:

                Ayn Rand: “A leaf can’t be red all over and blue all over. It can’t burn and freeze at the same time.”

                Eric Raymond: “Obviously not. But most categories don’t obey these crisp delineations.

                What does it mean to be alive, for example? Biologists use a constellation of different traits to categorize living things; but some entities, like viruses, have some traits and not others, such that their membership or exclusion in the category ‘living things’ is hard to determine.”

                Which seems completely obvious to this non-objectivist.

                I’m curious: are your critiques aimed at objectivist epistemology specifically, or are you proffering a wider critique of Aristotelianism? Because, while Rand drew explicitly on Aristotle she also departed from his thinking in a number of areas, and I wonder if maybe this is one of them.

                It sure would be surprising to see someone like her be tripped up by something as straightforward as the above.

                1. >I’m curious: are your critiques aimed at objectivist epistemology specifically, or are you proffering a wider critique of Aristotelianism?

                  Straight at Objectivism. The thinking and rhetoric of Randites exhibits this problem in a much more obvious way than other systems that use Aristotelian logic.

                  The difference, I think, is where your system is on an axis between “excluded middle is a rule of reality” and “excluded middle is a rule of some formal systems”. Objectivism hangs out way over at the wrong end.

                  1. So how would an Objectivist define ‘life’? How would one deal with the issue of concepts that are more like leaky generalizations?

                    Surely they don’t just insist such concepts aren’t valid.

                    1. >Surely they don’t just insist such concepts aren’t valid.

                      Oh, there are a couple of evasive strategies.

                      One is to insist that “life” must have an Aristotelian definition that we don’t know yet.

                      Another is to fort up around some Aristotelian oversimplification of a graded definition and insist that it must be correct because shut up.

                      A third is to fort up around some Aristotelian oversimplification and wield ever-bigger hammers trying to jam the cases it doesn’t really fit over to one side or the other.

                  2. Perhaps way off topic, but abstraction space is the map and therefore the rule of the excluded middle may be defined as absolute. In real space, QM theory may invalidate this axiom and the territory is what it is regardless of what we may think about it in abstraction space.

      2. Hi, latecomer Rand fan here.

        1.) Yes, “A = A” is a tautology. There isn’t a deep meaning there. It literally is just an exhortation not to deceive yourself. “Your beliefs should be, like, actual beliefs, about the real world, which is real.”

        2.) If you poke an Objectivist hard enough about math, you will find that he thinks about it like an engineer: “equal” means “equal within the tolerance you care about.” It’s not clear what philosophy-of-math Objectivists believe in, but they’re definitely *not* mathematical Platonists — abstractions like infinity or real numbers are not supposed to exist “out there” in the world.

        This implies that *concepts themselves* are relative to what you are trying to do with them; the boundaries of a cluster depend on your choice of metric.
        My first time around, I assumed that this was exactly what Rand meant, but it turns out on close reading that she probably didn’t know that, and thought concepts were a lot more universal.

        You could probably write a more technical version of ITOE that agrees with Rand maybe 75% but doesn’t violate what we know now about cogsci and machine learning. (I tried once, but I was way more ignorant at the time and got a lot of things wrong. Maybe someday I’ll take another stab.)

      1. Eric and I have been in talks for a while now about co-authoring a short book based on his post Generative Science to build a kind of curriculum for people wanting to self-train in the sciences.

        Basic idea: take a shortlist of the most generative concepts from the most generative sciences and use them as a toolkit to shorten the inferential distances involved in learning new things. The key ideas in evolution, for example, crop up in fields as far off as psychology and economics.

  13. This reminded me of a passage in my fortune file, from Alan Turing: The Enigma:

    ‘Einstein here throws doubt,’ Alan commented, ‘on whether Euclid’s axioms, when applied to rigid bodies, hold….He therefore sets out to test…the Galilei-Newtonian laws or axioms.’ He had identified the crucial point, that Einstein *doubted the axioms*.

    The trouble of course being that most people who do that are wrong. Presumably, on the inside, pointing out the Emperor’s missing clothes and just being a crank feel pretty much the same. Which is problematic if you’re trying to figure out if you’re cranking.

    “There may be other answers, but mine is this: when you have identified a false premise that they don’t know they rely on.”

    If someone’s making a list, this rule probably belongs next to Eliezer’s about incentives — but I think Eliezer’s has an advantage. Identifying others’ financial incentives is probably less prone to motivated reasoning, and more verifiable, than identifying their false premises. It’s easier to check that “people are getting paid craptons of money to solve this correctly” than it is to check that a purported hole is really real or really overlooked by the relevant group — particularly if the inquirer is aware that he has an incentive to believe that it is, as in e.g. every politicized “science” debate ever.

    I wonder if it’s relevant to your first example that QM is notoriously confusing even to people who work with it every day.

  14. A recent episode of Adam Ruins Everything was sort of on point. They claimed that 60% of all research papers cannot be replicated. Any consensus based on 60% bad data can’t be trusted.

    Others say 64% cannot be replicated: http://www.dailymail.co.uk/sciencetech/article-3214037/Only-scientific-studies-replicated-Experts-fail-repeat-findings-majority-psychology-papers.html

    Any consensus based on 60% to 64% bad data can’t be trusted. First thing is to always doubt your ‘facts’.

  15. I think your “Fischer set” advice is essentially the economist’s insight of comparative advantage.

  16. One argument in physics that confuses me to no end as a layman, and makes me wonder if the experts are missing the obvious, is the brouhaha over the black hole information paradox. Assuming that the Schwarzschild solution radiates at all at a non-zero temperature as seen by a distant, stationary observer, it follows from the basics of Special and General Relativity that an infalling observer will see that radiation blueshifted, both by lessening gravitational redshift and increasing velocity blueshift, as they approach the horizon, and that for any arbitrarily high intensity or temperature of radiation we might choose, an infalling observer will observe the radiation to reach that intensity/temperature before crossing the horizon. So I really don’t get how any information contained in particles falling into the black hole is supposed to be lost just because the radiation is initially exactly thermal: I would expect any particle falling in to eventually, before crossing the horizon, collide with an outgoing Hawking quantum, with the result that at least one of the below occurs:

    1) The infalling particle is scattered back out, so that its information is not lost.

    2) The outgoing particle is scattered back in, with the result that the infalling particle is “silhouetted”, so to speak, against the Hawking radiation, preserving information about it.

    3) The infalling and outgoing particles are antiparticles, and annihilate, resulting in different particles being emitted than might otherwise have been (so we get a gamma photon where we would have gotten a positron, for instance).

    4) The center-of-mass kinetic energy involved in the collision is high enough to pair-produce new particles, which, if they end up moving outward at sufficient speed to escape, will alter the profile of the outgoing radiation.

    Furthermore, it seems to me that any infalling matter will, before crossing the horizon, see an amount of Hawking radiation equal to the mass of the black hole fly past it (in other words, due to its position low in the black hole’s gravity well and the doppler shift from its inbound velocity it will see the black hole evaporate in extreme fast forward beneath it), which draws into question in my mind, if the Schwarzschild solution does radiate, whether the horizon and interior region of the Schwarzschild solution ever in fact form, as opposed to just an asymptotically deep portion of the exterior region. Of course, this will somewhat depend on whether an actual horizon has to form for radiation to occur, but my understanding from trying to track down literature on the subject is that an object will radiate prior to the formation of a horizon as long as it continues collapsing.

    So I’d expect that for an object that exceeds the mass supportable by the strongest fermion degeneracy pressure that exists in nature, it would collapse towards being a black hole, but Hawking radiation would provide radiation and thermal pressure to lift the degeneracy a bit and slow the collapse. It wouldn’t be able to stop the collapse entirely, as slower collapse would weaken the radiation, but as the object got closer and closer to being a black hole, the amount of radiation attending any given rate of collapse would increase. Eventually (saying “eventually” makes it seem like a long time, but it would probably be a splintered fraction of a second of proper time at the surface of the object for a stellar mass black hole, though it *would* be eons to an observer at rest at infinity) enough Hawking radiation would be emitted for degeneracy pressure to halt the collapse completely. Then, depending on the exact physics of matter in those conditions, either the entire thing would rebound and blow itself apart, distributing the original matter back out into space (likely on an external timescale similar to the evaporation of a black hole of that mass, but with a very different distribution of emitted particles than would come from Hawking radiation), or there would not be sufficient energy left for a rebound and the object would eventually become a degenerate star in equilibrium near the upper mass / lower radius limit for the strongest fermion degeneracy pressure. This would happen in somewhat less than the timescale for evaporation of a black hole with the mass of the object that initially collapsed, as it would stop radiating away at some mass measured in solar masses (meaning that if radiation did continue, it would have eons to go).

    In either case, I would expect that the entire process would look to an external observer just like a black hole forming and slowly radiating itself away well into the old age of the universe, but that an internal observer would experience something quite different than predicted by plain GR: hellacious deceleration to a stop just a whisker outside of the horizon radius, unimaginable temperatures, and even more unimaginable time dilation: hurled from the youth of the universe to its dotage in an eyeblink.

    Now I’m admittedly a layman who is very weak on the math, so maybe there’s something there that actually makes things doubtful, but I don’t get why information loss in black holes is considered an issue by the experts, and I’m inclined to think that Hawking radiation even prevents a complete collapse into a black hole from occurring at all (so that all the information that went in remains accessible, though getting to it with any kind of data rate won’t be possible for eons). I’ve heard mumbo-jumbo about a “firewall” at the horizon caused by broken entanglements being a possible solution to the whole thing, but this seems redundant, as basic GR seems to predict a “firewall” at the horizon if the Schwarzschild solution radiates at all.
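
    For concreteness, the standard relation behind the intuition above is the Tolman law for the locally measured Hawking temperature seen by a static observer hovering at radius r (a textbook relation I’m adding for clarity; the free-falling case is kinematically different, and that difference is part of what the experts argue about):

        \[ T_{\mathrm{local}}(r) \;=\; \frac{T_H}{\sqrt{1 - r_s/r}} \;\longrightarrow\; \infty \quad \text{as } r \to r_s, \qquad r_s = \frac{2GM}{c^2} \]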

    1. @Jon Brase
      “One argument in physics that confuses me to no end as a layman, and makes me wonder if the experts are missing the obvious, is the brouhaha over the black hole information paradox.”

      I have seen every one of your remarks raised and discussed. None of them was conclusive.

    2. > I don’t get why information loss in black holes is considered an issue by the experts, and I’m inclined to think that Hawking radiation even prevents a complete collapse into a black hole from occurring at all

      I would say it’s considered an issue by experts because the experts don’t (yet) have any way of proving that your solution–that when quantum corrections are taken into account, no true event horizon ever forms and no singularity ever forms, and quantum unitarity is maintained throughout–is correct. In other words, it’s one thing to say that no true black hole ever forms; it’s another to know *how* nature makes that happen.

  17. I wonder sometimes what was going on during the 1920s in the early era of quantum physics. The more I dig into some of the origins of the Copenhagen consensus, the less innocent the generation of that prevailing worldview seems (which has been challenged off and on ever since because you can’t build any kind of fundamental model of the world from that interpretation that doesn’t have glaring internal problems.)

    It’s more than scientists confronting the genuinely baffling results and being confused or arriving at temporary ad-hoc models. It’s more than trying to figure out how to relate a set of successful mathematical machinery to the world conceptually. If that were all that were going on, there wouldn’t have been ideological purges. There wouldn’t have been a *consensus* about something that the honest admitted no one understood! Some scientists who were still struggling to arrive at a genuine understanding of the world, like Einstein, Schrödinger, and Bohm, were branded with bizarre labels (“juvenile deviationist”, “traitor”) by others who acted as enforcers of this consensus. (Rosenfeld comes to mind, who from his private correspondence more or less plotted to ruin certain physicists for failing to get with some anti-realist philosophical program.) People who pointed out genuine contradictions and problems were treated like dimwitted idiots and shunned. A whole boatload of social hostility and drama seems to have existed.

    There were deliberate choices made to interpret and model things in certain ways that were in no way required by experiment, and *some* kind of program in the group surrounding Bohr seems to have been driving it. It seems obscurantist!

  18. This puts me in mind of an old post of the month from a working scientist at TalkOrigins, titled “Running the Scientific Gauntlet”…discussing the tests he gave to his ideas before taking them to others (let alone the public) —

    After getting through the basics (does the idea match the data? meaning, of course, you have to know what the data actually say…and if it doesn’t match certain observations, are there good reasons why?…these questions, according to the author, wash out most of his clever new ideas) — he starts asking other questions, like: “Is this idea new?” And if so (relevant to this post) “Why did nobody think of it before?”

    I get good mileage out of [that question], however, because it can lead me to better answers to the previous question — ‘this is the kind of thing so-and-so might have come up with, are you really sure that he didn’t?’. Answers like ‘I’m such a sooper geenius that those morons can’t even understand my brilliance’ are not allowed. What is it that I knew about in coming up with the idea that people in the field didn’t, or don’t commonly? (new observations, ideas swiped from one field and applied to another, new mathematics, new computational abilities, …)

    …Eric’s post adds a possibility he didn’t mention: “the field is ticking along with an assumption the practitioners haven’t examined.” Possibly that’s because, given the context, the TalkOrigins author’s goal was to gently discourage cranks (who don’t get past step one, “knowing what the observations are and whether their theory fits them”).

  19. In a deliciously-meta twist of epistemology, reliance on the authority of experts turns out to be inconsistent with what quite a few experts on science say about the matter. Chief among them is Richard Feynman, a Nobel laureate in physics and hence an expert on scientific work by anyone’s definition. In a much-quoted speech to a conference of high-school science teachers, he once argued that experts are a feature not of science but of cargo-cult science. Science, by contrast, is “the belief in the ignorance of experts”. Here is Feynman’s quote in context:

    In the same way, it is possible to follow form and call it science, but that is pseudo-science. In this way, we all suffer from the kind of tyranny we have today in the many institutions that have come under the influence of pseudoscientific advisers.

    We have many studies in teaching, for example, in which people make observations, make lists, do statistics, and so on, but these do not thereby become established science, established knowledge. They are merely an imitative form of science analogous to the South Sea Islanders’ airfields–radio towers, etc., made out of wood. The islanders expect a great airplane to arrive. They even build wooden airplanes of the same shape as they see in the foreigners’ airfields around them, but strangely enough, their wood planes do not fly. The result of this pseudoscientific imitation is to produce experts, which many of you are. [But] you teachers, who are really teaching children at the bottom of the heap, can maybe doubt the experts. As a matter of fact, I can also define science another way: Science is the belief in the ignorance of experts.

    When someone says, “Science teaches such and such,” he is using the word incorrectly. Science doesn’t teach anything; experience teaches it. If they say to you, “Science has shown such and such,” you might ask, “How does science show it? How did the scientists find out? How? What? Where?”

    It should not be “science has shown” but “this experiment, this effect, has shown.” And you have as much right as anyone else, upon hearing about the experiments–but be patient and listen to all the evidence–to judge whether a sensible conclusion has been arrived at.

    Those are the key points for purposes of this thread, but the entire speech is well worth one’s time to read.

  20. I have long (since an injury to one eye) had the thought that there might be a better alternative to stereoscopic vision for robotic vision systems. I was still capable of good depth perception with a single eye, based on whether an object was in focus. An alternative would be to use lenses with a very shallow depth of field, take many pictures every second at various focus distances, and throw away everything that is not in focus; the focus distance at which an object (or a portion of one) is sharp tells you its distance from the camera. Let a computer reconstruct the multiple images into a 3D map.
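
    Something like this minimal sketch of the reconstruction step (hypothetical names; it assumes you already have a “focal stack” of grayscale frames, as NumPy arrays, captured at known focus distances):

    import numpy as np

    def sharpness(img):
        # discrete Laplacian as a cheap focus measure: in-focus regions
        # have strong local contrast, so |Laplacian| is large there
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        return np.abs(lap)

    def depth_from_focus(stack, focus_distances):
        # stack: (n, h, w) array of frames, one per focus distance
        scores = np.stack([sharpness(f) for f in stack])
        best = np.argmax(scores, axis=0)          # sharpest frame per pixel
        return np.asarray(focus_distances)[best]  # (h, w) depth map

    A real system would need per-lens calibration and smoothing, but the core “keep only what is in focus” step is just an argmax over a sharpness measure.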

    1. I used a device that seemed to work by a similar principle in grad school: An Olympus LEXT confocal microscope. It scanned its focus and observed the image of a laser to reconstruct a heightmap of an image. It also scanned the focus and did some sort of sharpness optimization to piece together multiple images for greater depth of field.

    1. >Another set of immodest people resist the pressure of power and money to say what they know to be true:

      In order for this to be virtuous, your contrarianism has to actually describe reality. Difficult to do if the models you rely on keep delivering predictions that fail the Fischer test.

      1. So does chemotherapy. Doctors never get their predicted lifespans right. I assume you will reject chemotherapy too?

    2. Thanks to Trump, it’s going to be the *classiest* warming we’ve ever had. You’re going to love your warming.

      Make Warming Great Again!

  21. Here: https://americanaffairsjournal.org/2017/08/ricardos-vice-virtues-industrial-diversity/

    which, if I understand it correctly, challenges a widely-held theory in economics that depends on a rarely debated conception of “capital” – one that treats a sack of gold, a field of land, and a factory full of machinery as identically mobile (or immobile) across national borders, while jewelers and farmers and factory workers are regarded as held to their respective locales by intangible barriers of language and custom.

    I blame Marx, who divides the “factors of production” into perishable “labor” and durable “everything else” – gold, land, location, factory, traditions of the local tribes, rule of law, infrastructure (roads and sewers, and more lately power grids and data networks) – many of which are far from portable either within or without a given territory, and certainly do not literally get up and walk away in pursuit of happiness, property, or a higher rate of return on (hours) invested.

    I cast this blame knowing Ricardo died when Marx was barely five. I blame Marx for just about everything, on general principles. (Which I seldom examine closely.)

  22. Economics provides many examples of expert consensus fairly challenged by outsiders more than by other experts.

    Popular puzzler (among other talents) Martin Gardner challenged the premises of the famous “Laffer Curve” by pointing out that economic activity (one axis of the curve’s graph) was affected/afflicted by more than one tax or tax rate. Property tax, inheritance tax, sales tax, capital gains tax, import tariffs, export tariffs, …

    So a change in the rate of one tax may, or may not, shift the afflicted economic activity in the direction predicted by the Laffer Curve, if (ceteris paribus) all the other taxes together produce an even higher burden. For example, take a theoretical model with only two taxes: an inheritance tax and a sales tax. Both could be “high”, and Laffer’s curve (never actually measured and graphed, as far as I know) would predict that “lowering” the sales tax would “raise” the amount collected, by increasing economic activity of that sort. But if accumulation of wealth into an estate results in that estate being taxed to oblivion, then raising the sales tax raises collections despite the “curve”, because the money has nowhere better to go.

    The Laffer curve is regularly attacked on various grounds, but the layman’s (Gardner’s) argument is one I seldom hear advanced, or refuted, by any of the expert parties in the scrum.

    1. As I recall… Martin Gardner’s contribution to the Laffer curve was to attack it as quackery and sudo science (which it is). I’ve actually never read his argument, so I’ll take the time to read it. My assumption is that whatever argument he advanced, it must be 1: self-evident, needing no reply; 2: humourous and witty; or 3: both.

      1. “sudo science” is very different than pseudoscience. When you do pseudoscience, people laugh at you. When you do sudo science, you get the result “$NAME is not in /etc/sudoers, this incident has been reported”, and then hear a distant rumble of thunder.

    2. ‘But if accumulation of wealth into an estate results in that estate being taxed to oblivion, then raising the sales tax raises collections despite the “curve”, because the money has nowhere better to go.’
      – Sigh… people can choose not to make extra money. They can value leisure over accumulation of wealth. If the marginal value of extra work is less than the marginal value of extra leisure time, they will take more leisure time (or not try hard for the promotion, etc.). Sales (and income) taxes lower the marginal value of extra work.

      Furthermore, your entire argument is nonsense. Maybe Gardner’s isn’t (I haven’t read his), but yours definitely is.

      1. Oh, any idiocy is definitely mine, not Gardner’s. If I can encourage you to find and follow his examples, however, my clowning around will have been worthwhile.

        Which brings me to the point — that Gardner, a non-economist, created a memorable challenge to a prevailing economic theory, and even economists who did NOT respect Laffer or his curve would not — as far as I can tell — address, refute, or even acknowledge Gardner’s claims. I’m reminded of a court throwing out a victim of tort for lack of standing. Unwilling to address either the victim’s injuries or the offender’s errors, the process of adjudication chose to abdicate.

    3. Gardner’s criticism is wrong. For purposes of exposition, the Laffer Curve is presented in a simplified economy, with one tax and one measure of economic activity. But the concept applies in an economic system with multiple commodities and multiple taxes. This can all be worked out mathematically, but cannot be represented on a 2-dimensional graph.

      Where a particular economy is on the Laffer curve relative to a particular tax is an empirical matter. And yes, economists do estimate Laffer Curves — more specifically, they estimate tax elasticities, which are equivalent to estimating the slope of the LC. The CBO estimates them when they score tax plans.

    4. > Popular puzzler (among other talents) Martin Gardner challenged the premises of the famous “Laffer Curve” by pointing out that economic activity (one axis of the curve’s graph) was affected/afflicted by more than one tax or tax rate. Property tax, inheritance tax, sales tax, capital gains tax, import tariffs, export tariffs, …

      This just gives you a multidimensional “Laffer field” rather than a one-dimensional curve, but you can still take a slice across any one of the dimensions and get a curve for any particular tax. What makes me doubtful about the utility of the Laffer curve is this: it’s obvious that tax revenue will be zero at the extremes (if you have no taxes, there will be no revenue; if you confiscate everything, the economy will die and you’ll in short order have zero revenue – or, perhaps more likely, taxes will be paid in lead and the postage in gunpowder), and presumably the curve is smooth near those points. But getting reliable data on the actual shape of the curve away from the extremes is almost impossible, because everyone will be fudging the data to fit their political preconceptions – or, if they don’t fudge the data, will be choosing a way to slice the Laffer field that fits their politics.
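
      For what it’s worth, here’s a toy illustration of that slicing (the functional form is entirely made up; this is a sketch of the concept, not an estimate of anything):

      import numpy as np

      def revenue(sales_rate, estate_rate):
          # toy economy: taxable activity shrinks as the combined burden
          # rises; the exponential decay is an arbitrary stand-in
          activity = np.exp(-3.0 * (sales_rate + estate_rate))
          return (sales_rate + estate_rate) * activity

      # hold the estate-tax rate fixed and sweep the sales-tax rate:
      # a one-dimensional slice through the two-dimensional field
      for s in np.linspace(0.0, 1.0, 11):
          print(f"sales rate {s:.1f} -> revenue {revenue(s, 0.3):.3f}")

      Even in this toy, the revenue-maximizing sales rate sits wherever the *combined* rate hits 1/3, so the peak of the slice moves whenever the other tax changes – which is the substance of the objection above.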

  23. Another little thing about the whole quantum-state business that has always bewildered me: all particles have mass, and all mass has gravity. If a particle is in superposition, it supposedly occupies a range of positions. So where is the mass? And therefore, where is the gravity? Is it smeared out over space? If so, wouldn’t there be a difference in the measured gravitational force between superposition and non-superposition, because one is smeared out and the other isn’t?

    I asked a physicist about this, and he answered that the mere fact that you measure the gravitational force collapses the waveform. But since the gravitational force stretches out over incredibly long distances (weak, granted), shouldn’t its effect on other objects count as “measuring”, and therefore collapse the waveform? I know this is very similar to your objection.

  24. Eric,

    Do you mind expanding on your skepticism of “doomsday” AI scenarios, as outlined by Yudkowsky, Bostrom, Musk, etc?

    I’m particularly curious to hear your reasoning because, as someone exposed to the “rationalist” memeplex, you have probably heard the same arguments I have, and considered them seriously; and given your background, I would expect that you wouldn’t dismiss them without very good reason.

    This question is not motivated solely by idle curiosity. I’m an academic and am seriously considering whether, given the potential existential threat posed by AI, I should shift my research towards this topic. So I’m particularly interested in good skeptical arguments, since I think I’ve heard the best alarmist arguments and have found them very convincing.

    1. >Do you mind expanding on your skepticism of “doomsday” AI scenarios, as outlined by Yudkowsky, Bostrom, Musk, etc?

      What I’m skeptical about is the general proposition of a general-purpose AI capable of bootstrapping itself to superintelligence, which is a precondition for either the Skynet or paperclip-maximizer scenarios.

      Note that I say “skeptical”, not “entirely dismissive”. I think it’s good that there are people working the AI-ethics problem, and don’t necessarily want to discourage you or anyone else from doing so. Because, even if it turns out we don’t have to restrain super-AIs, the things we will learn about ethical reasoning and decision theory from the effort seem more than worth it. And I could be wrong; the singularitarians’ greatest hopes and nightmares could be imminent.

      What I observe is that we now know how to build special-purpose AIs that can do amazing things like being better than a human at Go, but we’re really no closer to a Turing-complete AI than we were 50 years ago. Yes, I know, deep learning – the thing is, training a deep-learning system to do a task well doesn’t generalize. There’s no “common sense” there, in the sense of an ability to share learning across multiple domains and a general world-model in which specialized knowledge can be situated.

      We don’t have more than the vaguest idea even of what kind of cognitive architecture you need to support common sense. We have guesses at it like Cyc, but these have conspicuously failed to outcompete either human or AI specialists (they’re never going to beat Garry Kasparov at chess, let alone Deep Blue). There are important aspects of the human world-modeling architecture, like emotion, that we barely know how to relate to anything the AI field might recognize as cognition. And others that seem inseparable from our embodiment, like proprioceptive maps.

      I am not claiming these problems must be forever unsolved, but they’re also not things we can expect a deep-learning system or super-Eurisko to bootstrap itself up to knowing – they’re as biologically contingent as all hell. Our AIs are not yet entangled with the reality outside their silicon enough to learn from the totality of that world, and they’re not shaped to it by a hundred million years of evolution, either.

      Thus, for the foreseeable future I think AIs will remain idiot savants, conquering specialized tasks in depth but continuing to fail Turing-completeness. That requires breadth, not depth, and breadth is a quality we barely even begin to understand, let alone know how to code into an AI.

      1. > Thus, for the foreseeable future I think AIs will remain idiot savants, conquering specialized tasks in depth but continuing to fail Turing-completeness.

        Nitpick: You mean “fail the Turing test”, right? Building a Turing-complete computer is a solved problem, modulo finite memory.

        Of course, if the human you use is a lit-crit PhD, passing the Turing test is also a solved problem.

        1. It used to be a joke in the Mostly Wrong History of Programming Languages:

          1972 – Alain Colmerauer designs the logic language Prolog. His goal is to create a language with the intelligence of a two year old. He proves he has reached his goal by showing a Prolog session that says “No.” to every query.

          You don’t need to mock lit-crit when there’s a perfectly real example, though! A few years back the Goostman chatterbot attempted to use this kind of approach: passing for human by imitating a foreign ESL speaker who was also underage and grouchy and tended to deflect and insult the examiner rather than answer the questions posed to it. It managed to pass for a 13-year-old for a few minutes to a few people this way, and hack media made a big noise about this bot supposedly passing the Turing test.

          But while a constant flow of “You’re trying to trick me”, “That’s not an interesting question, let’s discuss something else” and “I’ll have to think about that” can do a nice imitation of human ignorance, I don’t think that’s where the goalposts for the Turing test should be.

            1. That is very blatantly obviously faked and does not pass the Turing test.

              “(Noun) is (Descriptor), says (Bigname); however, according to (Othername)(Citation), it is not so much (Noun) that is (Descriptor), but rather the (Other Descriptor) of (Noun)”.

              That exact form of madlibs paragraph appears three times, each time with different names and nouns sprinkled into the variables, but with the punctuation remaining perfectly identical, to the detriment of proper comma-grammar – rather a giveaway of a computer template (see the sketch at the end of this comment).

              Also, citation only for the second name in this template, never the first. Even if the same person has their name in the first position in one paragraph and the second position in another paragraph.

              Moreover, no paragraph ever continues a topic from its predecessor. I have had the distinct displeasure of reading the works of some human literary masturbators at shitty departments consumed by obscurantism and tribal-credentialist status-signaling, and they are rather more topically persistent.

              This generator is like a political cartoon: very recognizable, but also very caricatured and distorted. It essentializes and highlights a failure mode of literary masturbators. It should not be considered an accurate imitation at all.
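
              To make the giveaway concrete, here’s a minimal sketch of a template of that shape (made-up word lists, purely illustrative):

              import random

              template = ("{noun} is {desc}, says {big}; however, according to "
                          "{other}{cite}, it is not so much {noun} that is {desc}, "
                          "but rather the {desc2} of {noun}.")

              nouns = ["society", "the narrative", "discourse"]
              descs = ["unattainable", "a legal fiction", "dead"]
              names = ["Lacan", "Derrida", "Baudrillard"]

              print(template.format(
                  noun=random.choice(nouns), desc=random.choice(descs),
                  big=random.choice(names), other=random.choice(names),
                  cite="[1]", desc2=random.choice(descs)))

              # every output shares exactly the same commas and semicolon, and
              # the citation only ever attaches to the second name, because only
              # the variables change; the punctuation never does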

  25. Thrasymachus’ rebuttal makes the same mistake I have seen made often in discussions of this general topic: it fails to ask what the “consensus of experts” is based on.

    If the consensus of experts is based on a generative model that has proven predictive power – it has a track record of getting things right – then yes, you are unlikely to do better unless you are an expert yourself.

    But often the “consensus of experts” has no such predictive power. It’s just some version of “we’ve always done it this way”, etc. Or else the “consensus” is motivated not by a genuine desire to solve the actual problem but by outside considerations, often political in nature (*cough* global warming *cough*).

    In the latter case, not only is it a lot easier for a non-expert to have a useful insight that the “experts” don’t have (because the “experts” aren’t really experts to begin with), but it is a *bad* idea to follow the consensus, even if you yourself don’t have any useful insight. A “consensus” that is not based on a track record of correct predictions is useless as a guide to belief.

    1. And this brings us to the question of the intended audience. David Brin complains on his blog that the Conservatives/Republicans are essentially rejecting the expert approach on damn-near everything. While I’m not sure I agree with Brin,* there are a ton of people out there who don’t have a clue about best practices in any form. How do you talk to them about becoming educated?

      * The left can be equally whacky, but the nutty part of the left isn’t dictating policy at the national level.

      1. > the nutty part of the left isn’t dictating policy at the national level.

        Not “nutty”, perhaps, but there are party positions on both sides that are based on “expert” opinion that does not come from any actual predictive power. (I alluded to one example in my previous post–that position is being pushed by the left.)

        > there are a ton of people out there who don’t have a clue about best practices in any form. How do you talk to them about becoming educated?

        I don’t know of any other way besides getting people to understand that any “consensus” not based on actual predictive power is useless as a guide to belief. Unfortunately, the fraction of people who are willing to regulate their beliefs this way is and always has been very small. It’s very difficult for humans to accept that “don’t know” really does mean “don’t know”–it doesn’t mean “since there’s no evidence against it, I can believe whatever I want”.

        1. The problem is that the average holder of a college degree in chemistry, for example, has some idea of what’s been tried before in chemistry and how it can be expected to work. They may be wrong about something, but at least they’re wrong without being oblivious, and they know where to look to see whether they’re correct or not.

          I’d guess that the target of Yudkowsky’s arguments is the person doing work outside their field, or arguing with a consensus of senior scientists, or fighting for an ideology (left or right) that has no predictive power… While you get the occasional wild talent, a chemistry breakthrough is most likely to be made by a chemist. A physics breakthrough is most likely to be made by a physicist. Etc. They may disagree with their colleagues, but they’ve at least got the chops to push the envelope.

      2. > the nutty part of the left isn’t dictating policy at the national level.

        Title IX Dear Colleague.

        1. Title IX makes sense until someone tries to extend it well past its black-letter meaning, such as the rape business at universities – they were actually persecuting people in cases where the woman insisted that the accused was her lover and she had not been raped!

  26. By the way, Eric, the plugin that does the five-minute edit timer, rather than doing something sensible like (pseudocode):

    timer = current time + 5 min
    when top-of-second event fires:
        display timer - current time
    end event handler

    Seems to be doing something like:

    timer = current time + 5 min
    countdown = 5 min
    when top-of-second event fires:
        loop until countdown == timer - current time:
            countdown -= 1 sec
            display countdown
        end loop
        sleep 1 sec
    end event handler

    If I lock and unlock my phone, the counter starts at whatever value was last displayed, then ticks down like crazy until it reaches the actual time remaining.
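
    For comparison, here’s the sensible version as a runnable sketch (Python as a stand-in for whatever language the plugin is actually written in; the point is that each tick derives the display from the wall clock, so a suspend/resume can’t desynchronize it):

    import time

    DURATION = 5 * 60  # the five-minute edit window, in seconds
    deadline = time.monotonic() + DURATION

    while True:
        # recompute from the clock every tick instead of decrementing a counter
        remaining = max(0, int(round(deadline - time.monotonic())))
        print(f"\r{remaining // 60}:{remaining % 60:02d}", end="", flush=True)
        if remaining == 0:
            break
        time.sleep(1)  # oversleeping is harmless; the next tick self-corrects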

  27. Basically, what I’ve learned in biology as a layman is that there are always some things that *a few* researchers have looked into, but most haven’t followed up on.

    For example, it sure looks like we can make mice live a lot longer than their normal lifespan, in lots of ways. Have university labs or pharma companies followed up with large animal or human studies? Usually not! Usually for incentive reasons that wouldn’t necessarily apply to a highly motivated individual funder.

    Cancer is similar; a lot of really promising early-stage treatments never get the follow-up research you’d need to bring them to market, again mostly for incentive reasons. Most new cancer drugs are in a very restricted set of drug classes; most of the underexplored but promising stuff is outside that set.

    I think you *can* do better than the “consensus” if you’re not operating under the same incentive pressures and can afford to take a more diverse set of strategies.

  28. My first technically-IT job was as a Tool Code tester for automotive assembly tools. It was my job to try to make the machine do things that the programmer hadn’t thought of, like spoofing a sensor to simulate a failed one to make sure the machine didn’t eat itself.

    One day the robotics people working alongside the tools people told me about a new software feature the robots all had: if the robot lost power and then regained it, the robot would detect its position and restart the cycle from that position, rather than an operator manually positioning it back to home before re-starting the cycle. I asked them to demonstrate. The operator picked up the robot controller, waited until the robot was in-cycle, hit the e-stop (simulating loss of power), and turned the power off at the breaker. He reversed those operations, and indeed, the robot “found itself” and continued in cycle. I observed the robot go through several more cycles, then pulled the breaker (an actual loss of power) before the operator could stop me. Lo and behold, the robot did NOT find itself, and had to be reset. A couple of weeks later the manufacturer came back with a code patch for the robots, fixing a flag registry that wasn’t being constantly saved. I was told that emergency patch cost the manufacturer over $100k.

    It was my job to find the stuff the programmers told me I wasn’t supposed to do. By the way, the assembly tools that I checked had an uptime of over 99.5% after production started – something that just didn’t happen at that time, in that industry.

  29. “If you have a tough problem, and it’s just you against the world’s experts, find their Fischer set. Model the kinds of analytical moves that will be natural to them, and then stay the hell away from those lines of play. Because if they worked your problem would be solved already.”
    — bingo!
    I’m often told, when researching an unsolved problem, to start by studying what the “experts” have already done.
    But I have always had a gut feeling that if I do, I’ll probably fall into the same traps they did. And my mind will be burdened with their knowledge, leaving less mental energy to be creative, to think in my own way.
