Jan 06

What Do You Believe That You Cannot Prove?

I wrote this for John Brockman’s 2005 Edge Question. Can’t see
any good reason not to blog it as well.


I believe that nature is understandable, that scientific inquiry is
the sharpest tool and the noblest endeavor of the human mind, and that
any “final answers” we ever get will come from it rather than from
mysticism, religion, or any other competing account of the universe.
I believe these things without being able to prove them despite — or
perhaps because of — the fact that I am a mystic myself.

Science may be the noblest endeavor of the human mind, but I believe
(though I cannot prove) that the most crippling and dangerous kind of
ignorance in the modern West is ignorance of economics, the way
markets work, and the ways non-market allocation mechanisms are doomed
to fail. Such economic ignorance is toxic, because it leads to insane
politics and the empowerment of those whose rhetoric is altruist but
whose true agenda is coercive control.

I believe that the most important moment in the history of philosophy
was when Charles Sanders Peirce defined “truth” as “predictive power”
and made it possible to talk about confirmation of hypotheses in a
non-circular way.

I believe the most important moment in the foreseeable future of
philosophy will come when we realize that mad old Nazi bastard
Heidegger had it right when he said that we are thrown into the world
and must cope, and that theory-building consists of rearranging our
toolkit for coping. I believe the biggest blind spot in analytical
philosophy is its refusal to grapple with Heidegger’s one big insight,
but that evolutionary biology coupled with Peirce offers us a way to
stop being blind. I believe that when the insights of what is now
called “evolutionary psychology” are truly absorbed by philosophers,
many of the supposedly intractable problems of philosophy will vanish.

I believe, but don’t know how to prove, a much stronger version of the
Sapir-Whorf hypothesis than is currently fashionable. That is, I
believe the way humans think is shaped in important ways by the
linguistic categories they have available; thinking outside those
categories is possible but more difficult, and carries higher friction costs.
Accordingly, I believe that some derivation of Alfred Korzybski’s
discipline of General Semantics will eventually emerge as an essential
tool of the first mature human civilizations.

I believe, but don’t know how to prove, that Julian Jaynes was on to
something very important when he wrote about the origin of
consciousness in the breakdown of the bicameral mind.

I judge that “dark matter” is no better than phlogiston as an
explanatory device, and therefore believe without being able to prove
it that there is something very deeply wrong with the standard model
of cosmology.

I believe, but cannot prove, that the “knowledge interpretation” of
quantum mechanics is pernicious nonsense, and that physical theorists
will eventually develop some testable form of nonlocal realism.

I believe, but cannot prove, that global “AIDS” is a whole cluster of
unrelated diseases all of which have been swept under a single rug for
essentially political reasons, and that the identification of HIV as
the sole pathogen is likely to go down as one of the most colossal
blunders in the history of medicine.

Much of the West’s intelligentsia is persistently in love with
anything anti-Western (and especially anti-American), an infatuation
that has given a great deal of aid and comfort to tyrants and terrorists
in the post-9/11 world. Besides these obvious political consequences,
the phenomenon Julien Benda famously called la trahison des clercs
has laid waste to large swathes of the soft sciences
through ideologies like deconstructionism, cultural relativism, and
postmodernism.

I believe, but cannot prove, that la trahison des clercs is
not a natural development of Western thought but a creation of
deliberate propaganda, directly traceable to the successes of Nazi and
Stalinist attempts to manipulate the climate of opinion in the early
and mid-20th century. Consequently I believe that one of the most
difficult and necessary tasks before us in the next half century will
be to banish the influence of totalitarian nihilism from science in
particular and our culture in general.

I know how to prove, or at least convincingly demonstrate, that
open-source software development produces better results than
secrecy and proprietary control. I believe that the same advantage
applies to any other form of engineering or applied science in which
the limiting factor of production is skilled human attention, but I
don’t know how to prove that general principle.

Oct 19

Predictability, Computability, and Free Will

I’ve been reading some philosophical discussion of the free-will/determinism question recently. Quite a number of years ago I discovered a resolution of this question, but never did anything with it because I assumed I had simply reinvented a well-known position and could not really contribute anything to the debate. However, the research I’ve done recently suggests that my resolution of the question is actually a novel one.

Like a lot of philosophy, the discussion of free will and determinism I’ve seen founders on two errors. One of these is Aristotelianism, an attachment to observer-independent two-valued logic in a system of universal categories as the only sort of truth. The other is a tendency to get snarled up in meaningless categories that are artifacts of language rather than useful abstractions from observed reality.

In this essay, I hope to show that, if one can avoid these errors, the underlying question can be reduced to a non-problem. More generally, I hope to show how ideas from computability and complexity theory can be used to gain some purchase on problems in the philosophy of mind that have previously seemed intractable.

Formulating The Problem

The free-will question is classically put thus: do we really have choices, or are our actions and behavior at any given time entirely determined by previous states of the universe? Are we autonomous beings, who ourselves cause our future actions, or meat robots?

The second way of framing the question gets at the reason most philosophers have for finding it interesting. What they really want to know is whether we cause our own actions and are responsible for them, or whether praise, blame, and punishment are pointless because our choices are predestined.

Thus the free-will question, which is traditionally considered part of metaphysics or the philosophy of mind, is actually motivated by central issues in moral philosophy. At the end of this essay, we will consider the implications of my proposal for moral philosophy.

Classical Determinism And Its Problems

The ways philosophers have traditionally asked these questions conceal assumptions that are false in fact and logic. First, the evidence says we do not live in the kind of universe where classical determinism is an option. In almost all current versions of physical theory there is an irreducible randomness to the universe at the quantum level. Thus, even if we knew the entire state of the universe at any given moment, its future states would not be determined; we can at best predict the probability distribution of those states.

Another characteristic of quantum theory is that observation perturbs the system being observed. Let’s sidestep that for the moment and introduce the concept of a perfect observer, with infinite computational capacity and the ability to take infinitely precise measurements in zero time without perturbing the system under observation. In a universe with quantum randomness, even this perfect observer cannot know the future.

Matters are worse for imperfect observers, who have only finite computational capacity, can take only finitely accurate measurements, and perturb what they measure when they measure it. Even in theories that preserve physical determinism, imperfect observers have two additional problems. One is that they perturb what they observe; the other is sensitive dependence on initial conditions.

Two physical systems that are measurably identical to an imperfect observer and evolve by the same deterministic laws can have different futures because unmeasurably small differences between their present states are chaotically amplified over time — and some of those unmeasurable differences may be produced by the act of observation!

Even in the absence of sensitive dependence on initial conditions, though, an imperfect observer’s attempt to predict the future may fail without warning because his finite computer loses information to round-off errors (there are more subtle limits arising from finite storage capacity, but round-off errors will stand as a readily comprehensible representative of them). And like it or not, human beings are imperfect observers. So even without quantum indeterminacy, we cannot know the future with certainty.
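
Here is a minimal sketch of the point, using the chaotic logistic map and an arbitrary perturbation far smaller than any realistic measurement could resolve (all the numbers are illustrative, nothing more):

    # Illustrative Python sketch: sensitive dependence on initial conditions.
    # The map, the perturbation, and the threshold are arbitrary choices.
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    a = 0.123456789        # the "true" state of the system
    b = a + 1e-12          # a "measurably identical" state
    for step in range(1, 80):
        a, b = logistic(a), logistic(b)
        if abs(a - b) > 0.1:
            print("prediction visibly fails after", step, "steps")
            break

Note that merely storing the state in finite precision already introduces an error of this kind; that is the round-off problem in miniature.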

For philosophical purposes, quantum indeterminacy and sensitive dependence on initial conditions in classical (non-quantum) systems have nearly indistinguishable effects. Together, they imply that classical determinism is not an option for imperfect observers, even in the unlikely case that quantum reality is not actually rolling dice.

Non-Classical Determinism and Irreducible Randomness

Philosophers have tended to make a fast leap from the above insight to the conclusion that humans do in fact have free will — but this conclusion is a logic error brought on by Aristotelian thinking. There is an unexcluded middle here: we may be meat robots in a universe that rolls dice, both non-determined and non-autonomous.

Most people (even most philosophers) find the idea that we are puppets on random strings even more repugnant than classical determinism. In classical determinism there is at least a perfect-observer view from which the story makes sense. The religiously inclined can believe in that perfect observer and identify it with God, and the rest of us can take some sort of fatalistic comfort in the face of our adversities that things could not after all have been any different.

In the indeterminate universe we seem to inhabit, the only way for even a god to know the future would be for it to intervene in every single collapse of a quantum state vector, and thereby to create that future by a continuous act of will. But if that were so, the behavior of all the matter in our bodies could be nothing but the god’s will. We’re back to determinism here, but it’s one in which a god is the sole causal agent of everything — good, evil, and apparent randomness. Some varieties of Hindu theology actually read like this; one rather lovely version has it that the entire universe is simply the vibration of the voice of the god Atman (or Brahman) chanting a giant “OM!” and will end untold eons in the future when He next draws breath. In the West this position has been called “occasionalism”.

The trouble with occasionalism is that it’s untestable. There is no observation we can make from within the universe to establish causal intervention from outside it. If we could do so, we would simply extend our conception of “the universe” to the larger domain within which causality operates — including the mind of Atman. The testability problem would immediately re-present itself. (This, of course, is a slightly subtler version of the standard rebuttal to the “First Cause” argument for the existence of a creator-God.)

For those of us unwilling to take occasionalism on pure faith, then, free will is about the only comfort an indeterminate universe can offer. Our experience of being human beings is that some of the time our behavior is forced by factors beyond our control (for example, if we fall off a cliff we will accelerate at a rate independent of our desire or will about the matter), but that at other times we make unforced choices that at least seem to causally originate within our own minds and not elsewhere.

To carry the discussion further, we need to decide what the term “free will” means. Our challenge is to interpret this term in a way that is both consistent with its ordinary use and fits into a larger picture that is rationally consistent with physical theory. Try as I might, I can only see two possible ways to accomplish this. One has to do with autonomy, the other with unpredictability.

The Autonomy Interpretation of “Free Will”, And Its Problems

Most people, if pressed, would probably come up with some version of the autonomy interpretation. All the philosophical accounts of “free will” I’ve ever seen are based on it. We have no problem with the idea that our choices are caused by, or even determined by, our previous thoughts, but the intuitive notion of free will is that our thoughts themselves are free. This implies that the measure of a human’s degree of “free will” is the degree to which each human being’s history of mental states is autonomous from the rest of the universe — not caused by it, but capable of causing changes in it.

There are several problems with this account. The most obvious one is that we can often locate causal influences from the rest of the universe into our mental states. To anyone who doubts this, I recommend the experience of extreme hunger, or (better) of nearly drowning. These are quite enlightening, and philosophers would probably talk less nonsense if they retained a clearer grasp of what such experiences are like.

Less extremely, evidence from sensory-deprivation experiments suggests that a mind deprived of sensory input for too long disintegrates. Not only does the rest of the universe have causal power over our mental states, but we cannot maintain anything recognizable as a coherent mental state without that input. Which makes sense; evolutionary biology tells us that we are survival machines shaped by natural selection to cope with a reality exterior to our minds. Consciousness, reasoning, and introspection — the “higher” aspects of human mental activity that mostly concern philosophers — are recent add-ons.

None of this evidence outright excludes the possibility that there is some part or aspect of our normal mental activity that is autonomous, uncaused but causal. The real problem, the problem of logic and principle, is that we don’t know how the autonomously “free-willing” part of the mind (if it exists) can be isolated from the part that is causally driven by sensory stimuli and normal physical laws.

For materialists like myself who model the mind as a kind of software or information pattern that happens to run on an organic substrate, this is an impossible problem. We have no warrant to believe that any part of that system is causally autonomous from the rest of the universe. In fact, on functional grounds it seems quite unlikely such a part would ever evolve — what would it be good for?

But the problem is not really any simpler for dualists or mysterians, those who hold that minds have some “soul” attached that is non-physical or inaccessible to observation. That “soul” has to interact with the mind somehow. If the interaction is one-way (soul affects mind, but mind does not affect soul) then the soul is simply a sort of blind pattern- or noise-generator with no access to reality. On the other hand, if mind affects soul we are right back to the beginning of the problem — is there anything in “soul” that is neither random nor causally driven by “mind”, which we already understand to be either random or causally driven by the rest of the universe?

The basic problem here is the same as the basic problem with occasionalism. Define the “causal universe” as all phenomena with observable consequences, whether those phenomena are material or “soul” or the voice of Atman. Unless the occasionalists are right and it is all just Atman saying a trillion-year “OM!”, the concept of “soul” does not actually in itself buy us any space between determinism and chance. The autonomy account of free will leaves us finally unable to locate anywhere autonomy can live.

The Predictability Account Of Free Will

I have invented a predictability account of free will which is quite different. Instead of struggling with the limits of imperfect observation, I consider them definitional. I say human beings (or any other entity to which we ascribe possession of a mind) have “free will” relative to any given observer if that observer cannot effectively predict their future mental states.

By “effectively predict” I mean that the observer, given a complete description of the mind’s state and a set of stimuli applied to that state, can predict the state of the mind after those stimuli.

Since we have access to mental states only by observing the behaviors they generate, this is arguably equivalent to saying that an entity with a mind has free will with respect to an observer if the observer cannot predict its behavior. However, I specify the term “mental state” because I think the natural-language use of the term “free will” requires that we limit the candidates for it to entities which we believe to have minds and to which we thus attribute mental states.

I am deliberately not proposing a definition or theory of “mind” in this essay, because I intend my arguments to be independent of such theory. All I require of the reader’s theory of mind is that it not exclude human beings from having one.

Can There Be Minds Without Free Will?

The first thing we need to do is establish that this definition is not vacuous. Are there any circumstances under which an entity to which we ascribe mental states can fail to have free will?

A psychologist friend of mine with whom I discussed the matter reports that the answer is “yes”. The example case she reported is a bot (software agent) named Julia designed to fool people in Internet Relay Chat rooms into believing it was a person. Julia could be convincing for a few minutes, but human beings would eventually notice mechanical patterns as they came to the edge of her functional envelope. Studies of humans interacting with Julia showed that they continued to ascribe intentions and mental states to the bot even after noticing the determinism of its behavior. The study evidence suggests that they went from modeling Julia as being like a normal adult human to being like a child or a retardate.

This was not even the first such result. The AI literature reports humans projecting personhood even on much cruder early bots such as the famous ELIZA simulation of a Rogerian psychotherapist — and not giving up that attachment even after the shallow and mechanical algorithms used to generate responses were explained to them.

The reader may object, based on some theory of “mind”, that Julia did not actually have one. But it is possible that we are all Julia. Suppose that the human mind is a deterministic machine with a very large but finite number of states; suppose further that the logic of the mind has no sensitive dependence on initial conditions (that is, its states are coarse enough for us to measure accurately). This simplest-possible model we’ll call the “clockwork mind”. If Julia has a mind, this is the kind of mind she has.

In principle, any clockwork mind can be perfectly simulated on a computer. The computer would have to be more complex than the clockwork mind itself. To predict the state of the clockwork mind, just run the simulation faster than the original. But — and this is an important point — a clockwork mind cannot be predicted by itself, or by any clockwork mind of comparable power to itself. Thus, whatever viewpoint a hypothetical perfect observer or god might have, human beings have free will with respect to each other.
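
Here is a minimal sketch of a clockwork mind and its predictor. The states and stimuli are invented for illustration, and the “observer” is nothing more than a faster copy of the same transition table:

    # Illustrative Python sketch: a "clockwork mind" as a finite-state machine.
    # The observer predicts by running a copy of the same table ahead of time.
    TRANSITIONS = {
        ("calm", "insult"): "angry",
        ("calm", "praise"): "pleased",
        ("angry", "insult"): "angry",
        ("angry", "praise"): "calm",
        ("pleased", "insult"): "calm",
        ("pleased", "praise"): "pleased",
    }

    def run(state, stimuli):
        for s in stimuli:
            state = TRANSITIONS[(state, s)]
        return state

    planned_stimuli = ["insult", "insult", "praise"]
    print("observer's prediction:", run("calm", planned_stimuli))   # 'calm'

The point of the sketch is only that prediction here amounts to nothing more than running a faster copy of the same machine.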

It is also worth noting that human beings could have clockwork minds even in a universe of chaotic or quantum indeterminacy. If you put enough atoms together, the Law of Large Numbers will normally swamp quantum effects. If you make the states of a finite-state machine sufficiently coarse, there won’t be unmeasurable initial-condition differences to be amplified. After all, clockwork does tick!
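
The swamping claim is just the Law of Large Numbers at work; a trivial numerical illustration (the sample sizes are chosen arbitrarily):

    # Illustrative Python sketch: averaging many independent +/-1 fluctuations.
    # The relative fluctuation of the total shrinks roughly as 1/sqrt(N).
    import random

    for n in (100, 10_000, 1_000_000):
        total = sum(random.choice((-1, 1)) for _ in range(n))
        print(f"N={n:>9}: average fluctuation = {total / n:+.5f}")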

The Indeterminate Mind

It is unlikely that humans have clockwork minds. The anatomy and physiology of the brain suggest strongly that it has chaotic indeterminacy. It may have quantum indeterminacy as well (the mathematician Roger Penrose suggested this in his book The Emperor’s New Mind, one of the favored texts of the new mysterians). It is possible that the mind cannot be modeled as a finite-state machine at all.

These distinctions make little difference, because what they all have in common is that they make the prediction problem far less tractable than for a clockwork mind. Thus, they widen the class of observers with respect to which a non-clockwork mind would have free will.

At the extreme, if human minds have intrinsic quantum uncertainty then even a perfect observer could not predict their future mental states, unless it happens to be an occasionalist god and the only cause of everything.

The most likely intermediate case is that the mind is a finite-state machine with sensitive dependence on initial conditions and an intractably large state space. In that case it might fail to have free will with respect to a perfect observer, but will have free will with respect to any imperfect observer.

Implications for Moral Philosophy

The binding I have proposed for the term “free will” does not rely on any supposed autonomy of the mind or self from external causes. From the perspective of traditional moral philosophy, it combines the worst of both worlds — a non-autonomous mind in an indeterminate universe. How, then, can human beings be appropriate subjects of praise, blame, or punishment? In what sense, if any, can human beings be said to be responsible for their actions?

The first step towards solving this problem is to realize that these questions are separable. Because we ascribe intention and autonomy to human beings and believe their future behavior is controlled primarily by those intentions, we explain acts of praise, blame, and punishment directed at human beings in terms of the supposed effects on their mental states. But this is where remembering that we have no direct access to mental states is useful; what we are actually after when we praise, blame and punish is to change observable future behaviors.

Thus, we also praise and blame and punish animals without much regard to whether they have mental states or free will. When training a kitten it is of little interest to us in what sense it might be choosing to crap on the rug; what matters is getting it to use the litterbox. Humans, like animals, are appropriate subjects of praise and blame and punishment to the extent that those communications effectively alter their behavior. The attribution of “responsibility” is at best a sort of convenient shorthand, and at worst a red herring.

In any case the question of “responsibility” is simply the question of free will in another guise, and admits the same answer within a predictive account. An observer may hold a mind “responsible” for the actions it initiates to the extent that the observer is unable to identify external causes of those actions.

This accords well with the way people normally reason about responsibility. If all we know of a man is that he murdered someone in a fit of rage, our inclination is to hold him responsible. But if we then learn that he was unwittingly dosed with PCP, we have an external cause for the rage and can no longer consider him fully responsible.

Conclusion

The predictivist account of free will I have proposed here solves the classical problems with the autonomy account of free will, accords with natural-language use of the term “free will”, and is consilient with physical theory. It does so at the cost of making the ascription of free will dependent on the computational and measurement capacity of the observer.

The parallel with the way “space” and “time” are redefined in Relativity Theory is obvious. As in that theory, our intuitions about “free will” are largely valid in human-observable ranges but tend to break down at extremes. Relativity had to abandon the idea of absolute space/time; in our context, we need to abandon the ideal of the perfect observer and accept that finite computational capacity is yet another fundamental limit on theory-building.

I believe a similar change in stance is likely to prove essential to the solution of other outstanding problems in philosophy.

Dec 22

Racism and group differences

At the end of my essay What good is IQ?, I suggested that taking IQ
seriously might (among other
things) be an important step towards banishing racism. The behavioral
differences between two people who are far apart on the IQ scale are
far more significant than any we can associate with racial origin.
Stupidity isn’t a handicap only when solving logic problems; people
with low IQs tend to have poor impulse control because they’re not
good at thinking about the long-term consequences of their actions.

Somebody left a comment that, if what I was reporting about group
differences in average IQ is correct, the resulting behavior would be
indistinguishable from racism. In particular, American blacks (with
an average IQ of 85) would find themselves getting the shitty end of
the stick again, this time with allegedly scientific justification.

This is an ethically troubling point. It’s the main reason most
people who know the relevant statistical facts about IQ distribution
are either in elaborate denial or refusing to talk about what they know.
But is this concern really merited, or is it a form of tendermindedness
that does more harm than good?

Let’s start with a strict and careful definition: A racist is a
person who makes unjustified assumptions about the behavior or
character of individuals based on beliefs about group racial
differences.

I think racism, in this sense, is an unequivocally bad thing. I
think most decent human beings would agree with me. But if we’re
going to define racism as a bad thing, then it has to be a behavior
based on unjustified assumptions, because otherwise there
could be times when the fear of an accusation of racism could prevent
people from seeking or speaking the truth.

There are looser definitions abroad. Some people think it is
racist merely to believe there are significant differences
between racial groups. But that is an abuse of the term, because it
means that believing the objective truth, without any intent to use it
to prejudge individuals, can make you a racist.

It is, for example, a fact that black athletes tend to perform
better in hot weather, white ones in cool weather, and oriental asians
in cold weather. There is nothing mysterious about this; it has to do
with surface-area-to-volume ratios in the population’s typical
build. Tall, long-limbed people shed heat more rapidly than stocky and
short-limbed people. That’s an advantage in Africa, less of one in the
Caucasian homelands of Europe and Central Asia, and a disadvantage in
the north Asian homeland of oriental asians.

And that’s right, white men can’t jump; limb length matters there,
too. But whites can swim better than blacks, on average,
because their bones are less dense. I don’t have hard facts on
how asians fit that picture, but if you are making the same guess I am
(at the other extreme from blacks, that is, better swimmers and worse
jumpers than white people) I would bet money we’re both correct. That
would be consistent with the pattern of many other observed racial
differences.

Sportswriter and ethicist Jon Entine has investigated the
statistics of racial differences in sports extensively. Blacks,
especially blacks of West African ancestry, dominate track-and-field
athletics thanks apparently to their more efficient lung structure and
abundance of fast-twitch muscle fiber. Whites, with proportionally
shorter legs and more powerful upper bodies, still rule in wrestling
and weightlifting. The bell curves overlap, but the means — and
the best performances at the high end of the curve — differ.

Even within these groups, there are racially-correlated
subdivisions. Within the runners, your top sprinters are likelier to
be black than your top long-distance runners. Blacks have more of an
advantage in burst exertion than they do in endurance. I don’t have
hard recent data on this as I do for the other factual claims I’m
making here, but it is my impression that whites cling to a thin lead
in sports that are long-haul endurance trials — marathons,
bicycle racing, triathlons, and the like.

It is not ‘racism’ to notice these things. Or, to put
it more precisely, if we define ‘racism’ to include
noticing these things, we broaden the word until we cannot justifiably
condemn ‘racism’ any more, because too much
‘racism’ is simply recognition of empirically verifiable
truths. It’s all there in the numbers.

Knowing about these racial-average differences in athletic
performance would not justify anyone in keeping a tall, long-limbed
white individual off the track team, or a stocky black person with
excellent upper-body strength off the wrestling team. But they do
make nonsense of the notion that every team should have a racial
composition mirroring the general population. If you care about
performance, your track team is going to be mostly black and your
wrestling team mostly white.

In fact, trying to achieve ‘equal’ distribution is a
recipe for making disgruntled underperforming white runners and
basketball players, and disgruntled underperforming black wrestlers and
swimmers. It’s no service to either group; you get neither efficiency
nor happiness out of that attempt.

Most people can follow the argument this far, but are frightened of
what happens when we apply the same kind of dispassionate analysis to
racial differences in various mental abilities. But the exact same
logic applies. Observing that blacks have an average IQ a standard
deviation below the average for whites is not in itself racist.
Jumping from that observation of group differences to denying an
individual black person a job because you think it means all black
people are stupid would be racist.

Let’s pick neurosurgery as an example. Here is a profession where
IQ matters in an obvious and powerful way. If you’re screening people
for a job as a neurosurgeon, it would nevertheless be wrong to use the
standard-deviation difference in average IQ as a reason to exclude an
individual black candidate, or black candidates as a class. This
would not be justified by the facts; it would be stupid and
immoral. Excluding the black neurosurgeon-candidate who is
sufficiently bright would be a disservice to a society that needs all
the brains and talent it can get in jobs like that, regardless of skin
color.

On the other hand, anyone who expects the racial composition of the
entire population of neurosurgeons to be ‘balanced’ in
terms of the population at large is living in a delusion. The most
efficient and fair outcome would be for that population to be balanced
in terms of the distribution of IQ — at each level of IQ the
racial mix mirrors the frequency of that IQ level within different
groups. Since the minimum IQ for competency in neurosurgery is closer
to the population means for
whites and asians than the mean for blacks, we can expect the
fair-outcome population of neurosurgeons to be predominantly white and
asian.

If you try to social-engineer a different outcome, you’ll simply
create a cohort of black neurosurgeons who aren’t really bright enough
for their jobs. This, too, would be a disservice to society (not to
mention the individual patients they might harm, and the competent
black neurosurgeons that would be discredited by association). It’s
an error far more serious than trying to social-engineer too many
black wrestlers or swimmers into existence. And yet, in pursuit of a
so-called equality, we make this sort of error over and over again,
injuring all involved and creating resentments for racists to feed
on.

Nov 17

What good is IQ?

A reader asks:

To clarify, while I believe natural selection explains a lot I have
caveats about IQ as a tool for testing intelligence. If you can’t
measure the coast of France with a single number how can you do it
with human intelligence?

Easily. Human intelligence is a great deal less complex than the
coast of France. :-)

It’s fashionable nowadays to believe that intelligence is some
complicated multifactor thing that can’t be captured in one number.
However, one of the best-established facts in psychometrics (the science
of measuring the mind) is that it is quite difficult to write a test of
mental ability that is not at least 50% correlated with all other such
tests. Or, to put it another way, no matter how you design ten tests for
mental ability, about half the variance in the scores for any one
of them statistically appears to be due to a “general intelligence”
that shows up on the other nine tests as well.
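
To see what that statistical claim amounts to, here is a minimal simulation sketch; the subject count, factor loadings, and noise level are invented for the demo and are not real psychometric data:

    # Illustrative Python sketch: ten "tests" sharing one common factor.
    # Subject count, loadings, and noise level are assumed values.
    import numpy as np

    rng = np.random.default_rng(42)
    n_subjects, n_tests = 1000, 10
    g = rng.standard_normal(n_subjects)                  # shared factor
    noise = rng.standard_normal((n_subjects, n_tests))   # test-specific noise
    scores = 0.7 * g[:, None] + 0.7 * noise              # ~50/50 variance split

    corr = np.corrcoef(scores, rowvar=False)
    mean_r = (corr.sum() - n_tests) / (n_tests * (n_tests - 1))
    top_share = np.linalg.eigvalsh(corr)[-1] / n_tests
    print("average pairwise correlation:", round(mean_r, 2))          # roughly 0.5
    print("variance carried by the largest component:", round(top_share, 2))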

Psychometricians call this general intelligence measure “g”. It
turns out to predict important real-world success measures quite well
— not just performance in school but income and job success as
well. The fundamental weakness in multiple-factor theories of intelligence
is that measures of intelligence other than g appear to predict
very little about real-world outcomes. So you can call a lot of other
things “intelligence” if you want to make people feel warm and fuzzy,
but doing so simply isn’t very useful in the real world.

Some multifactor theorists, for example, like to describe accurate
proprioception (an acute sense of body position and balance) as a kind
of intelligence. Let’s say we call this “p”. The trouble with this
is that there are very few situations in which a combination of high p
and low g is actually useful — people need to be able to balance
checkbooks more often than they need to walk high wires. Furthermore,
g is easier to substitute for p than the other way around; a person
with high g but low p can think up a way to not have to walk a high
wire far better than a person with low g but high p can think up a way
not to have to balance a checkbook. So g is in a strict functional
sense more powerful than p. Similar arguments apply to most of the
other kinds of specialized non-g ‘intelligence’ that have been
proposed.

Once you know about g, you can rank mental-capability tests by
how well their score correlates with g. IQ is valuable because a
well-composed IQ test measures g quite effectively. For purposes
of non-technical discussion, g and IQ can be considered the same, and
psychometricians now accept that an IQ test which does not closely track
g is defective.

A lot of ink has been spent by people who aren’t psychometricians
on insisting that g is a meaningless statistical artifact. The most
famous polemic on this topic was Stephen Jay Gould’s 1981 book
The Mismeasure of Man, a book which was muddled, wrong, and in some
respects rather dishonest. Gould was a believing Marxist; his
detestation of g was part of what he perceived as a vitally important
left-versus-right kulturkampf. It is
very unfortunate that he was such a persuasive writer.

Unfortunately for Gould, g is no statistical phantom. Recently g
and IQ have been shown to correlate with measurable physiological
variables such as the level of trace zinc in your hair and performance
on various sorts of reaction-time tests. There are hints in the
recent literature that g may be largely a measure of the default level
of a particular neurotransmitter associated with states of mental
alertness and speed of thought; it appears that calling people of
subnormal intelligence “slow” may not be just a metaphor!

IQ is one of several large science-related issues on which
political bias in the dominant media culture has led it to present as
fact a distorted or even reversed version of the actual science. In
1994, after Murray and Herrnstein’s The Bell Curve got a
thoroughly undeserved trashing, fifty leading psychometricians and
psychologists co-signed a summary of mainstream
science on intelligence
. It makes eye-opening reading.

The reasons many popular and journalistic accounts continue to
insist that IQ testing is at best meaningless and at worst a sinister
plot are twofold. First, this belief flatters half of the population.
“My IQ may be below average, but that doesn’t matter because IQ is
meaningless and I have high emotional intelligence!” is,
understandably, a favorite evasion maneuver among dimwits. But that
isn’t the worst of it. The real dynamite is not in
individual differences but in the fact that the distribution of IQ (and
hence of g) varies considerably across groups in ways that are
politically explosive.

Men vs. women is the least of it. With other variables controlled,
men and women in a population have the same mean IQ, but the
dispersion differs. The female bell curve is slightly narrower, so
women have fewer idiots and fewer geniuses among them. Where this
gets touchy is that it may do a better job than cultural sexism of
explaining why most of the highest achievers in most fields are male
rather than female. Equal opportunity does not guarantee equal
results, and a lot of feminist theory goes out the window.
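
The dispersion point itself is ordinary normal-distribution arithmetic. A small sketch, with the cutoff and the two standard deviations assumed purely to show the tail effect rather than taken from any dataset:

    # Illustrative Python sketch: same mean, different spread, different tails.
    # The cutoff and both standard deviations are assumed values.
    from statistics import NormalDist

    wider = NormalDist(mu=100, sigma=15)
    narrower = NormalDist(mu=100, sigma=14)
    cutoff = 145

    for label, dist in (("wider curve", wider), ("narrower curve", narrower)):
        share = 1.0 - dist.cdf(cutoff)
        print(f"{label}: {share:.5%} above {cutoff}")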

But male/female differences are insignificant compared to the real
hot potato: differences in the mean IQ of racial and ethnic groups.
These differences are real and they are large enough to have severe
impact in the real world. In previous blog entries I’ve mentioned the
one-standard-deviation advantage of Ashkenazic Jews over gentile
whites; that’s roughly fifteen points of IQ. Pacific-rim Asians
(Chinese, Japanese, Koreans etc.) are also brighter on average by a
comparable margin. So, oddly enough, are ethnic Scots — though
not their close kin the Irish. Go figure…

And the part that, if you are a decent human being and not a racist
bigot, you have been dreading: American blacks average a standard
deviation lower in IQ than American whites, at about 85. And
it gets worse: the average IQ of African blacks is lower
still, not far above what is considered the threshold of mental
retardation in the U.S. And yes, it’s genetic; g seems to be about
85% heritable, and recent studies of effects like regression towards
the mean suggest strongly that most of the heritability is DNA rather
than nurturance effects.

For anyone who believes that racial equality is an important goal,
this is absolutely horrible news. Which is why a lot of
well-intentioned people refuse to look at these facts, and will
attempt to shout down anyone who speaks them in public. There have
been several occasions on which leading psychometricians have had
their books canceled or withdrawn by publishers who found the actual
scientific evidence about IQ so appalling that they refused to print
it.

Unfortunately, denial of the facts doesn’t make them go away. Far from
being meaningless, IQ may be the single most important statistic about
human beings, in the precise sense that differences in g probably drive
individual and social outcomes more than any other single measurable
attribute of human beings.

Mean IQ differences do not justify making assumptions about any individual.
There are African black geniuses and Ashkenazic Jewish morons; humanity and
ethics demand that we meet each individual human being as an individual,
without prejudice. At the same time, group differences have a significance
too great to ignore. In the U.S., blacks are 12% of the population but
commit 50% of violent crimes; can anyone honestly think this is
unconnected to the fact that they average 15 points of IQ lower than the
general population? That stupid people are more violent is a fact
independent of skin color.

And that is actually a valuable hint about how to get beyond
racism. A black man with an IQ of 85 and a white man with an IQ of 85
are about equally likely to have the character traits of poor impulse
control and violent behavior associated with criminality — and
both are far more likely to have them than a white or black man with
an IQ of 110. If we could stop being afraid of IQ and face up to it,
that would give us an objective standard that would banish racism per
se. IQ matters so much more than skin color that if we started paying
serious attention to the former, we might be able to stop paying
attention to the latter.

UPDATE: An excellent summary of science relating to g is here.

Nov 14

Selecting for intelligence

Mike Smith relays an interesting possible explanation for the observed
statistical fact that American and European Jews have a mean IQ a
standard deviation higher than Caucasian gentiles:

During the period from ancient times to modern times, there was a
constant phenomenon of Jews converting to Christianity (there were
many social pressures to do so). In a nutshell, the idea is that the
lower-IQ Jews were statistically more likely to convert, as it freed
them from having to learn to read Torah. During the Middle Ages, it
was not worth the effort for most people to become literate; the
payback was not worth it. Books were rare and expensive, and learning
to read was no guarantee of getting ahead in life. Of course, people
like to do what they’re especially good at, and the higher-IQs among
the Jews did not find learning to read to be such a burden. As such,
they were statistically less likely to convert (and statistically more
likely to become fathers of many children in a culture that valued
intelligence.) It is worth noting that in ancient times, Jews were not
stereotyped as especially intelligent; that stereotype arose in the
Middle Ages.

This is a special case of one of my favorite Damned Ideas, originally
developed by John W. Campbell in the 1960s from some speculations
by a forgotten French anthropologist. Campbell proposed that the
manhood initiation rituals found in many primitive tribes are a
selective machine designed to permit adulthood and reproduction only
to those who can demonstrate verbal fluency and the ability to override
instinctive fears on verbal command.

Campbell suggests that all living humans are descended from groups
of hominids that, having evolved full-human mental capability in some
of their members, found the overhead of supporting the dullards too
high. So they began selecting for traits correlated with intelligence
through initiation rituals timed to occur just as their offspring were
achieving reproductive capacity; losers got driven out, or possibly
killed and eaten.

Campbell pointed out that the common elements of tribal initiations
are (a) scarring or cicatrizing of the skin, opening the way for
lethal infections, (b) alteration or mutilation of the genitals,
threatening the ability to reproduce, and (c) alteration of the mouth
and teeth, threatening the ability to eat. These seem particularly
well optimized for inducing maximum instinctive fear in the subject
while actually being relatively safe under controlled and relatively
hygienic conditions. The core test of initiation is this: can the
subject conquer fear and submit to the initiation on the basis
of learned (verbal, in preliterate societies) command?

Campbell noticed that the first-order effect was to shift the mean of
the IQ bell curve upwards over generations. The second-order effect,
which if he noticed he didn’t talk about, was to start an arms race in
initiation rituals; competing bands experimented with different
selective filters (not consciously but through random variation).
Setting the bar too low or too high would create a bad tradeoff
between IQ selectivity and maintaining raw reproductive capacity. So
we’re descended from the hominids who found the right tradeoff to push
their mean IQ up as rapidly as possible and outcompeted the groups
that chose less well.

It doesn’t seem to have occurred to Campbell or his sources, but
this theory explains why initiation rituals for girls are a rare and
usually post-literate phenomenon. Male reproductive capacity is
cheap; a healthy young man can impregnate several young women a day,
and healthy young men are instinct-wired to do exactly that whenever
they can get away with it. Female reproductive capacity, on
the other hand, is scarce and precious. So it makes sense to select
the boys ruthlessly and give the girls a pass. Of course if you push
this too far you don’t get enough hunters and fighters, but the right
tradeoff pretty clearly is not 1-to-1.

(This would also explain why humans are designed for mild polygyny,
1 to 3 sexual partners per male. You can spot this by looking at
where human beings are on various physical characteristics that
correlate with degree of polygyny in other primates — disparity in
average size between males and females, for example, is strongly
correlated with it.)

What Campbell did notice is that this theory of selection
by initiation would neatly explain one of the mysteries of human
paleoanthropology — how human beings got so smart so fast. The
differences between H. erectus and H. sapiens are not large in
absolute genetic terms (they can’t be, we share over 94% of our genome
with chimps) but they’re hard to credit given normal rates of
morphological change in mammals and only two million years to work
in. Something must have been putting hominids under
abnormally strong selective pressure — and Campbell’s idea
is that we did it to ourselves!

Now, I’m not sure I believe that Jews bootstrapped themselves up a
whole standard deviation in less than 2000 years, but if you apply
a similar idea to a longer timeframe it begins to look pretty
reasonable. (And Campbell did suggest that the Jewish practice of
infant circumcision had originally been a manhood rite.)
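
As a sanity check on the arithmetic only: the standard quantitative-genetics shorthand here is the breeder’s equation, response per generation equals heritability times the selection differential. The numbers below are assumptions chosen to show how modest per-generation pressure compounds, not estimates of anything historical:

    # Illustrative Python sketch: the breeder's equation, R = h^2 * S.
    # Heritability, selection differential, and generation count are assumed.
    heritability = 0.4       # assumed narrow-sense h^2
    selection_diff = 0.05    # assumed parental advantage, in SD units
    generations = 80         # roughly 2000 years at ~25 years per generation

    shift = generations * heritability * selection_diff
    print(f"cumulative shift: about {shift:.1f} standard deviations")
    # about 1.6 SD under these assumptions; halve any input and it halves too.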

Within my lifetime, I expect we’re going to have the ability to do
germ-line enhancement of human intelligence. I strongly suspect that that
will set off another arms race — because cultures that suppress
that technology will be once again doomed against cultures that do. And
this time, we’re smart enough to know that in advance…