On my favorite mailing list, it was written:
> Anyway, if you think someone who lives and breathes some field is missing some obvious point, they’re probably right and you’re probably wrong.
Generally I think this is true. However, I hereby submit the story of Eric and the Quantum Experts as a cautionary tale for all bright children.
Once, long ago in the 1970s, there was a bright young sprout named Eric who was exposed to the Schrödinger’s Cat thought experiment. There was explained to him the standard account of what happens.
Eric’s reaction was that the standard account seemed like obvious nonsense. His initial objection was that the account seemed to make mystifying, ungrounded assumptions about “observation”. Eric had learned about operational definitions from Alfred Korzybski and C.S. Peirce and Bertrand Russell, and he asked: What is operationally special about opening the box?
Eric pestered physics-literate people about this, and read some books. It did not take him long to discover that essentially the same objection had been raised (in a slightly disguised form) by a physicist named Wigner, who – alas – launched from it to land at an interpretation that seemed even crazier and less grounded than the standard one.
Try as he might, Eric could elicit no sense from what the “experts” had to say on the matter, just a load of snottiness about how you have to understand the math. However, Eric was, at the time, in training to become a theoretical mathematician and knew that sort of bullshit when he smelled it.
Eventually Eric managed to corner an unusually bright and lucid physicist who said, essentially: “There’s nothing special about observation. You ‘observe’ a quantum system whenever you bounce a photon off it. The key thing about Schrodinger’s box is that it’s a closed system until the experimenter opens it.”
This greatly relieved Eric, because it disposed of all the mystifying nonsense about human observers and consciousness that had somehow accreted around the physics. Eric, you see, was also an experimental mystic – a student (though not follower) of Zen and a third-degree Wiccan of anti-religious type – and he knew that kind of bullshit by its smell, too.
However, operationally equating “observation” with “any interaction between previously separate wave functions” did not solve Eric’s problem. It simply moved the problem to a different level.
Eric’s new question was: “OK, then. Why don’t the walls of the box observe the cat? It’s like, emitting thermal radiation, yes? Why doesn’t the cat’s hindquarters observe the cat’s forequarters?”
The question Eric was really getting at is this: If “observation” is some mystically special moment not captured in wave-function interactions, we’re not doing science any more but miracles; we might as well collapse into an occasionalist theology in which God makes every sparrow fall by observing its wave function. Game over.
To be doing science – that is, to construct confirmable causal accounts with predictive value – we must assume that “observation” is not special. But under this assumption, Eric sees no reason to believe that “mixed states” (e.g. quantum superpositions of multiple classical states) ever persist for more than time epsilon even in small ensembles of particles. Even Schrödinger’s bacterium would be way too large to ever be in a mixed state, let alone his cat!
Either way, there appears to Eric to be a big fucking hole in the Schrödinger’s Box thought experiment. A dumb, obvious hole. It amazes Eric – it completely confounds and gobsmacks him – that physicists do not seem to get this.
When he presses the issue, the response is essentially “Shut up, kid. It’s a thought experiment; you’re not supposed to ask these questions.” Either that, or the perennial favorite “You have to understand the math.” Eric has become a programmer rather than a mathematician-in-training at this point in the story, but the smell of that bullshit has changed not at all.
Eric is deeply frustrated. Eventually, reluctantly, he concludes that there must in fact be some flaw in his reasoning invisible to him. It seems beyond the bounds of plausibility that every quantum physicist in the world is wrong and he is right. They have Nobel prizes: all he has is dreams and some interesting friends and a one-room walkup on Sansom Street. It is about 1978 or 1979.
Many years pass. Eric becomes rather successful in his field; in fact, in the late 1990s he develops something of a reputation for asking simple but devastating questions that can up-end entire disciplines. A certain measure of fame duly follows upon this. But he still has no answer to his Schrödinger’s-cat question, and occasionally it still bothers him.
Then, one day shortly after the century changes, Eric is reading a science magazine and stumbles over an account of decoherence theory:
> The effect of decoherence on density matrices is essentially the decay or rapid vanishing of the off-diagonal elements of the partial trace of the joint system’s density matrix, i.e. the trace, with respect to any environmental basis, of the density matrix of the combined system and its environment. The decoherence irreversibly converts the “averaged” or “environmentally traced over” density matrix from a pure state to a reduced mixture; it is this that gives the appearance of wavefunction collapse. Again this is called “environmentally-induced-superselection”, or einselection. The advantage of taking the partial trace is that this procedure is indifferent to the environmental basis chosen.
Eric reads something quite similar to this in his pre-Wikipedia paper source, his jaw drops open, and he realizes “Holy leaping fuck, I was right all along!”. That phrase “indifferent to the environmental basis chosen” means, exactly, that it doesn’t matter whether you choose an account in which the walls of the box observe the cat or in which the cat’s hindquarters observe the cat’s forequarters; the off-diagonal elements of the matrix still vanish rapidly, and the mixed state doesn’t last for more than time epsilon.
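Schematically: for a two-state system in the superposition a|0> + b|1>, coupling to an environment drives the reduced density matrix toward

\[
\rho(t) \approx \begin{pmatrix} |a|^2 & a\,b^{*}\,e^{-t/\tau_D} \\ a^{*}b\,e^{-t/\tau_D} & |b|^2 \end{pmatrix}
\]

(the exponential decay and the decoherence time \(\tau_D\) here are textbook idealizations, not anything from the quoted passage; the point is that \(\tau_D\) is absurdly short for any macroscopic object). The off-diagonal terms are the superposition; once they are gone, what remains is an ordinary classical probability mixture, whichever environmental basis you chose.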
Upon further investigation, Eric learns that the groundbreaking work on this theory was done in the early 1980s, shortly after young-sprout-Eric had given up on the question in frustration. It will not actually reach popular accounts until Penrose’s 2004 book *The Road to Reality*, a few years after Eric’s moment of jaw-drop.
Eric realizes that if he’d had a bit more courage and self-discipline, and moved from mathematics into physics rather than programming, he would have been rather likely to have invented decoherence theory himself and become a physicist renowned for kicking the props out from under the Copenhagen Interpretation. Which would have been, all things considered, much niftier and more fundamental than becoming a hacker renowned for kicking the props out from under closed source.
What is our lesson for today, children?
If you think you have spotted something fundamental that all the experts missed, don’t ignore it. Because, after all, you might be right.
Maybe a viable corollary to the principle you quoted would be “If you think you’ve spotted a flaw in an expert’s reasoning, but no expert is willing to put you straight properly, you’re probably on to something.”
wicca has degrees? (or am i missing some subtle irony?)
No irony. Roughly speaking, 1st = aspirant, 2nd = teacher, 3rd = qualified priest or priestess. They are gated by particular skills and initiatory experiences which vary across lineages but are broadly consistent. The nomenclature was probably swiped from the Freemasons.
How can you tell when you think you’ve spotted a legitimate flaw in an expert’s reasoning rather than your own lack of knowledge? Being contrary doesn’t necessarily mean you’re right.
>How can you tell when you think you’ve spotted a legitimate flaw in an expert’s reasoning rather than your own lack of knowledge? Being contrary doesn’t necessarily mean you’re right.
Well, that’s the big question, isn’t it? I can tell you this, though: I sure wish I’d been more contrary back in 1978.
> How can you tell when you think you’ve spotted a legitimate
> flaw in an expert’s reasoning rather than your own lack of
> knowledge? Being contrary doesn’t necessarily mean you’re right.
I think one key way to know is to ask the expert to explain why you are wrong. That doesn’t mean you’ll be able to understand the answer, but you can often tell by the handwaviness of the answer whether you are on to something.
Honesty about degrees of ignorance is a good marker for expertise.
Inability to explain something is often a marker for BS. (Not always of course, but often.)
I think one metric worth considering as to how expert a person truly is would be to measure their degree of why-ness. What I mean is if you ask a question, then keep asking why, how many times can you do it before they run out of air.
Why on earth is a renowned physicist niftier than a renowned hacker? IMO, the hacker is WAY cooler. I suppose it depends which you find more interesting.
At any rate, I for one am grateful for your contributions to hacking and the open source movement.
> What I mean is if you ask a question, then keep asking why, how many times can you do it before they run out of air.
Conversely, if you correctly answer enough “why” questions people will promote you to “expert” whether you want the title or not. :)
Or as Mark Rosenfelder put it, in the context of economics: “A layman should be cautious, but not over-cautious, when disagreeing with experts. Mere ignorance isn’t very attractive, and to be sure where the experts are not is a sign of quackery.” The trouble, of course, is the effect whereby non-experts consistently over-rate their own competence.
Can you perhaps put that decoherence stuff into layman’s terms, cats and boxes and guns? And what’s the end result then? Is there something special about the human mind as an observer or not?
On the larger topic: well, Einstein was just a bright guy sitting bored in an office in Geneva, amusing himself with thought experiments, well-read in physics but by no means an experienced expert in it at the time when he came up with the basics of special relativity. Sometimes a fresh point of view, untainted by the accepted truisms of the field, can be quite useful.
BTW, thought experiments are IMHO underrated these days. They played quite an important role in the history of science. E.g. Galileo didn’t need to throw balls of wood and metal from the Tower of Pisa to figure out that heavier objects don’t fall any faster than light ones. He simply had this thought experiment: I throw a pair of shoes up in the air and measure how long they take to fall. Then I tie the pair of shoes together. Will they fall any faster? Well, no. Q.E.D. AFAIK he did that famous experiment at the Tower of Pisa as a demo for convincing others, not as a way to figure it out.
Another point. These were real experts, no doubt, but there are a lot of folks who just look like experts. For example, if you ever see a nicely framed certificate on the wall of someone’s office that says Member of the New York Academy of Sciences – http://www.nyas.org – or if you see it in a CV, you can be almost sure he is a quack, for you don’t have to do anything to be a member other than pay $125 a year. No serious scientist would ever brag about it by putting it on the wall or in a CV.
>Can you perhaps put that decoherence stuff into layman terms, cats and boxes and guns?
Um…you mean I didn’t already? The environment of any quantum system observes the system. There are no mixed states, except that the entire universe could be in one.
>And what’s the end result then? Is there something special about the human mind as an observer or not?
In a word, no. The experimenter’s mind is just a particle ensemble like the box and the cat.
“The environment of any quantum system observes the system.” Which means, ultimately, the waveform is always collapsed, we just don’t always know it?
>Why on earth is a renowned physicist niftier than a renowned hacker? IMO, the hacker is WAY cooler. I suppose it depends which you find more interesting.
I’m not sure which I find more interesting, but I know which one is more fundamental. Hackers just push bits around; physicists grapple with the nature of reality itself. Also, despite my one big insight in the 1970s, I think physics is more difficult than hacking, which even by hacker standards makes it cooler to be a physics wizard.
>Which means, ultimately, the waveform is always collapsed, we just don’t always know it?
Er, you mean the state vector, not the waveform. And that is disputable. The decoherence literature insists on a distinction between actual collapse and something they call “apparent collapse” or “environmental selection”, but this smells to me like a political language maneuver intended to allow Zurek and his gang to avoid rumbling with the Copenhagen Interpretation leather boys.
The same odour of “Crottes de Taureau” seems to surround climate “science” these days. It may well be true that the earth is heating up uncontrollably thanks to human-created CO2, but pretty much every word in that statement is debatable. Unfortunately climate scientists, when pushed, seem to resort to similar appeals to authority, e.g. “it’s the consensus view” or “it’s standard statistical analysis in this field”.
Hasn’t thought invented the division between the observer and the observed? When it comes right down to it – I mean right down to the ground – how can there be a distinct break?
Of course, the division has practical use. For everyday living – me here and you there and that tree there and this ant here – the division is useful. It’s not possible to function without thought distinctions. But don’t we get caught up in this fragmentation? This is a problem, no? Next thing you know, we begin to get carried away with absurd worries over death. And instead of perceiving reality we cover our fear by organizing religion, or worshiping the state, or some other escape.
I think the climate science got distorted by political motivations on both sides. Science requires a kind of equanimity or indifference: an approach in which a given piece of research may yield result A, B or C, and the researcher has no interest or desire whatsoever in trying to get result A; he is only interested in getting the correct result, i.e. he focuses on using a correct method and is indifferent to what it will yield.
In other words, the required attitude of the scientific researcher is a kind of mindset that’s very different from our everyday mindset, which is always coloured by likes, dislikes, convictions and interests. The scientist has to force himself not to think like a normal human but a bit like a machine. (One of the earliest definitions of the scientific method, by Roger Bacon, was exactly that: to process information disinterestedly, as if by a machine.)
This just doesn’t work in questions where political convictions and economic and career interests are at stake in getting result A or B.
In such cases scientists switch back to functioning like normal humans – and that means in such cases they are just as unreliable as the rest of us.
You weren’t the only one to have these reservations:
http://plato.stanford.edu/entries/qt-epr/
As I understand it, CI lives on because it is a useful model that helps people understand things like wave/particle duality, and provides a student with some kind of model to wrap around the math.
Some people just stop right there. “Hey, if I think about things in this way, I can understand them a bit better, and do better on my next exam. This model in my head sure helps me get along in the world. Why mess with it?”
And then after a while you pick up the lingo and just start expressing everything in mathematical terms, and leave the CI behind. It’s no longer helpful and you don’t use it, and you stop thinking about it. So when someone comes and asks, you access the old buffer, and give them an answer that doesn’t hold much water.
Lee Smolin talks about this phenomenon more eloquently than I do in this talk:
http://www.cbc.ca/quirks/media/2006-2007/mp3/qq-2006-09-23e.mp3
And here is a relevant quote from a New Yorker article about some of the same Lee Smolin stuff:
http://www.newyorker.com/printables/critics/061002crat_atlarge
===quote===
…Smolin furnishes the more definite answer. The current problem with physics, he thinks, is basically a problem of style. The initiators of the dual revolution a century ago—Einstein, Bohr, Schrödinger, Heisenberg—were deep thinkers, or “seers.” They confronted questions about space, time, and matter in a philosophical way. The new theories they created were essentially correct. But, Smolin writes, “the development of these theories required a lot of hard technical work, and so for several generations physics was ‘normal science’ and was dominated by master craftspeople.” Today, the challenge of unifying those theories will require another revolution, one that mere virtuoso calculators are ill-equipped to carry out. “The paradoxical situation of string theory—so much promise, so little fulfillment—is exactly what you get when a lot of highly trained master craftspeople try to do the work of seers,” Smolin writes.
The solution is to cultivate a new generation of seers. And what, really, is standing in the way of that? Einstein, after all, didn’t need to be nurtured by the physics establishment, and Smolin gives many examples of outsider physicists in the style of Einstein, including one who spent ten years in a rural farmhouse successfully reinterpreting general relativity. Neither Smolin nor Woit calls for the forcible suppression of string theory. They simply ask for a little more diversity. “We are talking about perhaps two dozen theorists,” Smolin says. This is an exceedingly modest request, for theoretical physics is the cheapest of endeavors. Its practitioners require no expensive equipment. All they need is legal pads and pencils and blackboards and chalk to ply their trade, plus room and board and health insurance and a place to park their bikes. Intellectually daunting as the crisis in physics may be, its practical solution would seem to demand little more than the annual interest on the rounding error of a Google founder’s fortune.
===end quote===
Bottom line is that group think is indeed dangerous, and one should watch out for it. Especially, in my experience, when considering working for startups.
Eric,
First rule of compiler bugs: It’s probably not a compiler bug.
Of course sometimes a neophyte tickles a bug in GCC. But until someone skilled in the arts can assure us that yes, this particular standards-compliant line of code causes the SSA frobnosticator to overrun its zorblatts and yield bogus object code, it’s safer for the neophyte to assume that his program is at fault.
Shenpen,
Horseshit. The anthropogenic global warming model has been supported by decades of theory and extensive research. The 1950s is when physicists first reasoned that the amount of CO2 we put into the atmosphere will almost surely exceed what the oceans can absorb, upset the balance of the carbon cycle, trap excess radiation and cause the earth to heat up. This hypothesis has been confirmed by 800,000 years’ worth of ice-core data. Some nonscientific hysterical lefties have exaggerated the effects but the model is sound. What’s bothersome is the anti-intellectual right wing who, sensing a threat to unfettered capitalism, have begun to spread FUD about the people out there in the field doing the research.
> This hypothesis has been confirmed by 800,000 years’ worth of ice-core data.
Interestingly enough, that data shows huge variations in CO2 levels that aren’t associated with the temperature changes supposedly driven by CO2.
And if we go further back, we find significantly higher CO2 levels without significantly higher temperatures.
Googling “ice core co2 measurements” finds lots of cool stuff.
> The 1950s is when physicists first reasoned that the amount of CO2 we put into the atmosphere will almost surely exceed what the oceans can absorb, upset the balance of the carbon cycle, trap excess radiation and cause the earth to heat up.
Read is rewriting history. Global cooling was the consensus. There were even predictions that the first Iraq war would cause nuclear winter.
And we still don’t understand clouds.
Eric, you may still have a chance to revolutionize physics, since decoherence by itself may not completely solve the problem. Penrose doesn’t think so; I don’t have my copy of *The Road to Reality* handy, but the Wikipedia page on decoherence contains the following footnote:
[quote]
Roger Penrose, The Road to Reality, pp. 802-803: “…the environmental-decoherence viewpoint… maintains that state vector reduction [the R process] can be understood as coming about because the environmental system under consideration becomes inextricably entangled with its environment. […] We think of the environment as extremely complicated and essentially ‘random’ […], accordingly we sum over the unknown states in the environment to obtain a density matrix […] Under normal circumstances, one must regard the density matrix as some kind of approximation to the whole quantum truth. For there is no general principle providing an absolute bar to extracting information from the environment. […] Accordingly, such descriptions are referred to as FAPP [For All Practical Purposes]”
[/quote]
That bit about “no general principle providing an absolute bar to extracting information from the environment” is the key: as I understand the current state of decoherence theory, in principle, if we were smart enough, we could figure out a way to run an experiment that would allow us to recover from the environment the information about the original system (e.g., the cat) that showed it to be in a superposition (say |dead> + |alive>) — i.e., we could “reverse” the decoherence. (BTW, I assume that when you use the term “mixed state” you really mean “superposition” as I’ve just used the term here.)
Since nobody really wants to believe this, Penrose argues that there needs to be an additional physical process that “makes the decoherence permanent”, so to speak — in other words, after this process takes place (which would still have to be within time epsilon), the decoherence is no longer FAPP, but truly irreversible. In *The Emperor’s New Mind* he called it Objective Reduction (OR) of the state vector; I can’t remember if he still uses that term in *The Road to Reality*, but he seems to believe it might have something to do with quantum gravity.
Jeff:
Please find me a source cite from the 1950s talking about Anthropogenic Global Warming.
Please find me a climate model that can take the input data from 1908 and accurately replicate the climate through 2008, without data massage.
When it comes to climate science, I have much more respect for people who do the following:
1) Gather the data and tabulate it for all to use. Not “edited” or “massaged” sets. (See Watts critique of Mann and Hansen)
2) Show their data gathering techniques (See the web sites showing how many climate gathering data sites are not in compliance with the guidelines for instrument placement.)
3) Acknowledge that this is a complex system and we don’t know everything there is to it. The correlation between sunspots and cooling is compelling. We don’t know why there’s a correlation (though there are hypotheses, mapping solar magnetic field strength and cloud cover), but the correlation is stronger than the correlation between cigarette smoking and lung cancer…and it gives predictions that are accurate with a three-month lead time.
I know of no-one who works with the actual data gathering instruments who thinks climate modeling software is more than a highly effective grant money generator. (Interestingly, they get their funding nipped for instrument upgrades because it’s a ‘settled issue’, and heaven help them if they throw data that goes against the consensus.)
There are extant USDA recommendations that next year’s spring planting should shift to cold-tolerant crops in Minnesota, Iowa and Wisconsin, Nebraska and the Dakotas. Winter wheat growth and late soybean tills are down 10% this year. Expect more rye.
It is a general principle that when you start out with an impossibility, you can get other impossibilities that come up. This is not interesting, logically.
Schrodinger’s cat starts out with an impossible box. A box sealed that thoroughly against quantum effects is, arguably, not even in our universe. On the inside of the box, the cat is probably 100% dead after all the virtual particles in resonance with the cavity become briefly real enough to obliterate anything inside the box, though I think that aspect of QM came after this idea. (See also the “quantum censor” idea as applied to time machines by Hawking; the universe seems to have it out for this sort of unreal situation.)
The real failure, IMHO, is not to point out that this is just an illustration of one particular point and is, mathematically, a sketch cartoon and not a real situation. If you want to argue about the ideas this sketch is illustrating, you have to move back down to the quantum level again. In the end, I think this sketch has hurt more than it has helped anybody.
It seems to me that decoherence is exactly like the CI except that the observer is the environment (the forest heard the tree fall). The transactional interpretation was published in 1986 and avoids the observer role. I’ve put a link in the “Website” field. I stumbled on to this interpretation when I was trying, unsuccessfully, to get beyond the popular, magical accounts of Quantum Computing.
When I first started down the path to hackerdom, my mind was much mildewed with Hollywood archetypes like Matthew Broderick and Sandra Bullock, and the Hacker’s Manifesto. Your Hacker Howto is still the first Google result for “How to become a hacker,” and it has rescued me from slovenliness of thought and anti-social behavior. It has vouchsafed to me the forbidden slackness. In a word or two (or 25 or 26), if you’d become a physicist, I would still be in my basement, screaming about l33tness in chat rooms, and I wouldn’t even be interested in physics. (= hell on earth. :) So in a round-about way, you’ve contributed to science after all!
This has been a testimonial.
>Schrodinger’s cat starts out with an impossible box.
Your objection is sound with respect to my first question, but not to my second: why don’t the cat’s hindquarters (inside the box) observe its forequarters (inside the box)?
>It seems to me that decoherence is exactly like the CI except that the observer is the environment (the forest heard the tree fall).
There is a key difference. Most versions of the CI require a conscious observer to collapse the state vector. In a decoherence-theory account, the forest doesn’t have to be conscious.
It’s MSM and therefore to be taken with a HUGE grain of salt, but it seems to be pointing in generally the direction that I meant:
http://www.timesonline.co.uk/tol/comment/faith/article5324234.ece
>Eric, you may still have a chance to revolutionize physics, since decoherence by itself may not completely solve the problem.
See my next post.
>This has been a testimonial.
Well, thank you. That is what the Hacker HOWTO was intended to do; it’s nice to hear about it working.
> but not to my second: why don’t the cat’s hindquarters (inside the box) observe its forequarters (inside the box)?
That’s part of what I mean by “sketch cartoon”; by the time you’re talking about pieces of the cat, you’ve left the cartoon. Part of the point is that the cat is quantum mechanically a single thing, but… that’s not a cat anymore.
No, there are no mixed states, but there is a probability distribution for all potential states.
I like to think of it this way. You put a 6-pack of beer in your fridge, you leave the kitchen, and some time passes. You return, hoping for a nice, frosty beer.
Until you open the door to the fridge, any number of beers remaining are possible (your roommates might have quaffed one or all, or perhaps replaced what they quaffed.)
Regardless, Schrödinger never sought to promote an idea of simultaneously dead-and-alive cats as a serious possibility; quite the reverse.
Quoting from the original Schrödinger letter:
> It is typical of these cases that an indeterminacy originally restricted to the atomic domain becomes transformed into macroscopic indeterminacy, which can then be resolved by direct observation. That prevents us from so naively accepting as valid a “blurred model” for representing reality.
In this, you, like many others, were (apparently) misled.
>No, there are no mixed states, but there is a probability distribution for all potential states.
Yeah, the term of art for this premise seems to be the “ensemble interpretation”; it is also compatible with the “consistent histories” interpretation, one version of which you arrive at via decoherence theory.
>In this, you, like many others, were (apparently) misled.
Most accounts of the interpretation of QT put the “misleading” version right up front; that’s certainly the sort of thing I was reacting to in the 1970s. I’ve done some on-line research into the matter recently, and it appears that (a) Schrödinger and Born may have changed their minds about this, possibly more than once, and (b) as a result, there are a couple of different variants of CI floating around; some are subjectivist, assigning a privileged role to the observer, and at least one is not.
There is also doubt, however, that any non-subjectivist version can be reconciled with what they actually said. In the passage you quote, for example, the phrase “resolved by direct observation” can be read to smuggle in the subjectivism; certainly that’s how Wigner read it, and why he invented the Wigner’s Friend thought experiment.
Perhaps the problem with experts may be the inability to express themselves clearly. It would have saved me a lot of trouble if programming manuals would just explain objects in object-oriented programming as simply hierarchical collections of functions. (At least, that’s my mental model.) I still can’t get a straight answer as to what monads are in Haskell, which is a pity because the language otherwise tastes good to me.
David: let me try. Monads are alternative implementations of the API that function composition obeys. If you have a (possibly empty) chain of functions such that their start and end points match up, you can compose them to get a single function, and it doesn’t matter where you put the brackets; similarly, if you have a (possibly empty) chain of functions with types Ai → T Ai+1, where T is some monad and i = 0..n, then you can “bind” them into a single function A0 → T An. For instance, if T is the Maybe monad, a function A → Maybe B is a function that takes an A and returns either a B or an error; binding a chain of such functions simply propagates any errors that arise along the chain, and otherwise composes the functions. It’s a very natural notion once you get it, but it takes most people a while to get their heads around: I found meditation on the List monad to be very helpful (as was exposure to monads in their original setting of universal algebra, where they’re presented rather differently).
Why is this useful? Well, it allows for some very general and useful library routines, and it allows Haskell programmers to quarantine their functions with side-effects from the rest of the program. This is apparently a Good Thing.
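To make the Maybe case concrete, here is a tiny sketch; safeDiv and safeSqrt are names I’m making up for illustration, not standard library functions:

    safeDiv :: Double -> Double -> Maybe Double
    safeDiv _ 0 = Nothing               -- division by zero is the "error" case
    safeDiv x y = Just (x / y)

    safeSqrt :: Double -> Maybe Double
    safeSqrt x
      | x < 0     = Nothing             -- no real square root of a negative
      | otherwise = Just (sqrt x)

    -- Binding the chain with >>=: a Nothing anywhere propagates to the
    -- end; otherwise this is just ordinary function composition.
    calc :: Double -> Double -> Maybe Double
    calc x y = safeDiv x y >>= safeSqrt

    -- calc 1 4 == Just 0.5
    -- calc 1 0 == Nothing   (the error short-circuits the rest of the chain)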
I think the reason most manuals don’t explain objects as hierarchical collections of functions is that most people don’t think of them that way! In prototype-based languages like Javascript, it’s not even true.
Miles: Thank you, the explanation of monads makes slightly more sense. ;-) Someday I’ll have an epiphany about a difficult concept, the way I usually do. I finally grokked recursive tree-like data structures when I happened to look at real tree branches up close and saw that there were smaller copies of the same tree branch.
David, maybe this helps (no idea, haven’t yet touched Functional Programming):
“So yes, several GoF design patterns are rendered redundant in FP languages, because more powerful and easier to use alternatives exist.
But of course there are still design patterns which are not solved by FP languages. What is the FP equivalent of a singleton? (Disregarding for a moment that singletons are generally a terrible pattern to use)
And it works both ways too. As I said, FP has its design patterns too, people just don’t usually think of them as such.
But you may have run across monads. What are they, if not a design pattern for “dealing with global state”? That’s a problem that’s so simple in OOP languages that no equivalent design pattern exists there.
We don’t need a design pattern for “increment a static variable”, or “read from that socket”, because it’s just what you do.
In (pure) functional languages, side effects and mutable state are impossible, unless you work around it with the monad “design pattern”, or any of the other methods for allowing the same thing.”
From Stack Overflow.
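Here, for what it’s worth, is roughly what that monadic “increment a static variable” looks like (a sketch using the mtl library’s Control.Monad.State; the name tick is made up, and I may have details wrong since, as I said, I haven’t really touched FP):

    import Control.Monad.State

    -- The FP stand-in for a mutable counter: the "variable" is threaded
    -- through the computation by the State monad instead of living in a
    -- global.
    tick :: State Int Int
    tick = do
      n <- get                -- read the current state
      put (n + 1)             -- write the incremented state
      return n                -- hand back the old value

    -- runState (tick >> tick >> tick) 0 == (2, 3)
    -- i.e. the last tick returns 2 and the final counter value is 3.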
“objects in object-oriented programming as simply hierarchical collections of functions”
Interesting. When I first learned about them at school I just accepted the real-word examples they offered, like a toaster, that has states (has bread, does not have bread, cold, hot) and behaviours (toast bread, eject bread).
Then later on I realized that it’s fine for the original purpose of OO (simulation) but most of what we do isn’t simulation, so my next paradigm was that it’s simply a hash-table where some of the values are variables and some are pointers to functions.
Then later on I realized it’s a good paradigm for OO if it’s a simple one and if you build it all yourself, but isn’t very practical for most purposes I really use it for. And then it dawned on me what’s wrong with 99% of the Java and C# class library tutorials I find on the net: they just use class libraries the way they are. They just instantiate the objects and call their methods and that’s all they do.
And that’s wrong: the whole point is to subclass the class libraries so that the ADOTablerReader (or whatever it’s called, I forget) library class, where it takes 10 lines to read an Excel table, becomes in your subclass an ExcelTableReader where it takes only 2 lines or 1 to read it.
So if you subclass them then the whole verbose, clumsy crap that you can read in the class library tutorials suddenly becomes elegant and useful. So at the moment my _practical_ definition of OO is to find some useful class libraries and subclass them in a useful and elegant way, and it’s about as un-CS and untheoretical as it can get, but this is how it works well.
It’s a progression from abstract theory to a down-to-earth, practical way of thinking, from general, almost metaphysical theories up in the clouds to getting shit done. That’s roughly what happened to me in political philosophy too…
Jeff:
> First rule of compiler bugs: It’s probably not a compiler bug.
And what about those of us who hack on self-hosting compilers, eh?
Then the bug is probably in your compiler, not the compiler you used to bootstrap your compiler. :)
I felt like blogging about this but at the same time it’s a response/comment to this:
http://ljuwaidah.blogspot.com/2008/12/great-people-seriously-this-time.html
I suspect a physics professor who didn’t take speculations about quantum mechanics seriously had gotten into a furious debate the preceding term with someone who claimed that quantum mechanics provided an explanation for the paranormal. Every innovation is surrounded by other ideas that sound equally strange but turn out to be nutty.
Why all the f-bombs, though? I would have liked to send this to some of my young students but now I have to copy and edit it. How bout a little decency.
For a reasonably sane view of quantum mechanics (and a bunch of very witty and enjoyable writing to boot), see Nobel laureate (in physics, 1998) Robert Laughlin’s A Different Universe: Reinventing Physics From The Bottom Down. He performs the smackdown on the Copenhagen Interpretation far better than anybody else has to date (at least, of which I’m aware).
The Schrodinger’s cat thing always seemed like complete nonsense to me, until I realized the whole point of it was to illustrate the stupidity of Heisenberg’s uncertainty principle. Apparently Schrodinger was no fan of Heisenberg or his theories.
This somehow got turned around into proving the concept it ridicules, much as the current cooling trends have “proven” the wisdom of the warming models because they allowed for “significant decadal variations” or however the hell they put that…
>Why all the f-bombs, though?
Because, in this case, I wanted to make an actual *point* of violating the usual standards of scholarly discourse. I’m surprised you didn’t understand this.
That’s the way it has always been in science. You have scientists who are honest and encourage exploration. Then you have people, like yourself, who are so caught up in themselves that they inhibit inquiry. The F-bombs reveal the ridiculous anger that you are harboring for human nature. Shtuff happens…
You do realize, Eric, that Schrodinger posed the box question to show why the standard account isn’t quite right. He posed it as a way of exposing a paradox. His thing was: where does observation end (as well as your question of what defines observation), and what level of consciousness is required to collapse the probability wave?
thank you, eric. had EXACTLY the same objections myself.
i’ll now go and drill into decoherence theory when i’m properly in the mood for physics thoughts again.
similarly: i’ve long objected to the Doppler theory for stellar red-shift, particularly in light of:
(a) the varying speed of light even in our fixed gravity well, just for different concentrations and structures of matter. if i can create a black hole by stirring a liquid, and if the maths of rapid spinning not only extrapolates to the same external behaviour as a black hole but also matches the gravitational anomalies experienced by our intra-stellar probes’ slingshotting manoeuvres, perhaps it’s not as simple as we like to think.
(b) the wild extrapolations leading immediately to universe-redefining conclusions such as big-bang and dark matter and then dark energy — all merely outcomes of theory, not observation.
(c) light’s speed being wildly variable with mass, and the overwhelming bulk of mass in our gravity well being due to forcefields rather than particles (ie: QCD’s strong nuclear force), and the ongoing discovery of intra-stellar forcefield after forcefield (did you know that twice a year there’s a plasma tunnel reaching from the sun to the earth?), and the extraordinarily limited/restricted nature of our “reality” observations in our tiny little narrow-range temperature and gravitational/pressure and forcefield bubble.
k wood: that was exactly my understanding of the cat experiment: a thought experiment to show that the physics didn’t make sense at the macroscopic level. I didn’t get to the understanding alone, alas; I read it in some book, possibly Pratchett’s first ‘Science of Discworld’.
adriano: Unfortunately the Copenhagen view has become the de facto theory because Niels Bohr was such a strong personality and hammered it in; then most physicists just accepted it as correct, with holes. There had been some theories forwarded in the 50’s by physicists an ocean away from him, and now there are other theories being forwarded – string theory being one of them. Unfortunately string theory has some serious issues as well, like how do we test it, and if we could test it, how do we know there’s not something else that makes up strings.
I think, Eric, that the observation you’re making here is the same one Ted Woodward did in the 40s, which everyone gets wrong:
When you hear hoofbeats, don’t assume they have to be coming from horses.
http://baylink.pitas.com/20090113.html#ZEBRAS
Hey, very interesting read. I thought I’d mention some things even though I know I’m coming to the discussion a bit late.
The first thing is to say I don’t believe you actually missed an opportunity here. Schrodinger’s cat is not really a paradox about measurement; that would more so be the thought experiment called “Wigner’s friend”. (There are several different thought experiments in QM which are meant to reveal different parts of the theory.)
Schrodinger’s cat is more about the macroscopic applicability of the concept of coherent superposition. Quantum Mechanics is, in a sense, a generalisation of probability theory. However there is a new type of probabilistic mixing not present in classical probability. In classical probability, states are mixed by ignorance. I don’t know if something is one way or another, hence I use probability. In quantum mechanics this old style of probability due to ignorance is called incoherent mixing. The new feature of QM is coherent mixing. This is a new form of probability which allows previously impossible correlations between observables. For instance, entanglement isn’t taken to be a form of nonlocal influence. Rather the two distant particles, due to coherent mixing, have correlations impossible in a classical probability theory. Since correlation is not causation, however, this doesn’t imply they’re actually influencing each other, just that they are correlated.
The basic philosophical problem of QM is that this new form of probability has no interpretation. Schrodinger’s cat is basically pointing out that this new form of probabilistic mixing never occurs for macroscopic objects, it’s not really a direct paradox of measurement. For that see Wigner’s friend or Vaidman’s bomb.
Decoherence was an attempt to explain this by showing that as the thermal nature of an object increases, the old classical probability immediately begins to dominate. The origin of thermal fluctuations is the environment.
This can be related back to measurement, in that since measuring devices are very thermal, different dial readings on the experimental apparatus are mixed incoherently and hence reflect actual ignorance, not coherent superposition of the device.
The reason people often say you need to know the maths is that there are really only two ways to explain this. One is if you already know C*-algebras, for mathematicians. The other is if you take physicists’ courses on measurement theory.
The basic idea behind decoherence has been known for a long time.
Not really sure how well I explained that. Maybe it would be better to briefly explain how decoherence fits into the Copenhagen interpretation.
Let’s say I have a dial which measures whether a particle is spin up or spin down. This is supposed to reflect the particle being observed. The dial can be pointing up or down, let’s say. Now we know from QM that a particle can be superposed as “up” + “down”. This immediately leads into the dangerous territory of the measurement dial being “up” + “down”, which we know is impossible. However decoherence comes along and says don’t worry: the “up” + “down” for the measuring device just means “up or down, but I don’t know”. That is, it’s just good old classical probabilistic ignorance, not the quantum superposition of up and down found in the particle.
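In symbols, this is the standard two-by-two illustration (a textbook idealization, not specific to any particular apparatus): the coherent superposition “up” + “down” and the incoherent ignorance mixture have different density matrices,

\[
\rho_{\mathrm{coherent}} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}
\qquad
\rho_{\mathrm{ignorance}} = \frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}
\]

in the up/down basis. Decoherence is the process that rapidly drives the off-diagonal 1s to 0 for anything as warm and complicated as a dial, turning the first kind of matrix into the second.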
Sorry to be explaining things, I just know how annoying it is when you can’t get a good source to explain this stuff.
Who can explain an infinite universe?
http://en.wikipedia.org/wiki/Infinity#Physical_infinity
I think I have, and explained what gravity is:
http://www.coolpage.com/commentary/economic/shelby/Mass-Entropy_Equivalence.html
Someone wrote in email:
> An interesting corollary is that the more
> insane and aberrated a being gets, the more he imitates the solidity of
> matter.
I replied as follows…
And that is a very astute observation.
Doctors can promise/goal a material static (God only does for the spirit), and in the process guarantee insanity or health addictions.
There is an analog that generalizes to any kind of insurance. Insurance guarantees the failure of the entire group, because it attempts to replace God (nature, randomness) with an assurance that the entire group will pay for their erroneous notion that the universe is trending towards more order. The underwriter merely parasitizes the group’s futile attempt to stop randomness. Since 1856, thermodynamics has stated that the trend of the universe is towards more disorder, and I theorize that disorder explains what infinity is:
http://www.coolpage.com/commentary/economic/shelby/Mass-Entropy_Equivalence.html
Infinity (at edge of universe) is 0 mass & 0 energy, i.e. nothing material. That is what gives a physically unreachable boundary, i.e. infinite.
Haha, physicists' guilt, it's like Catholic guilt! There's such an obligation that physicists put into people to 'serve the community', it seems almost statist.
I remember my friends telling me I shouldn't leave physics and math for computer science, because physics and math were fundamental and computer science wasn't. There's also some sort of guilt that if you're good at physics, you're not supposed to leave. I always found math and physics easy perhaps because I've thought in terms of lists of axioms ever since I was a little kid, due to whatever fluke of genetics. I was actually planning on going to program video games for a living, then I found out in college that I was good at the math and physics. I remember people being upset because I'd get higher marks than other people without studying, but it mystified me why they'd want to study since it all logically followed. I do still feel guilty about possibly throwing this gift away, although it's not clear it's a gift since those who are good at math are quite good at math.
However, I strongly feel that it's only rational to account for the difference between you not existing and you existing in the world. Therefore, if you could discover a theory of decoherence a year earlier than someone else, it's actually not that important, as that other person would've come along and discovered it right after you, so the delta you'd make by your inclusion in reality is small. The irrationalist standpoint common in society is to take the delta between people. Thus the Nobel prize was given to the guy who discovered decoherence in year X, even if in year X+delta someone else also discovered it. Anyway based on this reasoning I figured I should do more creative work than physics, since my contribution in physics might be useful, but there are tons of people in physics nowadays, it's not like 1900, and it's better to make the unique contribution that I can make rather than the contribution that any of a line of a million physicists can all make. Plus there's a notion of your obligation to your own happiness, for example I don't enjoy physics that much, I was just doing it to learn how to put physics in games, so there's no reason someone who doesn't like physics that much should do it for their whole life just because "they can make a contribution."
So by the theory of accounting for the difference of your existence, you made the right choice. I also don't see why it's particularly important to make a contribution that's seen as important, it seems better to just do what you enjoy.
Anyway I still do wonder whether I made the wrong choice since I do find math very easy to do and my brain just kind of endlessly spins and analyzes all sorts of uninteresting axioms when I'm not doing math. It's kind of annoying actually, it's like a background process. I wake up in the morning with logical problems solved that aren't interesting because I'm not doing research into axiomatic problems…
I do find academia enjoyable but I think everyone should do what they want. Because there's no sense in being miserable just because there's some important theory that needs to be proved.
If you look at Einstein’s “Relativity”, appendix 2, you can see that meters = i*c*seconds. Or maybe it should be seconds = –i*c*meters. By using this substitution, Einstein says you can do away with the complicated Minkowski space and use the simpler Euclidean space. By Occam’s razor, it’s actually a requirement. Then by removing either meters or seconds from all physics equations and keeping the correct “i” or i^2 in the units, you can make predictions that you may not have otherwise known. For example, force and momentum become vector quantities. Instead of E = mc^2 it should be E = –mc^2 (energy is the negative of mass, and Hawking even discusses this in “Brief History of Time”). If the difference between meters and seconds truly and deeply is only a “–i”, then “velocity” is not a number like we normally think. The only way I can make it work with relativity SQRT(1-(v/c)^2) is to say the apparent velocities we see are really a change in c instead of v. It also helps expose existing problems such as physicists saying “nothing can go near the speed of light” when every photon leaves every observer at exactly the speed of light, so all observers always go exactly 0 m/s relative to light. It also undermines one of relativity’s two postulates, so relativity shows there is something wrong with relativity.
Also, there are no integers in physics that are not based on the 3 dimensions of space, which experts say are based on the integers of spin. For example, E=mc^2 has a 2 that is based on the 2 spatial dimensions that are not in the direction of travel, if looking only at special relativity. To model 4D space-time, a modeling system needs 6 degrees of freedom. Our cortex is 6 layers of neurons, using 6 equations to solve 6 unknowns. To go to 5D space-time, we would need 10 layers. Ants, the close cousins of wasps, use only 2 layers for 3D space time of their eyes while wasps have 6 layers for 4D space-time flight.
Experts in physics (and probably in most fields)… most of them don’t fully understand everything, but just believe the authorities: a few people that invented and claimed to verify a particular theory. The rest just get used to it. We say it gets “accepted”. Popularity increases this effect.
Then if someone comes along and says, guys, this doesn’t make sense — all the “experts” just claim this someone is a crank, is stupid and so on.
The more time passes, the harder it is to expose a flaw: not necessarily because the theory has really been tested more.