In a response to my previous post, on Acausality and the Scientific Mind, a commenter said: “The computationalist position necessarily entails that subjectivity does not really exist, and what looks like subjectivity is a mere illusion without causal force.”
There are, I’m sure, many vulgar and stupid versions of computationalism that have this as a dogma. But it is not at all difficult to construct a computationalist model in which there are features that map to “subjectivity” and have causal force. Here is a sketch:
Human beings have minds that are persistent information patterns of very high complexity. These patterns evolve over time, incorporating memory (both memories about sense data and memories about features of past mental states). This evolving path can in principle be modeled as a computation in which the inputs are the present mental state and sensory inputs, and the result is a succeeding mental state. (The last sentence is the computationalist position.)
The computational path of a mind in the space of its possible mental states is chaotic, in the sense that its future has sensitive dependence on unmeasurable features of its present state (it is not significant to my argument whether the indeterminacy is quantum, classical, or due to computational intractability). The mind is therefore, as a whole, intractable to prediction.
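To see what sensitive dependence means in practice, here is a standard toy demonstration (the logistic map – a stock example from chaos theory, not a model of anything neural; the constants are conventional):

```python
# Sensitive dependence on initial conditions, via the logistic map.
# This is a stock chaotic system, not a brain model.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001   # two states differing by one part in a million
for step in range(1, 31):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: |a - b| = {abs(a - b):.6f}")
# The gap grows roughly exponentially; within a few dozen steps the two
# trajectories are uncorrelated. An unmeasurably small difference in the
# present state dominates the future -- that is all "chaotic" means here.
```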
Now we face the procedural question of how we identify a mental state. We do this in the same way we identify the state of a collection of matter: by measuring observable consequences. We observe that mental states of different people can be grouped into equivalence classes by observable consequences. (If this were not so, language, art, and communication in general would be impossible.)
Next, we observe that important features of our mental states are not intractable to prediction. We know this because people can form predictive models of each other’s mental states; in fact people rely so heavily on this ability that there is a strong case we evolved into sophonts in order to get better at it.
It is important, and bears emphasizing at this point, that we now have a model of mind in which (a) some features of its state at any given moment are tractable to prediction, (b) other features are not tractable to prediction, and (c) the tractable and intractable features are causally entangled with each other and are both inputs to ongoing computation.
Now I propose a definition: the “subjectivity” of a human being is that portion of his or her evolving mental state which is intractable to prediction by any observer.
I think it is not difficult to see that this definition accords with our intuitive notion of “subjectivity”. But here is the important point: As so defined, subjectivity is not a mere epiphenomenon or illusion. It has causal force because it is an input to the computation of future mental states which have observable consequences.
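To make the sketch concrete, here is a toy version in Python. Everything in it – the update rules, the coefficients, the split of the state into two floats – is an arbitrary stand-in, chosen only to exhibit properties (a) through (c) above:

```python
# Toy mind: state = (tractable, hidden). The tractable part is what an
# observer can measure and model; the hidden part stands in for the
# intractable portion ("subjectivity"). Both feed the transition function.
class Mind:
    def __init__(self, hidden0):
        self.tractable = 0.0
        self.hidden = hidden0

    def step(self, sense):
        # Chaotic update keeps the hidden part unpredictable in practice.
        self.hidden = 3.9 * self.hidden * (1.0 - self.hidden)
        # The observable part depends on sense data AND the hidden part.
        self.tractable = 0.5 * self.tractable + sense + 0.2 * self.hidden
        return self.tractable          # the behavior an observer sees

# Identical sensory histories, imperceptibly different hidden states:
a, b = Mind(0.500000), Mind(0.500001)
for _ in range(40):
    out_a, out_b = a.step(1.0), b.step(1.0)
print(out_a, out_b)   # the observable outputs have diverged
# Moral: the hidden component is no epiphenomenon. It is an input to the
# computation, and its influence shows up in measurable behavior.
```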
See, that was easy. Subjectivity reconciled to computationalism in less than 20 minutes of writing. A lot of philosophers of mind seem to be remarkably thick-headed.
I would also argue that a lot of the relevant philosophers fail to grasp much basic systems work. What you describe reflects any closed-source state-machine (from Comp-Sci), or a basic feedback system in engineering.
http://en.wikipedia.org/wiki/File:Ideal_feedback_model.svg
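For anyone who wants to poke at it, the linked model is a three-line simulation (the gains below are arbitrary illustration values):

```python
# Ideal feedback model: output = forward gain A applied to
# (input minus fed-back fraction B of the output).
A, B = 10.0, 0.09        # arbitrary gains; |A*B| < 1 so the loop settles
x, y = 1.0, 0.0          # input signal, output signal
for _ in range(200):
    y = A * (x - B * y)  # one trip around the loop
print(y, A / (1 + A * B))  # settles at the closed-loop gain A/(1+AB) times x
```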
>What you describe reflects any closed-source state-machine (from Comp-Sci), or a basic feedback system in engineering.
Of course it does. Any second-year undergraduate in engineering would be able to tag how fundamentally stupid Feser’s argument is if it didn’t come wrapped in an obscuring cloud of academese. What happens, though, is that most engineers never try because they’ve been indoctrinated with the belief that philosophers possess a sort of ineffable higher knowledge not available to hairy-eared grunts who merely bend metal or silicon.
For various reasons of personality and history I am pretty much immune to being fooled by this sort of bullshit. I think this sometimes leads people to overestimate my intelligence. Part of this is Alfred Korzybski’s fault.
“Now I propose a definition: the “subjectivity” of a human being is that portion of his or her evolving mental state which is intractable to prediction by any observer. ”
Why is it crucial that it is intractable? My subjective feelings seem to exist whether or not someone else can read my mind.
>Why is it crucial that it is intractable? My subjective feelings seem to exist whether or not someone else can read my mind.
Because “subjectivity” is a philosophical term of art that is not quite identical to “subjective experience”. Philosophers of mind mean it to refer precisely to that which is (or is held to be) not susceptible to objective examination.
I’ve also noticed that good network engineers tend to have a good grasp on dysfunction in human groups and organizations, because they understand that well-defined protocols are *required* for successful operation of any autonomous system. The larger a group becomes, the more unstable it becomes, due to computational problems in the scale and functional diversity required for correct operation. The same then applies to the human mind and the individual’s existence within his/her community. It’s another reason why race, religion, sex, culture, etc. are valid forms of discrimination, as they help to ensure stability in each autonomous group. From there, it is relatively easy to see that the mathematicians will end up defining human groups and human consciousness far better than the philosophers (who, BTW, are just talking within the bounds of their own AS).
“Now I propose a definition: the “subjectivity” of a human being is that portion of his or her evolving mental state which is intractable to prediction by any observer.
I think it is not difficult to see that this definition accords with our intuitive notion of “subjectivity”.”
No, it doesn’t. Consider perception: if you and I are standing in the same room and looking at the same things, I will have no difficulty predicting that part of your mental state concerned with your perceptions of the room, and you can as easily predict the part of my mental state concerned with my perceptions of the room. But our perceptions are – by definition – subjective to us; indeed perception is the paradigm of subjectivity. Any definition of “subjective” that doesn’t cover it must be wrong.
>But our perceptions are – by definition – subjective to us; indeed perception is the paradigm of subjectivity. Any definition of “subjective” that doesn’t cover it must be wrong.
You are making assumptions that will turn around and bite you. The earliest successes in reading human thoughts with SQUIDs are thirty years old this year. It is now possible, at the cutting edge of neuroimaging, to actually read visual activations out of a person’s brain and reassemble them into sight pictures. Does this mean your visual processing is no longer subjective?
Meditate on that question for a while. There are multiple levels in it.
I have to disagree with Brazier’s opinion. The observations concerning the room that you can predict (the things that you agree on) move into the ‘objective’ column. After all, the ‘objective’ results of an experiment are considered so only because everyone reads the thermometer and comes up with the same temperature value.
>>Why is it crucial that it is intractable? My subjective feelings seem to exist whether or not someone else can read my mind.
I don’t think that Eric is saying that it’s crucial. I believe he is saying that it seems to be an observable phenomenon.
Also, that even if I could read your mind, predicting your next step would still have an element of guesswork. (Sometimes more, sometimes less — but never as certain as the prediction that an egg will break as you watch it roll off the edge of the table towards a bare floor with nothing available to cushion it.)
Eric:
Your definition of subjectivity sounds like it has a parallel with Heisenberg’s Uncertainty Principle in that a part of the information package is always unobtainable.
@ESR
So “subjective” plays the role of the microscopic states in thermodynamics. We can only observe the average over all states. The neural – or better, synaptic – states are unobservable; only the states of the effector axons lead to observable, objective facts.
I would prefer a more transparent idiom. Say, internal or microscopic states. But then we would lose the requirement that these states are unobservable (which I think is unnecessary).
@ TomA
We’re talkin’ ’bout chaos here. Weather prediction is frequently vague 12 hours out, and 36 hours out it is often basically just speculation.
(This is true of Calgary because stuff from the West, the North East and sometimes the South West tend to fight it out here. I imagine that some places have more predictable weather. But if a place has weather at all, it probably tends to get vague a couple of days out.)
I’m not sure I agree that this corresponds with our intuitive grasp of “subjective.” The nature of the word “subjective” is that it is something entirely “you.” There is, for many people, a color of non-objectivity. The word almost demands a non-physical component, a soul of some kind.
And this is not some hair-splitting difference. On the contrary, it is the substance of the matter. The whole argument about subjectivity is really about the discomfort people have with the idea that we are deterministic machines (insofar as any machine is entirely deterministic, of course.) To redefine the soul as a different machine whose workings are too hard to grasp does not resolve this issue; in fact it simply gives cover to those who want to say “and we call that the soul,” in the same way that people say “science doesn’t know what came before the big bang, and this we call ‘God’”.
To me it is providing mysticism and non-materialism a cover, an imprimatur of science for their nonsense that they do not deserve.
I have often said that I can give you a definite, indisputable answer to the question “Do I have free will” if you can give me a definite, indisputable definition of the words “I”, “have”, “free” and “will.” Which is to say the mystics and non materialists love to hide their nonsense by obfuscating the meaning of words.
Which is to say there is an important and qualitative difference between something that present technology cannot measure and something which can never be measured. In fact there is a third level above that. Some things are measurable in principle, but cannot in fact be measured, such as both the position and momentum of a particle at the same time, or the precise quantum state of a brain. And some things don’t have an actual measure — such as the soul or God.
Truthfully though, I think information in the brain is stored at a much more gross level than quantum state. The nature of the medium needs far more robustness than that, and the information density seems many orders of magnitude lower.
> in the same way that people say “science doesn’t know what came before the big bang, and this we call ‘God’”.
… and thus fall into the God-of-the-gaps fallacy.
“To me it is providing mysticism and non-materialism a cover, an imprimatur of science for their nonsense that they do not deserve.”
Note that the term “materialism” has been mostly deprecated, because “matter” from the physical point of view is of little philosophical interest. These days the equivalent position seems to be physicalism – is the mind physical? The obvious answer to me is – well, duh, of course it is. It’s causally entangled with the physical universe. What else could it be?
Nevertheless, I think that there _is_ some support for the thesis that at least some parts of subjective experience are ontologically fundamental in some sense, and this is very much a non-“materialist” position (it’s usually known as panprotopsychism). The argument is essentially an Occamian one: since subjective experience and phenomenology is so central to us, it makes sense to assume that its underlying parts are ontologically quite basic. Computationalism is quite unsatisfactory, because there is simply no way to escape the notion that any large-enough physical object is simultaneously “instantiating” a huge number of possible mind-like computations with their attendant subjective experiences, and this renders the computationalist thesis quite vacuous.
guest on 2014-01-07 at 18:17:27 said:
> The argument is essentially an Occamian one: since subjective experience and phenomenology is so central to us, it makes sense to assume that its underlying parts are ontologically quite basic.
But if you are going to use Occam’s razor that way, we would have to assume a god, since that seems to be pretty basic to most of human existence. Just because we sense things as categorically different doesn’t make it so. We sense the red light of a flame completely differently than we sense the infrared light from the flame, but really they are hardly different at all, except insofar as our sensory apparatus makes them so. Which is to say these sensory differences are an emergent property from the physical configurations of our bodies.
BTW, I think you make an intelligent comment here, so this isn’t meant about you in particular, but as soon as people start throwing around long, many-syllable words that are neologisms, my immediate reaction is that they are hiding behind words rather than making a solid, simple argument. There is nothing wrong with jargon, but just because you can label something with a long word doesn’t add any real substance to the philosophy the word encompasses.
True profundity is the ability to explain deep, rich and complex things with a few simple words.
I never had the intuition that “being a person with subjective experience” was the same as “not being externally predictable.” Venkat Rao (http://www.ribbonfarm.com/) thinks they’re the same, which is why when he gives advice about finding a “free” life for yourself, he focuses mostly on not being predictable or falling into cliche. Scott Aaronson thinks so (http://arxiv.org/pdf/1306.0159v2.pdf), which is why he finds “free will” in the physical possibility of quantum states that are governed by Knightian uncertainty. He seems to have the intuition that anything predictable could never be “free” in the way we commonly use the term.
I have no idea where this comes from and it baffles me. You draw the box around the “self” in some way, some kind of blob in spacetime, probably including your brain, and any rule for such boundaries has an element of arbitrariness in some edge cases, and you call actions “yours” if their causal antecedents pass through that blob. It’s a notational convenience.
Or, heck, try nolipsism. (http://www.jenanni.com/papers/In%20Defense%20of%20Nolipsism.pdf) I think that one hangs together.
Hi. I’m not home right now. But if you’d like to make a call, please leave a message after the beep.
@esr Interesting way of torturing the human mind into a finite state machine using the complete mental state as a black box input and output. Only one problem: the human mind isn’t really a finite state machine because by definition it cannot be so. FSMs are completely predictable and the states are available for immediate inspection. The human mind, not so much.
There is something about the mind-as-information-state picture that seems paradoxical to me:
Information states can be taken to be dimensionless mathematical abstractions, not necessarily tied to any particular physical mechanism, which is why any Turing machine can simulate any other (discrete approximation of a) physical system, Conway’s Game of Life can simulate a Turing machine, etc.
If extremely complicated series of information states are aware, if Turing machines/Conway’s Games/entirely abstract patterns can be origins for consciousness, then it would seem to follow that any self-consistent awareness/world combination *is* aware somewhere out there in math-Platonic-space.
You might say that they don’t exist in any physical sense because they are not causally entangled with our universe, or physically instantiated anywhere. My mental exercise is devoid of the detail necessary to instantiate these hypothetical entities. However, it seems like physical existence relative to the universe we are aware of wouldn’t be necessary – mathematical existence would be all that would be required for these patterns’ consciousness to be equivalent to our own from the standpoint of (pattern/mechanism=awareness).
So how the heck do you organize any of that? What confers realness/probability on a universe/worldline, except in some sort of local sense (in terms of the local laws) from within it? (This is even worse than the Born rule and MWI quantum mechanics.) You could, for example, have a measureless infinity of variations on our universe that are entirely disconnected causally/physically, each of which contains some small variation on your conscious experience. You have limited information as to which of these you live in, but there is no obvious way to assign a probability measure, because unlike in QM, these have nothing to do with each other physically. What would it mean if there were (NaN) of one relative to (NaN) of another?
Anyone have any ideas?
Nevermind on the last part. It might just be a special case of “where do you get prior probabilities from anyway?”
Solomonoff, or punt.
“I have no idea where this comes from and it baffles me. You draw the box around the “self” in some way, some kind of blob in spacetime, probably including your brain, and any rule for such boundaries has an element of arbitrariness in some edge cases, and you call actions “yours” if their causal antecedents pass through that blob. It’s a notational convenience.”
This was always how I understood the term. I never felt acausality had to be part of drawing a box around the part of the world I have control over, calling it “me”, taking responsibility for what it does, and being annoyed with interference.
“You are making assumptions that will turn around and bite you. The earliest successes in reading human thoughts with SQUIDs are thirty years old this year. It is now possible, at the cutting edge of neuroimaging, to actually read visual activations out of a person’s brain and reassemble them into sight pictures. Does this mean your visual processing is no longer subjective?”
…
That’s a complete non sequitur. To make myself clear: according to the common intuitive understanding of the word, perceptions are obviously subjective experiences; according to the “definition” you give, perceptions can’t be “subjective” because they are very easily predicted; therefore your “definition” doesn’t capture the common understanding, which makes it wrong. The ability to read someone’s visual cortex and reconstruct an image of what they’re looking at is interesting in itself, but if anything it leaves your attempt at definition worse off than it was.
I’d also like to know what assumptions you think I’m making, since to the best of my knowledge I haven’t said enough for a sound inference of the assumptions I am making. (Well, except for quoting Feser – but that would help you only if you read one of his books, and I doubt that you, ESR, have done so.)
@morgan
If you were to show me an FSM containing at least 70 billion different states, where every single prior transition in the history of the network (which transitions at roughly 100 Hz) is a potential input to the transition logic (which, by the way, can reconfigure itself)… I’d probably classify it as a bit unpredictable. Especially if it had been running for anything more than 10 years (or about 31 billion transitions).
FSM is at best a very leaky metaphor for human cognition, but ESR’s point – that our computations change through our experiences and how we perceive them, and thus that any “human computation” engine would need the sum total of our current mental state as input – doesn’t seem that controversial to me at all. It’s very similar to what Russell and Norvig put forward as a general model of “AI”.
Echoing Sarah: If “subjectivity” is simply that which cannot be objectively examined, who cares? It seems that framing the question in that way is not the important point. I care about whether I have free will. I do not care whether my free will or lack of same makes my actions hard to predict. If all my actions and thoughts were perfectly predictable, yet arose from my own predictable (but real) choices, I would be satisfied and I think most people would be as well. So again, who cares about subjectivity, so defined? Why does it matter?
This seems to be another case where philosophy has fallen prey to the Streetlight Problem.
” If all my actions and thoughts were perfectly predictable, yet arose from my own predictable (but real) choices, I would be satisfied and I think most people would be as well. So again, who cares about subjectivity, so defined? Why does it matter?”
The idea of shoehorning “subjective” into the gaps of our knowledge seems to raise more hackles than mine alone.
I too still do not see why my perceptions and feelings are only “subjective” when they cannot be objectively verified. If I have pain, that experience is subjective. It is still the same experience if someone registers the neural states and hormonal levels that instantiate the pain in my body. Then my pain is objectively verifiable, but the experience is still subjective.
Also, if my happiness can be copied by another person, so we both experience the “same” happiness, then that happiness is still my subjective feeling. Dreams would be the summit of subjective experiences. Would they end being subjective when we can “copy” a dream onto another person?
And those who think a deterministic, Turing complete, brain is predictable, should read Turing’s proof of undecidability: you can see what a Turing computer is up to only if you run the simulation to its end.
So, if you want to predict a person’s thoughts/actions, you must make a copy (simulation) down to the ion channels in the synapses. There are interesting discussions about the amount of time, matter, and energy needed to do that. It is debatable whether you can run that simulation faster than real time.
So, even if the human brain is “only” a deterministic Turing machine, you will have to run a complete simulation of the person and his surroundings to “predict” her behavior. Essentially, you have to make a copy of a person and let her live her life in an identical world.
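A trivial illustration of “deterministic but you must run it to find out” (the Collatz iteration – my example, and vastly weaker than Turing’s actual undecidability result, but it gives the flavor):

```python
# A fully deterministic rule with no known shortcut: to learn how long
# it runs, you run it. (Collatz iteration -- illustration only.)
def steps_to_one(n):
    count = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        count += 1
    return count

print(steps_to_one(27))  # 111 steps; no known closed form predicts this --
                         # you find out by iterating
```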
>I too still do not see why my perceptions and feelings are only “subjective” when they cannot be objectively verified.
You’re barking up the wrong tree.
You and Michael Brazier seem to have both hared off on the same tangent. It seems obviously wrong to me, but I’m not entirely sure where the disconnect is. It may be in the difference between the objective/subjective distinction and the way philosophers use the term “subjectivity”. Or it may be in excessively Aristotelian reasoning. It’s probably both.
So let’s start here: All experience is intrinsically subjective; what this means is that we are not able to communicate the activation states of our brains in such a way that they are perfectly replicable by others. There is an unknowability about the difference between what you experience in response to stimulus X and what I experience in response to stimulus X that is intrinsic.
Nevertheless, we can speak about “objectivity” because sometimes the unknowable differences between our different subjectivities are not functionally important. For example, if there are several apples on a table in front of you and me, and we are trying to figure out how to divide them equally between us, the difference in subjectivity between how you and I experience the color of the apples is not important. Counting the apples is a replicable decision procedure; thus we say that the number of apples is an “objective” fact.
Some features of our mental states are objective facts in the sense that they can be measured replicably with reasonable accuracy. Many other features are not. Some of the features that cannot be measured might become so if we had the right equipment (this is why I brought up neuroimaging and mechanical thought-reading in my reply to Michael Brazier).
The predictability of our mental states is something else again. Just because we can measure a quantity replicably, making it an objective fact, does not mean we can predict it – it may be a result of a chaotic dynamic process beyond our ability to model. If the quantity is a feature in a system as complicated as a brain this will often be the case. This is why Winter’s sentence that I’m replying to is barking up the wrong tree; it assumes an equivalence between “subjective” and “unpredictable” that I don’t accept.
I think I inadvertently encouraged this confusion by failing to distinguish between “subjective” and “subjectivity” in the OP. I said the latter but several commenters thought I meant the former. Later I tried to point out that these terms are not equivalent, but that apparently slipped by a lot of people.
You have a subjectivity that is the entirety of your phenomenal field (everything you perceive) minus those portions of it that you can tag as “objective fact” because you and other minds can agree on replicable decision procedures about them. While you can receive reliable information about features of someone else’s mental state (this is what language is for) you cannot know what it is like to inhabit another person’s subjectivity. This is definitional.
Now I hope the definition in my OP is clearer.
Think of music: two people can hear the same piece, but one likes classical and the other likes jazz. Play one or the other to both; the sensory inputs to both are identical, but their subjective experiences are completely different.
@Winter
> Then my pain is objectively verifiable, but the experience is still subjective.
To say it is subjective is to say that it is only something that Winter experiences. But the essence of the question is this: who is Winter? This subjective experience is subjective because it is in a box, separate from an objective experience. But what is the box? With sufficiently advanced technology can I see inside the box? If I can see inside the box, it isn’t a box anymore.
> And those who think a deterministic, Turing complete, brain is predictable,
As with your excellent point, deterministic is not the same as predictable, though the difference is one of pragmatics. Again, as someone said, let’s not look for our keys under the streetlight. Ultimately the question is one of free will, whether something outside of the interaction of matter determines our outcomes.
It should be said for completeness that there really is no such thing as determinism, since at the quantum level entirely random events do occur, and our notion of determinism is really one of “determinism to a high probability”, but let’s not confuse the matter with that particular wrinkle.
Whether a Turing machine halts or not may not be predictable, however, nobody is suggesting that that Turing machine decides on its own whether to halt, and nobody is suggesting that a divine being intervenes to decide either.
@morgan
If I take this statement at face value, then no CPU ever designed could contain a bug (after all, every CPU is designed as a finite state machine, and the states are available for inspection)… since this is contradicted by history, either there is a fault in your premise regarding all FSMs being predictable, or the prima facie reading is not your original intent.
As a second case, consider a random number generator. Is this a finite state machine? Yes; whether implemented in hardware or software, there is an upper limit to the resources available to represent the machine state. Is this predictable? In theory, no; any ability to predict the output of the generator is considered to be a flaw by cryptographers.
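To make that concrete, here is about the smallest possible example – a linear congruential generator, using the well-known Numerical Recipes constants. (One caveat: truncated LCGs are in fact breakable from their outputs, which is exactly why cryptographers reject them; the point here is only that “is an FSM” and “is easily predictable from outside” are different claims.)

```python
# A pseudo-random generator as a finite state machine: one integer of
# internal state, one deterministic transition function.
class LCG:
    def __init__(self, seed):
        self.state = seed % 2**32          # the entire machine state

    def next(self):
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state >> 24            # emit only the high 8 bits

rng = LCG(seed=12345)
print([rng.next() for _ in range(5)])
# Trivially predictable if you can inspect self.state; from the outputs
# alone, prediction takes real cryptanalytic work.
```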
@ESR
I think Feser is not stupid; I think you and he work from entirely different premises and live in entirely different intellectual universes. You are a practical man, your worldview is that of science and engineering and technology and you export it into philosophy; the cornerstones of your worldview are observation and prediction. You never talk about how things “really are”, you talk about what an observer can detect about a thing. You even said a few years ago that people should not ask questions like “Is X really, ultimately, truly Y?” but rather ask questions like “What are the observable differences between a universe where X is truly, really, ultimately Y and one where it is not?” Ultimately your intellectualism grows out of a Getting Stuff Done problem-solver mentality. The Vita Activa.
Feser lives on an entirely different intellectual planet, one of humanities-oriented philosophers going back ultimately to Plato, who would reject the whole empiricism behind your worldview because they would say they care about how things really are, not how things appear (observably) to be. They are NOT practical people. They don’t have practical problems to solve, they don’t have this whole Get Stuff Done mentality, thus they don’t need to be empiricist. In fact their ancestor, Plato, said very clearly that from his viewpoint the senses are fallible because material things are not quite real enough for him, so observation does not matter much, and only the Forms sensed by the mind are real. So Feser is trusting that logic is taking him to conclusions that are “more real” than observations, which are mere appearances. This of course will seem from your practical viewpoint like sheer lunacy. This all goes back to the fact that these guys are theists, so their definition of truth is not “good predictions” but the Einsteinian “knowing God’s thoughts”, i.e. the “blueprint” behind material reality, same as Plato’s Forms or Aristotle’s “essences”.
So what I am saying is that Feser is not simply a clumsy inhabitant of the same intellectual planet as yourself, but lives in an entirely different intellectual planet, and IMHO you basically have to kinda raise the bet and either respect Feser’s planet from a distance with a shrugging indifference or condemn the _whole_ thing as the Planet of Lunacy, but there is no middle way. Feser is not bad at your stuff because he is not doing your stuff at all, you either gotta ignore the whole thing or condemn the whole thing, the whole tradition of anti-empirical essential logicalism, so to speak.
>So what I am saying is that Feser is not simply a clumsy inhabitant of the same intellectual planet as yourself, but lives in an entirely different intellectual planet.
No. You’re telling me Feser lives in fantasyland. (But this I knew already.)
I in fact do condemn “the whole thing, the whole tradition of anti-empirical essential logicalism”. There is nothing that is “more real” than observations; to believe otherwise is epistemological insanity straight up, defective thinking with crazy consequences. Plato’s forms were full of shit and so were Aristotle’s essences.
Furthermore, as a Buddhist, you ought to be even more rejectionist than me about essentialism. In Buddhist psychology the belief in essences is a result of the defects of the senses and of clinging to the five Skandhas, and you must get free of it to achieve enlightenment.
Perhaps a simpler view of subjectivity.
At the Planck scale, it is very likely that all living things are unique. As such, it is also likely that we have unique and slightly different processors in our brains, in addition to unique life experiences and memory retention. As a species, we have great similarity, but are not identical in any real sense. Consequently, subjectivity may in essence just be uniqueness.
It may also be helpful to understand that prediction is just an educated guess about the future, and not a promise of certainty. It is evolutionarily advantageous to have some facility for predicting the behavior of other living things (as well as the inanimate world), but guessing wrong is not uncommon. Sometimes guessing wrong gets you dead, thereby keeping your genes from returning to the gene pool.
>Furthermore, as a Buddhist, you ought to be even more rejectionist than me about essentialism.
Of course I don’t believe in it, but I find it interesting. This tradition is at the root both of what is usually called common sense and of the Western intellectual, academic tradition itself. For example: when we think that nature has laws, not merely models, and that these laws are true in a sense deeper than true predictions, i.e. that science uncovers Truths, not just ever-improving models; when we hope science finds some ultimate answers; when we hope – without any good materialistic reason – that the universe is somehow inherently rational enough to be fully understandable for us, i.e. that at some level of scientific discovery it won’t go beyond the level of what we can comprehend; when we think logical fallacies constitute a wrong argument; when we think the sentence “what is justice?” is a meaningful one (implying that justice “is” and is not “made”) – in all these we indirectly refer to this tradition.
Buddhism is unbeatable for the purpose of personal development, but it is not really useful for more practical matters, like creating a political philosophy. I cannot really decide matters like whether there is anything wrong with usury based on it. Nor does it have a definition of justice, only compassion, which is not useful for politics, because justice can be enforced and compassion cannot. So before discovering something ever more exotic I looked around in our own past, and when looking for the roots of modern ideas, at every corner I find concepts defined and refined by Platonists, Aristotelians, and Scholastics; this is what makes it interesting. For example, the very term “information” was invented by them, and it meant something different: it largely meant the essences that give form to matter, like a plant’s DNA.
@esr
I still am not sure whether I actually understand your definition.
“Objective” seems to mean those “facts” that different subjects can independently verify through observation and computation. This could include “internal states” of some person. “Subjective” then seems to mean what cannot be independently verified by anyone but the person (the “I” speaking) who has the experiences.
The classical question would be how I experience “Yellow” and “Red”. There is no way anyone can know whether I do not see these colors switched from how you see (experience) them.
What I find disconcerting about these definitions is that they rely on the “gaps”. Those gaps in our knowledge might be closed in the future. I still have difficulty seeing how it would be relevant to my experience of the colors “Yellow” and “Red” to know whether or not that experience can be verified from the outside.
So, maybe I will have to think about this some more.
@Jessica Boxer
“If I can see inside the box, it isn’t a box anymore.”
My point completely. I take as “I” the person speaking, whatever that is (cogito ergo sum).
@Jessica Boxer
“Whether a Turing machine halts or not may not be predictable, however, nobody is suggesting that that Turing machine decides on its own whether to halt, and nobody is suggesting that a divine being intervenes to decide either.”
Indeed. The outcome is decided when the machine starts. However, the outcome cannot be predicted unless the machine has run its course.
There is indeed a complex maze of arguments and emotions about free will and decisions that look like those from time travel paradoxes: If I travel to the past, I know the outcome of what is going on as it has happened in my personal past, but it can only happen if I act out now what in my memory has already happened.
The same with free will in a deterministic brain. My decisions were fixed when the machine was set running. But I still have to act as if I decide them now, or they will not happen. But my inaction would be preprogrammed too. Deep down, thermal or quantum fluctuations will knock the machine onto unpredictable paths again.
I can only say that the class of “everything that is a mental state, except what can be reliably communicated to others” seems about as useful as the class of “everything that is a building, except what has been painted green”, or the class of “everything that is alive, except what is currently in Europe”. The notion that “subjectivity” could mean that for anyone never occurred to me before this.
What I meant by “subjectivity” when I said that computationalism implies that subjectivity is an illusion, is merely the quality of being a subjective experience, whether or not that experience has an objective correlate: perceptions (accurate or hallucinatory), beliefs (true or false), abstract reasoning (valid or fallacious.) With that understood, I stand by the statement now.
And I’ll set you a question, in return for the one you gave me. You said “Human beings have minds that are persistent information patterns of very high complexity,” and while I don’t disagree: under materialist assumptions, what can you mean by calling something “information”?
>what can you mean by calling something “information”?
The same thing Claude Shannon did. I don’t see an ontological problem here.
By the way, ESR, isn’t empiricism a bit too optimistic about the possibility of mistakes in observation? Whenever you go the “what is the observable difference between” route, don’t you implicitly assume an infallible observer or observational method? What if observation is just as difficult and error-prone as thinking?
Furthermore, if it is a mistake to assume fewer mistakes in observation than in thinking, wouldn’t people with a programming background be especially likely to make that mistake? Input data for software is usually someone else’s responsibility, and in the software development process inputs are usually assumed to be correct, because if they are not, that is not the programmer’s job to fix. The whole software development process usually rests on the assumption of perfect inputs and buggy logic. So people with a programming background might need to be especially wary of being too optimistic about observations being perfect…
This is why I actually think a different kind of criticism of empiricism, roughly the Bradley – Quite axis holds water. I.e. that truth = consistency, and the empiricist way, namely that theory must be made consistent with observations, is one valid subset of it; but sometimes you test theories against theories, sometimes you test observations against other observations, sometimes even observations against theories (“that thing can’t be a perpetuum mobile, I don’t even need to examine it”) – simply whichever seems to work best. Note: this to me sounds closer to how science works in actual practice, as opposed to an idealized view of how science should work…
>What if observation is just as difficult and error-prone as thinking?
Oh, it is. You find out about your observational errors the same way you find about your theoretical errors: they fail to cash out to successful predictions.
>This is why I actually think a different kind of criticism of empiricism, roughly the Bradley – Quite axis holds water.
I think you probably meant “Quine” there, yes? As in Willard Van Orman?
Truth = consistency is the trap the Vienna Circle fell into, and the reason Logical Positivism is now only of historical interest.
>but sometimes you test theories against theories, sometimes you test observations against other observations, sometimes even observations against theories
Sure. But in the end it all has to cash out to predictions. This is why I don’t call myself an empiricist – that would be putting the ontological cart before the confirmational horse.
I can’t model what’s going on in your brain perfectly because brains are very complicated, very poorly understood, and most ways of observing them are invasive. fMRIs can show you patterns of blood flow in living people in real time, but that’s a far cry from observing individual neurons firing in real time. We may one day be able to isolate electrical potentials or neurotransmitter bursts down to the near-cellular level in animals — there are advances in optogenetics pointing in this direction. See this paper (http://arxiv.org/abs/1306.5709) for a review of some of the technologies that could lead to mapping the activity of all neurons at the sub-millisecond level.
Now, even *that* is a far cry from knowing what those patterns of neural firing mean. But one can see how progress on interpreting neural firing patterns might also be made by experiment.
It seems to me that a *sufficient explanation* for why mind-reading is currently impossible is that it’s a hard technical and scientific problem. It’s weird to me that people claim it’s also impossible in principle. Perhaps it is possible for the brain to be genuinely indeterminate, though Knightian uncertainty sounds suspicious to me (isn’t it vulnerable to Dutch-booking?) But even if it’s possible, we have no evidence whatsoever for that supposition.
It’s like saying “Perhaps it’s physically impossible to cure Alzheimer’s disease, because of some heretofore unknown quantum-physics thing.” Well, *maybe* the laws of nature don’t exclude such a thing. But isn’t it a simpler explanation that modern biology just hasn’t gotten that far yet?
Curiously, subjectivity, by Eric’s definition, would not include any part of a mental state (whatever its nature) that didn’t feed into future states – e.g. anything that fizzled out, was absorbed into other neural firings, etc. …then again, maybe that’s not that profound. (If you have a thought, then dismiss it and never think it or use it again, is it part of your subjectivity? I suppose that’s about as meaningful as the falling-tree question.)
Should this proposed definition of subjectivity hold outside of a computationalist context?
What, precisely, is a mental state? A “pattern of information”? In a brain? Does it have to be a brain? Can it be partially in a brain? (Could it be contained in, say, part of the skull as well? In any blood that has since left the head?)
How far, spatially, can that subjectivity be said to extend? The reach of neural firings seems like a good assessable limit, but I kinda wonder. (Does it extend into the spinal cord? The entire nervous system? Would that comport with the intuitive sense?)
Bravo!
Another wrinkle, which might annoy or surprise an incautious computationalist, is that while memories are indeed an input to the transition function, they are also an output from it; in fact the “present” state of a memory is more a consequence of everything thought and experienced since it was “recorded” than it is of what actually happened to create the memory in the first place.
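A toy sketch of that wrinkle (the coefficients are mine, purely illustrative):

```python
# Memory is an input to each step AND an output: recall re-stores the
# trace, slightly colored by the current state.
def step(memory, current_mood):
    recalled = memory                            # read: memory feeds the transition
    memory = 0.9 * memory + 0.1 * current_mood   # write: re-stored, blended with "now"
    return recalled, memory

trace = 1.0                                      # the original recording
for mood in (0.0, -0.5, -0.5, -1.0, -1.0):       # later mental states
    _, trace = step(trace, mood)
print(trace)  # has drifted away from the original recording, pulled by
              # everything "thought" since the event
```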
Something that I don’t see is why computationalism should exclude subjective feelings. My father was a symphonic musician for a good part of his life, and was able to give me the gift of an appreciation of fine music through exposure, lessons, etc. I didn’t have it before he started, and it took quite a bit of time, but he succeeded in reprogramming my conscious mind. (OK, maybe he just reconfigured it…no, he installed a plug-in). He, and all those music teachers, could not describe the process, and the result, except in the vaguest terms, along with helpful suggestions. At the end, my feelings when listening to Buxtehude are entirely subjective, yet I believe that:
1. I am a machine, and all my thoughts are supported by my physical brain and body. You need both.
2. My appreciation for specific types of music lies in my mind, which is supported by 1, above. As an example of the need for the body, hearing a really rich-sounding organ pipe provides a thrill because the hair cells in the cochlea are able to ‘take the sound apart’ (Fourier analysis in real time) so that the brain can appreciate it.
It seems to me that various commenters are discussing fundamentally different examples of subjectivity, and therefore talking past each other. Many different aspects of consciousness have a subjective character, but subjectivity as a whole includes all of them.
It seems to me that ESR’s definition only addresses some subjective aspects of consciousness, but fails entirely to engage with the kind of subjective experience that forms the basis of Chalmers’s “hard problem of consciousness” concerning _qualia_. (http://en.wikipedia.org/wiki/Hard_problem_of_consciousness)
I just thought to search and noticed that ESR *has* written on qualia in the hard problem of consciousness, here: http://esr.ibiblio.org/?p=1192#more-1192. In that posting, as in this, I find myself agreeing with his discussion as a partial assessment of the matter. I would be intrigued to hear how he reconciles that post with this one and retains this one’s definition of subjectivity, especially in light of sentences like this, from the earlier post, which clearly includes qualia as part of the totality of subjectivity:
“What I’m really arguing here is that Dennett, and thinkers like him, are stuck hard enough in a theoretical set of distinctions about ‘objective’ vs. ‘subjective’ to have ignored an important part of the phenomenology.”
>I would be intrigued to hear how he reconciles that post with this one and retains this one’s definition of subjectivity
Well, you saw that I wrote this: “Any theory of mind that can’t support questions about that meaning to Mary is dangerously impoverished”. This is exactly Gelernter’s objection, which I agree with in the OP, to theorists who want to deny that subjectivity can have any causal force. Does that help?
Hmmm…I’ve been rereading my last comment and I think I lost the point I was trying to make. Why do the ‘thickheads’ that esr alluded to in the OP insist on banning subjective feelings from computationalism? Are they so bent on evicting the ghost from the machine that they’re in superstitious fear of them? They can’t design a grant proposal around them? Is it just an annoyance to them that they can’t get the results of the computations that Other People’s Brains are constantly making? Or are they just Brain Geeks who, like all geeks, are uncomfortable with other peoples’ feelings?
>Why do the ‘thickheads’ that esr alluded to in the OP insist on banning subjective feelings from computationalism?
I think it’s mostly from fear of conceding any ground to the religious nutters. We’ve seen the same flinch reaction in this thread; Jessica Boxer is exhibiting a classic case of it.
ESR, maybe I wasn’t clear enough. Predictions predict observations. This is the problem: it assumes that our observation of the predicted outcome is infallible. If everything cashes out to predictions, everything rests on the correctness of the observation of the outcome.
Of course, for practical life it is a non-problem, as the whole process began by deciding what kind of outcome we desire. In practical life we want outcomes, we don’t observe them. But can this be generalized enough to form a philosophy of science and knowledge?
>But can this be generalized enough to form a philosophy of science and knowledge?
Not only can it be, I believe that’s the only way to tackle the problem that doesn’t collapse into circularity or nonsense.
@LS
” Why do the ‘thickheads’ that esr alluded to in the OP insist on banning subjective feelings from computationalism?”
I do not know about “computationalists”. Evolutionary psychology (which covers most of modern psychology) should not have a problem with it.
If I can divine the underlying reasoning, it would go like this:
Subjective feelings are “unobservable” internal states of the mind. If these internal states are causes of behavior, they work like external states. That could be handled.
However, when internal states (i.e., subjective feelings) are causes, that implies that the mind is fragmented into modules that each behave as if the others were “external”.
I suspect some people with too many roots in Behaviorism have a very big problem with this implication.
>I suspect some people with too many roots in Behaviorism have a very big problem with this implication.
Yes, that’s probably the second most common cause of the flinch reaction after fear of conceding ground to the religious.
Three thoughts.
1) The initial observation would usually (frequently?) lead to a prediction about a different observation. Thus both observations would have to be erroneous in the same ways.
2) Correctness and Repeatability. The whole point of repeatability is to cover for blind spots in observation.
3) You’re fundamentally right. See Quantum Mechanics.
‘However, when internal states (i.e., subjective feelings) are causes, that implies that the mind is fragmented into modules that each behave as if the others were “external”’
OK. That seems to be the trend in modern research. Daniel Dennett proposes that your consciousness is whichever of these modules has taken over the largest number of neurons at the moment. Looks like a multitasking operating system; certainly a computationalist should be happy with that.
I have not read this thing, but I was given this reference:
Daniel Dennett, “Consciousness Explained”, Penguin Books, Ltd., New York, NY, 1993
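If you want the multitasking-OS cartoon in code (my cartoon, emphatically not Dennett’s actual model):

```python
# Competing modules "bid" for neurons each tick; whichever holds the most
# is the current conscious content. Activations here are random stand-ins.
import random

modules = ["vision", "language", "daydreaming"]
for tick in range(5):
    bids = {m: random.random() for m in modules}   # stand-in activation levels
    winner = max(bids, key=bids.get)               # module holding the most neurons
    print(f"tick {tick}: conscious content = {winner}")
```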
@Alex k.
There are no software RNGs; they are all pseudo-random number generators. They may not be easily predictable, but that’s not the same distinction.
Also, bugs say more about the nature of human cognition than they do about the predictability of finite state machines.
@LS, in regards to your musical experiences and learning… I really don’t think it is all that hard to understand. All that training built sets of pathways in your brain that link the experience of music, and the various patterns it produces, to positive feedback in your brain, and consequently you enjoy music.
For reasons I have argued in a prior comment on the previous post, the musical pathways have a physiological connection to your emotional system, and so those pathways lead to emotional reactions like the hairs on the back of your neck perking up.
You can call it subjectivity if you like, but it is no different than any other process your brain undergoes for pattern matching with the purpose of producing a physiological reaction in your body. Something I always thought of as funny is that some people like to listen to music while having sex. Really, they are right on the money there, sex is why we like music (or one of the reasons anyway.) So I’d recommend that when you all get with your partners next time, crank up the tunes and get your freak on baby!!!
http://www.youtube.com/watch?v=0cD9cBEaNBc
There isn’t a ghost in the machine. As I have said elsewhere, terms like subjectivity are used as cover for people who pretend there is, and so those words make the hairs on the back of my neck stand up.
@ ESR – “everything cashes out to predictions”
For all living things, accurate prediction is a vitally important survival trait.
In the modern prosperous world for humans, it has evolved to be more of an avocation than a survival determinant, but we are wired for this habit nonetheless. Because subjectivity is present in our current evolved psychology, it must have a positive influence on our aptitude for survival. It may be one of the factors explaining why we are at the top of the food chain.
Claude Shannon’s theory is not about information, but communication; it deals with, not what messages are, but how messages can be reliably transmitted. For analysis of how humans convey aspects of their minds to each other it’s invaluable; for analysis of minds as such, it’s irrelevant.
Regarding the philosophy of knowledge. Do you endorse David Hume’s analysis of our expectation of regular patterns in nature as justified neither by demonstration nor by experience, but solely by habit and custom? If so, how does that escape circularity, since habit is formed only by experience?
>Claude Shannon’s theory is not about information, but communication;
I meant the part where information is defined as choice among alternatives. Like, say, alternative synaptic states in a brain. Communication is involved in that other synapses make the distinction among states by the way they react.
>David Hume’s analysis of our expectation of regular patterns in nature as justified neither by demonstration nor by experience, but solely by habit and custom?
No, I do not. Our theories about causality are justified by observing temporal patterns.
Thoughts about the meaning of “information.”
Turn back the clock about a million years (and before the advent of complex language).
An earlier hominid grunts in his sleep and no other living thing is in earshot. Just a random noise with no intent or consequence.
Similar scenario, but now a group of hominids are sleeping together, and the grunt is just a digestive reflex. A few of the others are then awakened temporarily, scan the surroundings, note no danger, and return to sleep. An alert reflex induced by a random sound. Consequence without intent.
Similar scenario again, but now the first hominid hears a strange noise (and is the only one to notice), then proactively grunts so as to awaken the others to possible danger. Both intent and consequence.
What are the minimum attributes of a piece of information? Do you need to have a sentient being as one of the components?
>Well, you saw that I wrote this: “Any theory of mind that can’t support questions about that meaning to Mary is dangerously impoverished”. This is exactly Gelernter’s objection, which I agree with in the OP, to theorists who want to deny that subjectivity can have any causal force. Does that help?
I think you are getting at part of Gelernter’s objection, but not all. And I agree with both of you on that part!
But I don’t see how this squares with your definition in this post that subjectivity is the unpredictable. It’s not an *unpredictable* feeling that Mary has — it’s indescribable / personal, but highly predictable.
You might quibble that the exact details of the brain state are unpredictable. But even if you could somehow calculate in advance exactly what her brain state was going to be on seeing red for the first time, or on first looking into Chapman’s Homer, for that matter, the thing that makes the experience subjective is that she’s *feeling* what it’s like for this neural circuit to light up — that she’s having an experience, regardless of how predictable.
I have some further thoughts in response to your older post on the qualia problem, taking on a next step in the significance of this, but I wouldn’t want to impose on your comment thread to bring up that older topic, unless that’s ok by you.
“I meant the part where information is defined as choice among alternatives.”
That wasn’t a definition, but a deliberate restriction of topics. Shannon chose to disregard the meanings of the signals sent over a communication channel because his concern was with error detection and correction. Extending that restriction outside the domain of communication channels is reifying an abstraction.
“Our theories about causality are justified by observing temporal patterns.”
Oh? Then what was wrong with Hume’s argument that 1) without the Principle of Uniformity no amount of observation can justify any prediction whatsoever, and 2) the Principle of Uniformity can’t itself be derived from observation?
>Oh? Then what was wrong with Hume’s argument that 1) without the Principle of Uniformity no amount of observation can justify any prediction whatsoever, and 2) the Principle of Uniformity can’t itself be derived from observation?
The problem with Hume’s argument is subtle, and has a lot to do with what we mean by “Principle”, especially with a capital P. In the intellectual world he inhabited, empiricism was seen as a way to bootstrap your way to the point where you could frog-jump out of the phenomenal world, enunciate a “Principle of Uniformity” as Absolute Truth, and then gain additional knowledge from it by pure deduction.
Hume’s apparent paradoxes all derive from the fact that this is actually a rather crappy and over-rigid model of rationality. A good cure for it is Eliezer Yudkowsky’s writings about Bayesian reasoning. In a Bayesian universe, the “Principle of Uniformity” does not at any point have to be a given. It’s a conditional posit like any other, confirmed by instances of causality-based prediction, disconfirmed by failures.
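Here is the mechanics of that in miniature, with made-up numbers (nothing rides on the specific values):

```python
# Treat "nature is uniform" as a hypothesis like any other and update it
# by Bayes' rule on each successful causality-based prediction.
p_uniform = 0.5            # agnostic prior
p_hit_given_uniform = 0.9  # chance a prediction cashes out if nature is uniform
p_hit_given_chaos = 0.1    # chance it cashes out by luck in a patternless world

for _ in range(10):        # ten predictions in a row succeed
    hit = p_hit_given_uniform * p_uniform
    miss = p_hit_given_chaos * (1.0 - p_uniform)
    p_uniform = hit / (hit + miss)
print(p_uniform)           # ~0.9999999997: confirmed as a posit, never assumed
```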
Michael Brazier wrote “Do you endorse David Hume’s analysis of our expectation of regular patterns in nature as justified neither by demonstration nor by experience, but solely by habit and custom? If so, how does that escape circularity, since habit is formed only by experience?”
I have never thought deeply about it because it’s one of the issues where rough-and-ready demonstrations in the spirit of Samuel Johnson’s “I refute it thus” seem good enough for practical purposes. But it seems to me that a fancier argument in the spirit of Descartes’ “I think therefore I am” might do it too. I think, therefore I expect I am an element of a universe which has some regularity. (And therefore my prior probability for hypotheses should have some of the character of Occam’s razor.)
Of course, my “therefore” is an important and subtle step, and maybe I’m not thinking about it carefully enough, or I’m too set in my computational way of thinking about thought. But while Hume’s thought experiment of a universe without regularity is easy for me to swallow and take seriously, it’s not so easy for me to worry about being fooled by the trap of thinking while in a patternless universe, because it’s so implausible to me that there would be any thinkers within a patternless system. How do you construct a comprehensively arbitrary, irregularly chaotic thinking observer? (And once you think you have done so, how can you verify it?)
Subjectivity and prediction.
If your wife asks you how she looks in her new dress, you’d better be able to predict her subjective response before you answer.
@Michael Brazier
Information in the brain is all about which neurons communicate through which synapses. The information is in the connections, their strengths, and the excitations in flight.
Information is also negentropy (the negative of entropy). You do not need meaning to calculate entropy, nor to determine information.
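To illustrate that last point: Shannon entropy is computed from symbol frequencies alone, so a meaningful string and a meaningless anagram of it score identically. A minimal sketch:

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Entropy in bits per symbol, from symbol frequencies alone.
    What the symbols *mean* never enters the calculation."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy("attack at dawn"))   # meaningful
print(shannon_entropy("tcatt aa nkwad"))   # an anagram: same entropy
```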
I’ve always decided to use the equivalence principle when looking at issues like this. Basically, can you tell the difference?
The age-old question of “If a tree falls in a forest and nobody is around to hear it, does it make a sound?” is an occasionally-interesting thought experiment, and the answer doesn’t matter. Why? Because there is no way to tell the difference. Losing sleep over the question is crazy.
OTOH, if the tree falls in a forest and lands on power lines, people *will* care about losing heat, so the answer matters.
Some people have argued that the Mona Lisa was stolen and replaced by a fake. If you are a museum patron, why would you care? Would you have any way of telling the difference? If not, why let worry about looking at “a fake” spoil any of your enjoyment of the experience?
Garrett on 2014-01-09 at 17:03:42 said:
> Because there is no way to tell the difference. Losing sleep over the question is crazy.
I don’t agree. It is entirely worth worrying about whether there really is a reality outside of your head. Understanding the world helps you make better predictions, and making better predictions makes your life better, generally speaking. Whether an unobserved tree falling in the forest makes a sound matters insofar as, if it does not, it invalidates the model of the world that we, or I, depend on to decide what to do next. If it does not, then people who make loudspeakers or headphones have no idea what to do next.
@Winter
“You do not need meaning to calculate entropy, nor to determine information.”
I’m not so sure about this. If you know the exact microstate (probability in state space is a delta function) of an isolated deterministic system, entropy is always exactly zero (or negative infinity, depending on where your origin is and whether you want to be discrete or continuous). It is *your* uncertainty about the state of the system that determines what the entropy is. If you have cruder information about its state (manifesting as different ways of doing work on your system – the bridge between information-theoretic entropy and thermodynamic entropy), then the system is, to you, in a higher state of entropy than it is to someone who has more information (and, through the means of getting that information, more ways of getting work out of it).
It brings to mind E. T. Jaynes’ resolution of the Gibbs paradox – suppose you have a cylinder with a piston halfway down it. On either side of the piston is an equal volume of argon at equal pressure. If you were to remove the piston, the classical saying goes, there would be no change in entropy on the mixing of argon with argon, because the molecules of argon gas are “identical”.
(Only, they’re not. As we have known since the early 1900s, that argon could be in any number of different “internal states”. It can have different numbers of neutrons, etc. People try to weasel out of the Gibbs paradox by brushing the nature of identity under the rug of quantum physics and saying “beyond this point, all Xs or Ys are absolutely identical, case closed”.)
Suppose, unbeknownst to modern science, that argon is actually composed of two different types of argon – argon A and argon B. Now if it were all argon A on one side of the piston and argon B on the other, removing the piston causes a change in entropy. Jaynes made the point that it is a change in entropy *only to an observer that has a means of distinguishing between argon A and argon B*. That means has to manifest as some sort of mechanism that treats argon A differently than argon B. The mechanism he suggested in his paper was a special piston permeable only to argon A, which would, among other things, allow you to tell that it was different from argon B in the first place, and derive work from the partial pressure of argon B.
I’ve always loved this explanation – it removes some of the mystery behind what it means for a state to be indistinguishable, placing it back in the common-sense realm of a statement about our ability to distinguish between two states, and eliminating any absolute ontological claims about the state’s “true identity”.
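The bookkeeping behind the example is the standard ideal-gas mixing entropy; here is a minimal sketch (the mole count is an arbitrary choice):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mixing_entropy(n_molecules: float, distinguishable: bool) -> float:
    """Entropy change (J/K) on removing the piston between two equal
    volumes of gas at equal pressure.

    To an observer who can tell the gases apart, each of the 2N
    molecules doubles its accessible volume: dS = 2 N k ln 2.
    To an observer who can't, nothing about the description changes.
    """
    return 2 * n_molecules * K_B * math.log(2) if distinguishable else 0.0

N = 6.022e23  # one mole of argon on each side
print(mixing_entropy(N, distinguishable=True))   # ~11.5 J/K
print(mixing_entropy(N, distinguishable=False))  # 0.0, same physical event
```

Same piston, same gas; the entropy change depends on what the observer can resolve, which is exactly Jaynes’ point.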
>The age-old question of “If a tree falls in a forest and nobody is around to hear it, does it make a sound?” is an occasionally-interesting thought experiment, and the answer doesn’t matter. Why? Because there is no way to tell the difference. Losing sleep over the question is crazy.
I’ve always thought of it as hinging on the definition of “make a sound”.
If “sound” is “perception of the sound” (e.g. the act of hearing it, by “somebody” where “somebody” means any entity qualifying as able to “hear” it for the definition’s purposes), the answer is a plain “no”;
If “sound” is “the waves in air that would be so perceived if such a perceiving entity happened to be there” the answer is a plain “yes”.
The question sometimes appears profound only because it has hidden ambiguity; that’s the only meaning it has, in fact. Otherwise it’s just an exercise in unpacking definitions – not that that’s a useless thing.
“The problem with Hume’s argument is subtle, and has a lot to do with what we mean by “Principle”, especially with a capital P. In the intellectual world he inhabited, empiricism was seen as a way to bootstrap your way to the point where you could frog-jump out of the phenomenal world, enunciate a “Principle of Uniformity” as Absolute Truth, and then gain additional knowledge from it by pure deduction.”
What? No, hardly. Here’s what Hume actually said:
“What is possible can never be demonstrated to be false; and ’tis possible the course of nature may change, since we can conceive such a change. Nay, I will go farther, and assert, that he could not so much as prove by any probable arguments, that the future must be conformable to the past. All probable arguments are built on the supposition, that there is this conformity betwixt the future and the past, and therefore can never prove it. This conformity is a matter of fact, and if it must be proved, will admit of no proof but from experience. But our experience in the past can be a proof of nothing for the future, but upon a supposition, that there is a resemblance betwixt them. This therefore is a point, which can admit of no proof at all, and which we take for granted without any proof.”
The whole point of Hume’s sceptical treatises was to argue that bootstrapping one’s way out of the phenomenal world to an Absolute Truth is impossible.
“In a Bayesian universe, the “Principle of Uniformity” does not at any point have to be a given. It’s a conditional posit like any other, confirmed by instances of causality-based prediction, disconfirmed by failures.”
Again, what? Without the Principle of Uniformity, the assumption that “the future must be conformable to the past” as Hume put it, there can be no meaningful Bayesian priors, because there’s no data from which to form one.
>The whole point of Hume’s sceptical treatises was to argue that bootstrapping one’s way out of the phenomenal world to an Absolute Truth is impossible.
I understand that perfectly well. But in speaking of a “Principle of Uniformity” Hume was adopting the a-prioristic habit of speech and thought that he elsewhere argued against.
>the assumption that “the future must be conformable to the past”
My error, I had Principle of Uniformity bound to “like causes had like effects”.
>there can be no meaningful Bayesian priors, because there’s no data from which to form one.
Nonsense. Whether or not this “Principle” holds, you still have a phenomenal field that is changing with time. That’s enough data to start with. If you can use inference to bootstrap yourself to a causal model, then you can posit a Principle of Uniformity. If you can’t…well, it’s a good thing that’s not this universe.
“Information” has different meanings in different contexts.
As Winter pointed out, to physicists “information” is about order versus disorder. No humans required. We are free to make observations and analysis, but the universe is doing what it’s doing without regard for our opinion of it.
In common usage and understanding, “information” is essentially about distinction, and the object of distinction varies greatly depending upon the participants and context. One man’s information is another man’s nonsense.
As regards philosophy, “information” is often a defined premise around which is constructed an argument that potentially leads to enlightenment. In my experience, at the end of the day, you tend to wind up with more argument than enlightenment.
@ Jessica Boxer:
“It is entirely worth worrying about whether there really is a reality outside of your head. Understanding the world helps you make better predictions, helping you make better predictions makes your life better, generally speaking.”
I absolutely agree on the part about the importance of making predictions. My point was more that if you don’t have a way of distinguishing two or more states, getting angsty about the possible difference is wasted effort. A method of distinguishing would involve different predictions. Poster ‘ams’ provides a lot of formal technical methodology for where I was trying to go. (Thanks!)
@Sigivald:
> I’ve always thought of it as hinging on the definition of “make a sound”.
Shhhhhh! If you let people know that’s the answer, you lose the ability to appear profound when talking to others. I needed a response to postmodern deconstruction to fake depth of insight – don’t ruin my fun!
“If a tree falls in a forest and nobody is around to hear it, does it make a sound?”
If a photon takes the slit on the left, and nobody is around to see it, does it interfere with the possibility that it took the slit on the right?
@ams
“If you know the exact microstate (probability in state space is a delta function) of an isolated deterministic system, entropy is always exactly zero (or negative infinity, depending on where your origin is and whether you want to be discrete or continuous). It is *your* uncertainty about the state of the system that determines what the entropy is.”
This is Maxwell’s demon. It has been banished:
http://arxiv.org/abs/1110.4732
Entropy is about your ability to extract work from a system. To extract work (energy) from your perfectly known system, you will have to store the state of the system in enough detail. That means you have to take a memory of the right size and destroy its contents to overwrite it with the microstate information of your system. The loss of the information that was in the memory before you wiped it is exactly the entropy you remove from your system when you determine its microstate.
(actually, the argument is about a machine cycling between states, resetting the memory every time. Like the demon that operates a lid and decides which molecules to let through and which to stop)
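The quantitative version of that trade-off is the Landauer bound: erasing a bit at temperature T costs at least kT ln 2 of dissipated heat, which is also the most work a Szilard-style engine can extract per bit of microstate information it holds. A minimal sketch (standard numbers, nothing specific to the linked paper):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def szilard_work(bits: float, temperature: float) -> float:
    """Max work (J) extractable per `bits` of microstate information."""
    return bits * K_B * temperature * math.log(2)

def landauer_cost(bits: float, temperature: float) -> float:
    """Minimum heat (J) dissipated to erase `bits` of memory at T."""
    return bits * K_B * temperature * math.log(2)

T = 300.0  # room temperature, K
print(szilard_work(1, T))   # ~2.87e-21 J gained per bit known...
print(landauer_cost(1, T))  # ...exactly paid back when the bit is reset
```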
“Nonsense. Whether or not this “Principle” holds, you still have a phenomenal field that is changing with time. That’s enough data to start with. If you can use inference to bootstrap yourself to a causal model, then you can posit a Principle of Uniformity. If you can’t…well, it’s a good thing that’s not this universe.”
No; the Principle of Uniformity is what you use to reach a causal model from the phenomenal field. You can’t infer anything from what you have observed about anything you haven’t without assuming some version of it. (BTW, “the future will resemble the past” and “like causes produce like effects” are equivalent formulations of uniformity.)
An example which, I hope, will make the point clear. There is no way to assign a meaningful probability to the hypothesis “Earth will be visited by travelers from a parallel universe”, for the simple reason that no data we possess is evidence either for or against the existence of such travelers (or, for that matter, of parallel universes.) Therefore Bayesian reasoning can’t be applied to that hypothesis – even though one could construct conditional predictions of what would happen if travelers arrived from a parallel universe.
More: nobody in practice thinks of uniformity as a probable hypothesis which might be confirmed or disconfirmed by observations, after the manner of Bayesian reasoning. When uniformity seems to fail, when the future does not conform to the past, when unlike effects seem to follow from like causes, it’s universally assumed that some new cause has interfered to create the discrepancy. Nor is there, in fact, any conceivable observed phenomenon that would disconfirm uniformity (produce one, if you doubt me!)
@ Michael Brazier – ‘There is no way to assign a meaningful probability to the hypothesis “Earth will be visited by travelers from a parallel universe”, for the simple reason that no data we possess is evidence either for or against the existence of such travelers’
First, inserting the word “meaningful” into your statement ensures that any response will be consumed in debate rather quickly.
Second, you’re not being very imaginative in tackling this problem.
Cosmologists and theoretical physicists already have a foundation for positing the existence of multiple universes; consequently you could estimate a multiverse probability based upon their prior track record for similar such theories and validations.
In addition, we know of one example of intelligent life in the universe (us), and we have shown a proclivity for travel (both on and off the planet). So again, you could reasonably estimate that most highly evolved life forms will eventually become travelers.
The potential existence of other life forms in our universe can be estimated based upon astronomical observation of other sun-like stars having planetary systems somewhat like our own. If it happened once, it could reasonably happen again.
Next, you would have to tackle the size of the universe, factor in the limitation of the speed of light, and make some assumptions about convergence envelopes; but ultimately you could SWAG a calculation for two earth-like species eventually bumping into each other.
It may not be a particularly good initial prediction, but it’s a start, and it should improve with time.
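That procedure is essentially a Drake-style decomposition. A minimal sketch of the arithmetic (every factor and value below is a placeholder assumption, not data; the point is only that the estimate is constructed, and each factor can be revised as evidence arrives):

```python
# Placeholder factors for TomA's SWAG; none of these numbers is real.
factors = {
    "multiverse theories of this class pan out": 0.01,
    "a reachable parallel universe exists, given a multiverse": 0.1,
    "it hosts a technological species": 0.1,
    "that species becomes a cross-universe traveler": 0.05,
    "they reach Earth in a given millennium": 0.001,
}

p = 1.0
for name, value in factors.items():
    p *= value
print(f"SWAG: {p:.0e} per millennium")  # 5e-09: tiny, but not meaningless
```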
I notice I am confused.
Any philosophy/science/activity/thing that leads to a better causal relationship between one’s prediction and what actually occurs is desirable. The mental model each of us carries around in our meat computer body should be in a state of continuous refinement. The mental model I use now is different from the one I used yesterday, is different from yours, etc. When my mental model is at odds with reality, my model must change. Where mental models converge, there can be objectivity. Assuming you exist and experience reality in a similar way that I do, then we should be able to agree on statements like “How many apples are in the bowl right now?” I would call that an objective fact. A fact like “how pretty are the apples?” would be more difficult to precisely agree upon because of the difficulty in communicating my actual experience to you. The apple prettiness is still an objective fact, but the language I know is insufficient to convey a precise definition.
I am perfectly amenable to the possibility I am an insane tentacled mass of green slime in another galaxy who just thinks I’m human. But the mental model that creates doesn’t add any predictive power to the reality as I experience it. I can’t be sure anyone or anything exists, but I assume they/it does because my causal predictions are better.
I guess I just don’t understand the intense debate about the philosophical archetypes being discussed here. Still, it’s an interesting read and helps clarify my working definition of reality.
@Winter
“This is Maxwell’s demon. It has been banished”
It would be a Maxwell’s demon situation if the observer with more information had a means to acquire it without generating entropy, or if his information continued to be precise after random agitation from a heat bath letting heat into a system.
Thank you for the paper, btw.
I suppose the situation is like this:
If you have a system S in some state, and two observers A and B who start in an equivalent state of ignorance about its state: Observer A can engage in some process where he gains more information about the state (correlating it with memory that starts in a known state, or erasing memory irreversibly and generating entropy).
Observer A, because he knows more about system S, can now do things that observer B can’t, and would perceive as a second law violation. (I wasn’t claiming that observer A can engage in some cycle whereby his total information about the state of the universe increases. I was just saying that if he had information about S, it is in a lower entropy state for him, given his information.)
Meanwhile, I suppose he can amaze observer B, and clandestinely ship his dirty hard drives off to the dark side of Pluto, where they can be reset for energy cost → 0.
In the Jaynes example, the observer that knows about the difference between argon A and argon B has more information than one that doesn’t. His initial knowledge of the system state looks like (argon A – left, argon B – right) where the other observer’s knowledge of the state looks like (1/2 argon A or B left, 1/2 argon A or B right), taking up a larger volume in phase-space. So observer A can do work that observer B can’t as long as he preserves his sharper knowledge of the system state.
@ESR
I don’t understand what this term subjectivity is buying us. In the end, minds are made out of the same stuff as everything else in the universe and are therefore subject to the same understanding as anything else in the universe, modulo our ability to model them in sufficient detail. I get that we do want to hold onto the notion that brains are pretty far from our current ability to model, and therefore what people say about their internal states conveys information that we can’t (in practice) get any other way, but I don’t see any reason to use a special word for this. We already have the word reductionism to refer to our need to build models of reality at different levels of abstraction. Why is the need to model minds as minds apart from their atoms different from the need to model airplanes as airfoils using fluid dynamics apart from their atoms?
@Ams
The loss of entropy by the system and the gain by observer A is exactly the extra information A stores. Shipping this information to Pluto for erasure just means you run your engine in a heat bath of ~4 K.
Thermodynamics has worked flawlessly for well over a century. And information as negentropy seems to “work” from steam engines to black holes.
@Michael Brazier:
Funny man! (Any student of Hume would know that the crux of his argument is that this request is impossible.)
The whole “induction problem” nonsense comes about from the attempt (by weaklings) to justify science logically, when, as anyone who thinks about the problem for mere *seconds* can see, that is impossible, because logic does not allow you to go from weaker to stronger. All you can do is keep reframing the problem over and over in logical terms in an instinctive way. It is impossible to “close the loop”.
Luckily, applying logic to false assumptions isn’t harmful. In fact, it is the basis of all reasoning. For example, the belief in “objects” is logically absurd and yet without it it is impossible to get far in thought. “Systems” do not actually exist. There is only one system – the universe. But without believing in systems you will be a lousy engineer. Using logic to reframe a problem over and over like this is nothing more than a sign of weakness because it shows that nobody trusts your instincts.
@ Roger Philips – “to justify science”
The essence of scientific thought has been a memetic trait in our species for at least hundreds of thousands of years, and was only formalized as a thought category named “science” during the last few thousand years. It is feasible to explain this evolutionary development in terms of the advantages it confers on our survive-and-thrive imperative, but justification is not a relevant term in this context. For example, binocular vision is an evolved trait endemic to our species, and it simply exists as a feature of our anatomy. Attempting to justify its existence is an empty exercise because no conceptual thought manipulation can undo or alter millions of years of evolution.
TomA, please don’t quote three words from my post. It’s obvious that you didn’t understand what I wrote. E.g.:
>It is feasible to explain this evolutionary development in terms of the advantages it confers on our survive-and-thrive imperative, but justification is not a relevant term in this context.
The whole point of the “induction problem” is that inductive reasoning cannot be justified logically. What’s not “relevant” is you stating that it’s feasible to explain the evolutionary development of thought, since that’s exactly what I just started to do! The belief in objects exists only because all the beings who *correctly* rejected belief in objects are all dead.
>The belief in objects exists only because all the beings who *correctly* rejected belief in objects are all dead.
Yes. But you nevertheless have (or at least claim to have) this a-prioristic belief that objects don’t exist.
Think again.
Hint: All theory-building is motivated.
@Roger Phillips on 2014-01-12 at 20:44:55 said:
>The whole point of the “induction problem” is that inductive reasoning cannot be justified logically.
So I don’t think this is “the induction problem” at all. I think this is the “mathematics” problem, which is to say the idea that we can reliably know everything we need to know from a set of axioms and logical rules. I’m sure we are all aware that the 20th century was the century when these high Victorian notions came crashing down, with Gödel, quantum mechanics, and all sorts of annoying little things like the halting problem.
Rather in the real world a better approach is an evolutionary approach to knowledge, which is to say, our body of knowledge evolves over time, with feedback loops that improve it. Especially when we focus on, what to me is the only important measure of any knowledge, namely its ability to predict the future.
Evolutionary systems of course have problems. But they are also very effective if our goal is pragmatism rather than complete theoretical perfection.
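For what it’s worth, that feedback loop fits in a few lines: score rival models by how much probability they assigned to what actually happened, and retain the better predictor. A toy sketch (the models and forecasts are made up):

```python
import math

def log_score(predictions, outcomes):
    """Average log-probability a model assigned to what actually happened.
    Higher is better: 'ability to predict the future', made numerical."""
    return sum(math.log(p if o else 1 - p)
               for p, o in zip(predictions, outcomes)) / len(outcomes)

def select(models, predictions, outcomes):
    """One turn of the feedback loop: keep the model that predicted best."""
    return max(models, key=lambda m: log_score(predictions[m], outcomes))

outcomes = [True, True, False, True]
predictions = {
    "theory_A": [0.9, 0.8, 0.3, 0.7],  # a committed, mostly-right theory
    "theory_B": [0.5, 0.5, 0.5, 0.5],  # the maximally noncommittal rival
}
print(select(["theory_A", "theory_B"], predictions, outcomes))  # theory_A
```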
@Eric
No, you read again (this time make some effort to understand what was written). Having an a priori belief is not a problem for me because I trust my instincts (including the instinct for logic). I don’t need to “close the loop”. That is what weaklings feel the need to do, because they doubt their instincts and want some immutable God (logic) on their side when they take their ideas to others. In fact, logic is just a tool and leads those with poor instincts even further astray (since it amplifies the magnitude of their stupidity). You see the same insecurity with all the fuss over the (supposed) failure of Hilbert’s program. In the end it doesn’t matter because mathematics is ultimately based on instinct, and closing the loop would prove nothing anyway because the whole idea of closing the loop is itself equally “shaky”.
In any case, the belief that no objects exist is not “a priori” because it is based upon observations. It isn’t reasoned using pure logic. If anything is “a priori” (and really, no such thing exists) it is the belief that objects do exist, and it is this a priori belief combined with observations that leads to the belief that objects do not exist. Whether I believe in objects (and the vast, vast majority of the time I do) is different at different times. I have no problem with inconsistency either, because it is unavoidable, since the whole object-based belief system that humans rely on is inconsistent.
@Roger Philips
“You see the same insecurity with all the fuss over the (supposed) failure of Hilbert’s program. In the end it doesn’t matter because mathematics is ultimately based on instinct, and closing the loop would prove nothing anyway because the whole idea of closing the loop is itself equally “shaky”.”
The problem with the failure of the Hilbert program was not at all related to the empirical sciences (objects). The problem was that logic itself was shown to be inconsistent and/or incomplete.
@Roger Philips
“In any case, the belief that no objects exist is not “a priori” because it is based upon observations.”
I do not understand what definition of object you are referring to.
Let us say people define “object” empirically in youth as “any configuration of matter I can manipulate in whole” and then extend the definition to configurations of matter that have some aspects of persistence and well-defined spatial boundaries. With such a simple strategy, neither congenital instincts nor axioms of “objects” are needed. Just plain simple learning algorithms.
If you want to go for instincts, I would suggest the concepts of “face” and “animated”.
@Winter
You didn’t read what I said (for comprehension at least). The point of Hilbert’s program was to close the loop on mathematics to avoid foundational errors. In the end it turned out it was unnecessary and that the instincts of mathematicians were fine. I didn’t say it was related to the empirical sciences (at least, not in the sense that you mean – and of course it IS related to the empirical sciences). What I said was that they were showing the same basic insecurity as the people fussing over the induction problem. How could you have missed that? At no point did I say any of this, it’s just some nonsense that you *imagined* me saying.
Your definition is consistent with what I said. Indeed, this is close to where you would end up once you forget about the fact that objects don’t exist and get on with things. But then you try quibbling (weaklings usually focus on the most irrelevant details and are usually wrong) over the term “instinct”. As though if you had a “learning algorithm” (more weakling jargon) built in that enabled you to discern objects that it wouldn’t count as an instinct for discerning objects. Except in a part of the universe without any objects in it, which is completely irrelevant to all known life. I can already tell that talking to you is going to be an exercise in rearranging the terminology.
@Roger Philips
I will gladly agree that I misread your comments. That often happens. I also am aware that I have “weakling” tendencies in my reasoning.
First, to get back to my previous comment: I was not so much criticizing you as trying to understand your remark that “objects do not exist”. That depends on your definition of “object”. My example of a definition based on manipulation was just to show that there are other possible definitions under which “objects do not exist” does not make sense.
My only question was, what is your definition such that this makes sense?
@Roger Philips
“The point of Hilbert’s program was to close the loop on mathematics to avoid foundational errors. In the end it turned out it was unnecessary and that the instincts of mathematicians were fine.”
Here I disagree with “it was unnecessary”. The proofs of Gödel (and Turing) showed, at the time, unanticipated foundational aspects of logic and mathematics. Hilbert’s program was fundamentally sound, and the results would have been profound whether they were positive or negative.
@Roger Philips
“But then you try quibbling (weaklings usually focus on the most irrelevant details and are usually wrong) over the term “instinct”.”
In my experience, the use of the word “instinct” is a good indicator of a fundamental error in reasoning. The use of this word is nearly always unnecessary, if not damaging, to the argument. I will always try to rephrase the argument without the use of “instinct” and see what is left.
@Roger Philips
“As though if you had a “learning algorithm” (more weakling jargon) built in that enabled you to discern objects that it wouldn’t count as an instinct for discerning objects.”
In this case, I am perfectly aware of the fact that the “learning algorithm” is part of our instinctive behavior. My point is that it would be an instinct related to behavior of the hands and linked to cognition and language (these have intricate connections). The concept of “Object” is only derived from this. It is entirely conceivable that many cultures do not have a unified “object” class like we do.
@Winter
An object is a whole/entirety, and the only true whole is the universe. Now people don’t deal with true wholes – since they don’t exist – but these “fuzzy” objects are defined within a framework that takes objects for granted, and revolves ENTIRELY around a belief in wholes. Set theory (in other words: all mathematics) presupposes objects.
Closing the loop was unnecessary. Gödel’s work etc. are all interesting results, but the main impact was for people to stop wasting their time trying to close the loop. Philosophically they’re overblown because of (a) widespread misunderstanding of the results by non-mathematicians and (b) obsession with closing the loop.
No, in your experience quibbling over “instinct” is a good way to maintain your existing beliefs and muddle the conversation. You have no interest in the validity of arguments, as demonstrated by your steadfast refusal to respond to the point that was being made, instead trying to drag the argument down from the conceptual realm into the linguistic realm (the realm of base, petty argumentation). Naturally in your next paragraph you declare that these two realms are intricately linked (true!), as though that justified your behavior (not true).
Even DOGS recognise objects. Are they consciously aware of it? How do they deal with them linguistically? Do they recognise all the same objects as us? Who gives a shit? It’s of no consequence to my point because in the end they are dealing with wholes of various kinds regardless of how that behavior is generated, and if they don’t manage to abstract the concept of an object then they will be completely unable to apply logic. Show me a culture far along enough in development to have a working system of logic to whom the idea of an object is alien. You just say things like “it’s conceivable that” in a lame attempt to introduce a counterexample. A great many things are “conceivable”, but most are bullshit. Humans are dependent upon discerning objects and the dependence only gets worse the more complex your undertakings. Which is why people who are doing very little can afford to quibble over maybe there is some obscure culture of losers who haven’t figured out the concept of objects yet – a culture which will be completely irrelevant to world history until it gets over this hurdle but somehow seems relevant to you.
@Roger Philips
“You have no interest in the validity of arguments, as demonstrated by your steadfast refusal to respond to the point that was being made, instead trying to drag the argument down from the conceptual realm into the linguistic realm (the realm of base, petty argumentation).”
Animal behavior, linguistics, and language acquisition are burgeoning fields of investigation that produce a lot of interesting results. That random philosophers look down on them tells us more about these philosophers than about these fields.
I admit that I have a pet peeve with philosophers who make sweeping claims about human versus animal cognition without even a basic knowledge about the subject.
For instance, your statement “Even DOGS recognise objects.” seems to ignore the fact that dogs might not have a unified concept of object according to your definition of “An object is a whole/entirety, and the only true whole is the universe.”. A definition of “object” derived from manipulation in humans would suggest we might look to a dog’s definition derived from its mouth/jaws.
When did I say I looked down on them? This is how morons operate: make the same mistake over and over again (imagining I said things I never said), and NEVER does it occur to them that they should stop. I repeat: you have scarcely understood a WORD I’ve said, but this doesn’t deter you in the least from continuing with your inane criticisms. Linguistics is a fine field. What does this have to do with dragging conversations down to the linguistic level – what I actually said? Do you even know what that means? It means that the purpose of a conversation is for me to write little marks (or make sounds) to attempt to create the correct concepts in your mind, and for you to try to get them there. What imbeciles do is spend all their time arguing over terminology instead of racing to get into the conceptual realm where the REAL thinking happens. That explains why they’re uniformly boring people. Linguistics etc can all be recreated on the conceptual level, and that is the correct place to discuss them.
That you think I’m making a “sweeping claim” just shows that you’re trying really hard not to listen to me and to avoid building the concepts in your mind. Hence all this gibberish. I can even tell you I don’t give a shit, and that it’s irrelevant, and you STILL keep bringing back your moronic talk about dogs maybe not perceiving objects as a “unified class” in spite of that being TOTALLY FUCKING BESIDE THE POINT. My whole point (and undoubtedly, like everything else I’ve said, it will go straight through your empty head) is that life goes nowhere without the affirmation of false or unjustified beliefs. Just one reason why your NEGATING BULLSHIT about “what dogs really think” is fucking pointless, because for the purposes of this conversation nobody cares what dogs think an object is – if dogs indeed THINK at all. I know what *I* think an object is, and dogs clearly recognise objects, or else they wouldn’t be able to fetch them for me. Since when the dog fetches the ball for me it is operating under my control, I care no more for how it “perceives” objects than whether some random part of my CNS “perceives” objects, so long as it can recognise them adequately for my purposes. Nor do I care that when I was a baby maybe I didn’t understand the concept of a “whole”. What I care about is that all these creatures depend on recognition of things that *I* would recognise as objects – having abstracted the idea out for the purposes of this and other conversations. Ultimately if dogs want to start doing logic they’ll have to abstract the concept of a whole too. Then they will realise (the smarter ones, at least) that there are no wholes and that consequently their system of logic is based upon a falsehood BUT WORKS ANYWAY (and then we get why this is so: instincts). Do you have anything to say about this? Or are you just going to keep having your weird little parallel conversation with an imagined version of me that is talking about something completely different?
The statement “DOGS recognise objects” has MANY possible meanings, and you are choosing the wrong one. This is probably because you don’t understand that to criticise first you have to understand what the other person means. And you have comprehensively failed to understand what I am saying, which is also probably why you haven’t said a DAMN THING about the points I was making. I don’t care how dogs think. Do dogs even deal with “concepts” at all? It’s irrelevant, and that you keep bringing this up shows you have NO IDEA what I’m talking about. Here’s an illustration of what you sound like:
Real thinking human being> Siri recognises words that it translates into queries and commands.
Winter> You “SEEM TO BE ASSUMING” that Siri has a “unified concept” of a word! In fact, it is “conceivable” that Siri doesn’t have any concept of a word. Perhaps it operates at the granularity of n-grams or whole sentences! Even if it operates with word granularity does that REALLY mean that it has a “unified concept” of a word? Do Androids really dream of electric sheep???
@Roger Philips
What I do get is that you say there is only one whole, one object, and that is the universe (the universe being but an illusion scribbled on the inside of the cosmological horizon).
All other “objects” are just practical “illusions” that benefit the user in their struggle for survival.
That sounds like you are mixing a platonic definition with a utilitarian/pragmatic one. Confusing.
If there ought to be any real confusion it’s that I take the universe to mean EVERYTHING. So, for example, the term “multiverse” is nonsensical in my language. When you say “universe” you’re talking about something from cosmology. In any case, the definition of an object as a whole is a “pragmatic” one. Further, all definitions are platonic... probably because they’re ALL based on objects. Okay, okay, in spite of your writing being riddled with errors I know what you MEAN here – I was just imitating you for my own amusement.
I am not mixing definitions of object. The variable is whether or not you think these things can exist outside your mind. Or, more precisely, whether you can justify their existence within an *empirical* framework where pretty much all concepts are built up out of objects. YOU brought up all these alternate definitions of object, getting us off track and confusing everything. Even the point about objects existing or not was a mere footnote in my original post.
To repeat my last, major objection in a different form: if everything has to cash out to predictions:
1) what if the observation of the event that was predicted is wrong?
2) this sounds like an approach to practical problem-solving: when solving a practical problem, we are predicting an outcome we WANT to reach, because that makes us or someone better off, therefore the observation of the predicted event is a non-problem. I.e. when I want to figure out how to bake non-sticky bread (a problem I am actually struggling with), there is no difference between a bread that is really non-sticky and one that merely appears to be non-sticky: the appearance in itself satisfies the goal.
But in more theoretical, less practical fields, where there is no predefined goal, I think everything cashing out to the prediction of an observed event, which can be a mere appearance, is just not sound enough.
Practicality is about knowing what outcome we want. Thus science is not always practical, often we just investigate and accept whatever we can find. And philosophy is often even less practical.
An alternative is the coherence theory of truth. This is broader and more flexible. Usually we want to solve practical problems; therefore, usually we want theories that generate predictions coherent with the observation of the practical goal we wanted to reach. But not always. Sometimes we want observations that conform to the accepted body of science, and thus exclude observational errors; sometimes we are so deep in theory that all we can do is make our theories coherent with other theories; and sometimes we have no theory at all, we just tinker, which means basically generating randomized observations until we have a practically suitable one.
I see this as an axis. At one end, ESR is the typical practical, problem-solving thinker, who is basically not interested in problems that don’t have a predefined desirable outcome; from this it follows that every problem must be reduced to observable differences, or else it cannot ever, in principle, be practically useful (this is important: some problems are not practical at the moment, but problems that have no observational differences cannot ever be made practical; that would be a logical contradiction). At the other extreme is Ed Feser with his Scholastic anti-practicality, and I am trying to define the middle, the mean that connects the two.
>1) what if the observation of the event that was predicted is wrong?
Eventually you find that out, because the theory that it mis-confirms makes too many other disconfirmed predictions.
>An alternative is the coherence theory of truth.
Which is untenable. This is old ground; it’s the trap Rudolf Carnap and much of the Vienna Circle fell into. Which is a shame, because they were right about so much else.