If science had them, there would be no mileage in the philosophical project, any more than there is currently mileage in trying to found dualism on the basis that matter can’t think.
I just went to reply to you, but after reading back over what was said I’m seeing a different context.
My stupid comment was about popularity, not about usefulness. I was rambling about general public opinion on belief systems, not what the topic was really about: whether philosophy could move something forward.
We have prima facie reason to accept both of these claims:
1. A list of all the objective, third-person, physical facts about the world does not miss any facts about the world.
2. Which specific qualia I’m experiencing is functionally/causally underdetermined; i.e., there doesn’t seem even in principle to be any physically exhaustive reason redness feels exactly as it does, as opposed to feeling like some alien color.
1 is physicalism; 2 is the hard problem. Giving up 1 means endorsing dualism or idealism. Giving up 2 means endorsing reductive or eliminative physicalism. All of these options are unpalatable. Reductionism without eliminating anything seems off the table, since the conceivability of zombies seems likely to be here to stay, to remain as an ‘explanatory gap.’ But eliminativism about qualia means completely overturning our assumption that whatever’s going on when we speak of ‘consciousness’ involves apprehending certain facts about mind. I think this last option is the least terrible out of a set of extremely terrible options; but I don’t think the eliminative answer to this problem is obvious, and I don’t think people who endorse other solutions are automatically crazy or unreasonable.
That said, the problem is in some ways just academic. Very few dualists these days think that mind isn’t perfectly causally correlated with matter. (They might think this correlation is an inexplicable brute fact, but fact it remains.) So none of the important work Eliezer is doing here depends on monism. Monism just simplifies matters a great deal, since it eliminates the worry that the metaphysical gap might re-introduce an epistemic gap into our model.
Which specific qualia I’m experiencing is functionally/causally underdetermined; i.e., there doesn’t seem even in principle to be any physically exhaustive reason redness feels exactly as it does, as opposed to feeling like some alien color.
If I knew how the brain worked in sufficient detail, I think I’d be able to explain why this was wrong; I’d have a theory that would predict what qualia a brain experiences based on its structure (or whatever). No, I don’t know what the theory is, but I’m pretty confident that there is one.
Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are experiences causally determined by non-experiences. How would examining anything about the non-experiences tell us that the experiences exist, or what particular way those experiences feel?
It sounds like you’re asking me to do what I just asked you to do. I don’t know what experiences are, except by listing synonyms or by acts of brute ostension — hey, check out that pain! look at that splotch of redness! — so if I could taboo them away, it would mean I’d already solved the hard problem. This may be an error mode of ‘tabooing’ itself; that decision procedure, applied to our most primitive and generic categories (try tabooing ‘existence’ or ‘feature’), seems to either yield uninformative lists of examples, implausible eliminativisms (what would a world without experience, without existence, or without features, look like?), or circular definitions.
But what happens when we try to taboo a term is just more introspective data; it doesn’t give us any infallible decision procedure, on its own, for what conclusion we should draw from problem cases. To assert ‘if you can’t taboo it, then it’s meaningless!’, for example, is itself to commit yourself to a highly speculative philosophical and semantic hypothesis.
Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are experiences causally determined by non-experiences. How would examining anything about the non-experiences tell us that the experiences exist, or what particular way those experiences feel?
Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are computations causally determined by non-computations. How would examining anything about the non-computations tell us that the computations exist, or what particular functions those computations are computing?
My initial response is that any physical interaction in which the state of one thing differentially tracks the states of another can be modeled as a computation. Is your suggestion that an analogous response would solve the Hard Problem, i.e., are you endorsing panpsychism (‘everything is literally conscious’)?
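As a toy illustration of that first claim (all names here are invented for the sketch): any physical state that reliably tracks another can be read as computing a function, once we fix an interpretation mapping physical states onto abstract values.

```python
# Sketch: a "thermometer" whose column height lawfully tracks temperature.
# Under the decoding map `interpret`, the physical process can be read as
# computing the identity function on temperatures. (Numbers are hypothetical.)

def thermometer_height(temp_c: float) -> float:
    """Physical regularity: column height in mm tracks temperature."""
    return 100.0 + 1.8 * temp_c

def interpret(height_mm: float) -> float:
    """Interpretation mapping: decode the physical state into a number."""
    return (height_mm - 100.0) / 1.8

# Composing the physical dynamics with the interpretation yields a
# computation -- here, the identity function on the input temperature.
assert abs(interpret(thermometer_height(21.5)) - 21.5) < 1e-9
```

The philosophical point is that the computation comes cheap: all the work is done by the tracking relation plus our choice of `interpret`, which is why this definition threatens to cover almost any pair of causally coupled systems.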
Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are living things causally determined by non-living things. How would examining anything about the non-living things tell us that the living things exist, or what particular way those living things are alive?
“Explain how consciousness arises from non-conscious matter” doesn’t seem any more of an impossible problem than “Explain how life arises from non-living matter”.
We can define and analyze ‘life’ without any reference to life: as high-fidelity self-replicating macromolecules that interact with their environments to assemble and direct highly responsive cellular containers around themselves. There doesn’t seem to be anything missing from our ordinary notion of life here; and anything that is missing could easily be added by sketching out more physical details.
What might a purely physical definition of consciousness that made no appeal to mental concepts look like? How could we generate first-person facts from a complex of third-person facts?
What you described as computation could apply to literally any two things in the same causal universe. But you meant two things that track each other much more tightly than usual. It may be that a rock is literally conscious, but if so, then not very much so. So little that it really does not matter at all. Humans are much more conscious because they reflect the world much more, reflect themselves much more, and [insert solution to Hard Problem here].
It may be that a rock is literally conscious, but if so, then not very much so. So little that it really does not matter at all.
I dunno. I think if rocks are even a little bit conscious, that’s pretty freaky, and I’d like to know about it. I’d certainly like to hear more about what they’re conscious of. Are they happy? Can I alter them in some way that will maximize their experiential well-being? Given how many more rocks there are than humans, it could end up being the case that our moral algorithm is dominated by rearranging pebbles on the beach.
Humans are much more conscious because they reflect the world much more, reflect themselves much more, and [insert solution to Hard Problem here].
Hah. Luckily, true panpsychism dissolves the Hard Problem. You don’t need to account for mind in terms of non-mind, because there isn’t any non-mind to be found.
I think if rocks are even a little bit conscious, that’s pretty freaky, and I’d like to know about it.
I meant, I’m pretty sure that rocks are not conscious. It’s just that the best way I’m able to express what I mean by “consciousness” may end up apparently including rocks, without me really claiming that rocks are conscious like humans are—in the same way that your definition of computation literally includes air, but you’re not really talking about air.
Luckily, true panpsychism dissolves the Hard Problem. You don’t need to account for mind in terms of non-mind, because there isn’t any non-mind to be found.
I don’t understand this. How would saying “all is Mind” explain why qualia feel the way they do?
I’m pretty sure that rocks are not conscious. It’s just that the best way I’m able to express what I mean by “consciousness” may end up apparently including rocks, without me really claiming that rocks are conscious like humans are—in the same way that your definition of computation literally includes air, but you’re not really talking about air.
This still doesn’t really specify what your view is. Your view may be that strictly speaking nothing is conscious, but in the looser sense in which we are conscious, anything could be modeled as conscious with equal warrant. This view is a polite version of eliminativism.
Or your view may be that strictly speaking everything is conscious, but in the looser sense in which we prefer to single out human-style consciousness, we can bracket the consciousness of rocks. In that case, I’d want to hear about just what kind of consciousness rocks have. If dust specks are themselves moral patients, this could throw an interesting wrench into the ‘dust specks vs. torture’ debate. This is panpsychism.
Or maybe your view is that rocks are almost conscious, that there’s some sort of Consciousness Gap that the world crosses, Leibniz-style. In that case, I’d want an explanation of what it means for something to almost be conscious, and how you could incrementally build up to Consciousness Proper.
I don’t understand this. How would saying “all is Mind” explain why qualia feel the way they do?
The Hard Problem is not “Give a reductive account of Mind!” It’s “Explain how Mind could arise from a purely non-mental foundation!” Idealism and panpsychism dissolve the problem by denying that the foundation is non-mental; and eliminativism dissolves the problem by denying that there’s such a thing as “Mind” in the first place.
Can you give me an example of how, even in principle, this would work?
In general, I would suggest looking first at sensory experiences that vary among humans; there’s already enough interesting material there without wondering whether there are even further differences. Can we explain enough interesting things about the difference between normal hearing and perfect pitch without talking about qualia?
Once we’ve done that, are we still interested in discussing qualia in color?
So your argument is “Doing arithmetic requires consciousness; and we can tell that something is doing arithmetic by looking at its hardware; so we can tell with certainty by looking at certain hardware states that the hardware is sentient”?
Well, it’s certainly possible to do arithmetic without consciousness; I’m pretty sure an abacus isn’t conscious. But there should be a way to look at a clump of matter and tell whether it is conscious or not (at least as well as we can tell the difference between a clump of matter that is alive and a clump of matter that isn’t).
So your argument is “We have explained some things physically before, therefore we can explain consciousness physically”?
It’s a bit stronger than that: we have explained basically everything physically, including every other example of anything that was said to be impossible to explain physically. The only difference between “explaining the difference between conscious matter and non-conscious matter” and “explaining the difference between living and non-living matter” is that we don’t yet know how to do the former.
I think we’re hitting a “one man’s modus ponens is another man’s modus tollens” here. Physicalism implies that the “hard problem of consciousness” is solvable; physicalism is true; therefore the hard problem of consciousness has a solution. That’s the simplest form of my argument.
Basically, I think that the evidence in favor of physicalism is a lot stronger than the evidence that the hard problem of consciousness isn’t solvable, but if you disagree I don’t think I can persuade you otherwise.
No abacus can do arithmetic. An abacus just sits there.
No backhoe can excavate. A backhoe just sits there.
A trained agent can use an abacus to do arithmetic, just as one can use a backhoe to excavate. Can you define “do arithmetic” in such a manner that it is at least as easy to prove that arithmetic has been done as it is to prove that excavation has been done?
I’ve watched mine for several hours, and it hasn’t.
No, you haven’t. (p=0.9)
Have you observed a calculator doing arithmetic? What would it look like?
It could look like an electronic object with a plastic shell that starts with “(23 + 54) / (47 * 12 + 76) + 1093” on the screen, and some small amount of time after an apple falls from a tree and hits the “Enter” button, some number appears on the screen below the earlier input, beginning with “1093.1”, with some other decimal digits following.
If the above doesn’t qualify as the calculator doing “arithmetic” then you’re just using the word in a way that is not just contrary to common usage but also a terrible way to carve reality.
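For what it’s worth, the arithmetic in the example can be checked directly; a few lines of Python (purely for illustration) reproduce the decimal value quoted later in the thread:

```python
# Checking the expression shown on the calculator's screen.
result = (23 + 54) / (47 * 12 + 76) + 1093
# 77 / 640 = 0.1203125, so the display should read 1093.1203125
assert abs(result - 1093.1203125) < 1e-12
```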
I didn’t do that immediately prior to posting, but I have watched my calculator for a cumulative period of time exceeding several hours, and it has never done arithmetic. I have done arithmetic using said calculator, but that is precisely the point I was trying to make.
Does every device which looks like that do arithmetic, or only devices which could in principle be used to calculate a large number of outcomes? What about an electronic device that only alternates between displaying “(23 + 54) / (47 * 12 + 76) + 1093” and “1093.1203125” (or “1093.15d285805de42”) and does nothing else?
Does a bucket do arithmetic because the number of pebbles which fall into the bucket, minus the number of pebbles which fall out of the bucket, is equal to the number of pebbles in the bucket? Or does the shepherd do arithmetic using the bucket as a tool?
I didn’t do that immediately prior to posting, but I have watched my calculator for a cumulative period of time exceeding several hours, and it has never done arithmetic. I have done arithmetic using said calculator, but that is precisely the point I was trying to make.
And I would make one of the following claims:
Your calculator has done arithmetic, or
You are using your calculator incorrectly (it’s not a paperweight!), or
There is a usage of ‘arithmetic’ here that is a highly misleading way to carve reality.
Does every device which looks like that do arithmetic, or only devices which could in principle be used to calculate a large number of outcomes?
In the same way that a cardboard cutout of Decius with a speech bubble saying “5” over its head would not be said to be doing arithmetic, a device that looks like a calculator but just displays one outcome would not be said to be doing arithmetic.
I’m not sure how ‘large’ the number of outcomes must be, precisely. I can imagine particularly intelligent monkeys or particularly young children being legitimately described as doing rudimentary arithmetic despite being somewhat limited in their capability.
Does a bucket do arithmetic because the number of pebbles which fall into the bucket, minus the number of pebbles which fall out of the bucket, is equal to the number of pebbles in the bucket? Or does the shepherd do arithmetic using the bucket as a tool?
It would seem like in this case we can point to the system and say that system is doing arithmetic. The shepherd (or the shepherd’s boss) has arranged the system so that the arithmetic algorithm is somewhat messily distributed in that way. Perhaps more interesting is the case where the bucket and pebble system has been enhanced with a piece of fabric which is disrupted by passing sheep, knocking in pebbles reliably, one each time. That system can certainly be said to be “counting the damn sheep”, particularly since it so easily generalizes to counting other stuff that walks past.
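The enhanced bucket-and-pebble system is simple enough to model directly; here is a minimal sketch (class and method names are invented for illustration):

```python
class PebbleBucket:
    """Toy model of the fabric-triggered pebble counter described above."""

    def __init__(self) -> None:
        self.pebbles = 0

    def sheep_passes(self) -> None:
        # The fabric is disrupted by a passing sheep and reliably
        # knocks exactly one pebble into the bucket.
        self.pebbles += 1

    def count(self) -> int:
        # Anyone (shepherd or not) can read off the tally.
        return self.pebbles

bucket = PebbleBucket()
for _ in range(7):  # seven sheep walk past the fabric
    bucket.sheep_passes()
print(bucket.count())  # 7
```

Whether the ‘arithmetic’ here is done by the bucket, the fabric, or whoever set the system up is exactly the question under dispute; the code only shows that the tallying itself needs no agent in the loop.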
But now allow me to abandon my rather strong notions that “calculators multiply stuff and mechanical sheep counters count sheep”. I’m curious just what the important abstract feature of the universe is that you are trying to highlight as the core feature of ‘arithmetic’. It seems to be something to do with active intent by a generally intelligent agent? So that whenever adding or multiplying is done we need to track down what caused said adding or multiplication to be done, tracing the causal chain back to something that qualifies as having ‘intention’ and say that the ‘arithmetic’ is being done by that agent? (Please correct me if I’m wrong here, this is just my best effort to resolve your usage into something that makes sense to me!)
It’s not a feature of arithmetic, it’s a feature of doing.
I attribute ‘doing’ an action to the user of the tool, not to the tool. It is a rare case in which I treat an artifact as an agent; if the mechanical sheep counter provided some signal to indicate the number or presence of sheep outside the fence, I would call it a machine that counts sheep. If it were simply a mechanical system that moved pebbles into and out of a bucket, I would say that counting the sheep is done by the person who looks in the bucket.
If a calculator does arithmetic, do the components of the calculator do arithmetic, or only the calculator as a whole? Or is it the larger system of which the calculator is a part that does arithmetic?
I’m still looking for a definition of ‘arithmetic’ which allows me to be as sure about whether arithmetic has been done as I am sure about whether excavation has been done.
Well, you do have to press certain buttons for it to happen. ;) And it looks like voltages changing inside an integrated circuit that lead to changes in a display of some kind. Anyway, if you insist on an example of something that “does arithmetic” without any human intervention whatsoever, I can point to the arithmetic logic unit inside a plugged-in arcade machine in attract mode.
Can you define “do arithmetic” in such a manner that it is at least as easy to prove that arithmetic has been done as it is to prove that excavation has been done?
That question is still somewhat important to the discussion. I can’t define arithmetic well enough to determine whether it has occurred in all cases, but ‘changes on a display’ is clearly neither necessary nor sufficient.
Well, I’d say that a system is doing arithmetic if it has behavior that looks like it corresponds with the mathematical functions that define arithmetic. In other words, it takes as inputs things that are representations of such things as “2”, “3”, and “+” and returns an output that looks like “5”. In an arithmetic logic unit, the inputs and outputs that represent numbers and operations are voltages. It’s extremely difficult, but it is possible to use a microscopic probe to measure the internal voltages in an integrated circuit as it operates. (Mostly, we know what’s going on inside a chip by far more indirect means, such as the “changes on a screen” you mentioned.)
There is indeed a lot of wiggle room here; a sufficiently complicated scheme can make anything “represent” anything else, but that’s a problem beyond the scope of this comment. ;)
Note that neither an abacus nor a calculator in a vacuum satisfies that definition.
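That representation-based criterion can be made a little more concrete. In this sketch (all names hypothetical), a system counts as ‘doing arithmetic’ if, under a fixed encoding of numbers and operators into its states, its input–output behavior realizes the corresponding mathematical function:

```python
def system_behavior(state: str) -> str:
    """Stands in for a physical device's input->output dynamics.

    States are strings like "2 + 3"; a real device would use voltages.
    """
    left, op, right = state.split()
    a, b = int(left), int(right)
    return str({"+": a + b, "*": a * b}[op])

def realizes_addition(behavior) -> bool:
    # Spot-check the behavior against the mathematical addition function
    # under the obvious decimal encoding.
    return all(behavior(f"{a} + {b}") == str(a + b)
               for a in range(10) for b in range(10))

print(system_behavior("2 + 3"))            # 5
print(realizes_addition(system_behavior))  # True
```

The “wiggle room” mentioned above lives entirely in the choice of encoding: with a contrived enough mapping from states to numbers, almost anything realizes addition, which is why the definition needs some constraint on admissible representations.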
I’ll allow voltages and mental states to serve as evidence, even if they are not possible to measure directly.
Does a calculator with no labels on the buttons do arithmetic in the same sense that a standard one does?
Does the phrase “2+3=6” do arithmetic? What about the phrase “2*3=6”?
I will accept as obvious that arithmetic occurs in the case of a person using a calculator to perform arithmetic, but not obvious during precisely what periods arithmetic is occurring and not occurring.
Anyway, if you insist on an example of something that “does arithmetic” without any human intervention whatsoever, I can point to the arithmetic logic unit inside a plugged-in arcade machine in attract mode.
… which was plugged in and switched on by, well, a human.
I think the OP is using their own idiosyncratic definition of “doing”, one that requires a conscious agent. This is most common among those confused about free will.
The only difference between “explaining the difference between conscious matter and non-conscious matter” and “explaining the difference between living and non-living matter” is that we don’t yet know how to do the former.
It’s impossible to express a sentence like this after having fully appreciated the nature of the Hard Problem. In fact, whether you’re a dualist or a physicalist, I think a good litmus test for whether you’ve grasped just how hard the Hard Problem is is whether you see how categorically different the vitalism case is from the dualism case. See: Chalmers, Consciousness and its Place in Nature.
Physicalism implies that the “hard problem of consciousness” is solvable; physicalism is true; therefore the hard problem of consciousness has a solution.
Physicalism, plus the unsolvability of the Hard Problem (i.e., the impossibility of successful Type-C Materialism), implies that either Type-B Materialism (‘mysterianism’) or Type-A Materialism (‘eliminativism’) is correct. Type-B Materialism despairs of a solution while for some reason keeping the physicalist faith; Type-A Materialism dissolves the problem rather than solving it on its own terms.
Basically, I think that the evidence in favor of physicalism is a lot stronger than the evidence that the hard problem of consciousness isn’t solvable
The probability of physicalism would need to approach 1 in order for that to be the case.
It’s impossible to express a sentence like this after having fully appreciated the nature of the Hard Problem. In fact, whether you’re a dualist or a physicalist, I think a good litmus test for whether you’ve grasped just how hard the Hard Problem is is whether you see how categorically different the vitalism case is from the dualism case. See: Chalmers, Consciousness and its Place in Nature.
::follows link::
Call me the Type-C Materialist subspecies of eliminativist, then. I think that a sufficient understanding of the brain will make the solution obvious; the reason we don’t have a “functional” explanation of subjective experience is not because the solution doesn’t exist, but that we don’t know how to do it.
Van Gulick (1993) suggests that conceivability arguments are question-begging, since once we have a good explanation of consciousness, zombies and the like will no longer be conceivable.
A list of all the objective, third-person, physical facts about the world does not miss any facts about the world.
What’s your reason for believing this? The standard empiricist argument against zombies is that they don’t constrain anticipated experience.
One problem with this line of thought is that we’ve just thrown out the very concept of “experience” which is the basis of empiricism. The other problem is that the statement is false: the question of whether I will become a zombie tomorrow does constrain my anticipated experiences; specifically, it tells me whether I should anticipate having any.
I’m not a positivist, and I don’t argue like one. I think nearly all the arguments against the possibility of zombies are very silly, and I agree there’s good prima facie evidence for dualism (though I think that in the final analysis the weight of evidence still favors physicalism). Indeed, it’s a good thing I don’t think zombies are impossible, since I think that we are zombies.
What’s your reason for believing this?
My reason is twofold: Copernican, and Occamite.
Copernican reasoning: Most of the universe does not consist of humans, or anything human-like; so it would be very surprising to learn that the most fundamental metaphysical distinction between facts (‘subjective’ v. ‘objective,’ or ‘mental’ v. ‘physical,’ or ‘point-of-view-bearing’ v. ‘point-of-view-lacking,’ or what-have-you) happens to coincide with the parts of the universe that bear human-like things, and the parts that lack human-like things. Are we really that special? Is it really more likely that we would happen to gain perfect, sparkling insight into a secret Hidden Side to reality, than that our brains would misrepresent their own ways of representing themselves to themselves?
Occamite reasoning: One can do away with the Copernican thought by endorsing panpsychism; but this worsens the bite from the principle of parsimony. A universe with two kinds of fundamental fact is less likely, relative to the space of all the models, than one with one kind (or with many, many more than two kinds). It is a striking empirical fact that, consciousness aside, we seem to be able to understand the whole rest of reality with a single grammatical kind of description—the impersonal, ‘objective’ kind, which states a fact without specifying for whom the fact is. The world didn’t need to turn out to be that way, just as it didn’t need to look causally structured. This should give us reason to think that there may not be distinctions between fundamental kinds of facts, rather than that we happen to have lucked out and ended up in one of the universes with very few distinctions of this sort.
Neither of these considerations, of course, is conclusive. But they give us some reason to at least take seriously physicalist hypotheses, and to weigh their theoretical costs and benefits against the dualists’.
One problem with this line of thought is that we’ve just thrown out the very concept of “experience” which is the basis of empiricism.
We’ve thrown out the idea of subjective experience, of pure, ineffable ‘feels,’ of qualia. But we retain any functionally specifiable analog of such experience. In place of qualitative red, we get zombie-red, i.e., causal/functional-red. In place of qualitative knowledge, we get zombie-knowledge.
And since most dualists already accepted the causal/functional/physical process in question (they couldn’t even motivate the zombie argument if they didn’t consider the physical causally adequate), there can be no parsimony argument against the physicalists’ posits; the only argument will have to be a defense of the claim that there is some sort of basic, epistemically infallible acquaintance relation between the contents of experience and (themselves? a Self??...). But making such an argument, without begging the question against eliminativism, is actually quite difficult.
In place of qualitative red, we get zombie-red, i.e., causal/functional-red. In place of qualitative knowledge, we get zombie-knowledge.
At this point, you’re just using the language wrong. “knowledge” refers to what you’re calling “zombie-knowledge”—whenever we point to an instance of knowledge, we mean whatever it is humans are doing. So “humans are zombies” doesn’t work, unless you can point to some sort of non-human non-zombies that somehow gave us zombies the words and concepts of non-zombies.
At this point, you’re just using the language wrong.
That assumes a determinate answer to the question ‘what’s the right way to use language?’ in this case. But the facts on the ground may underdetermine whether it’s ‘right’ to treat definitions more ostensively (i.e., if Berkeley turns out to be right, then when I say ‘tree’ I’m picking out an image in my mind, not a non-existent material plant Out There), or ‘right’ to treat definitions as embedded in a theory, an interpretation of the data (i.e., Berkeley doesn’t really believe in trees as we do, he just believes in ‘tree-images’ and misleadingly calls those ‘trees’). Either of these can be a legitimate way that linguistic communities change over time; sometimes we keep a term’s sense fixed and abandon it if the facts aren’t as we thought, whereas sometimes we’re more intensionally wishy-washy and allow terms to get pragmatically redefined to fit snugly into the shiny new model. Often it depends on how quickly, and how radically, our view of the world changes.
(Though actually, qualia may raise a serious problem for ostension-focused reference-fixing: It’s not clear what we’re actually ostending, if we think we’re picking out phenomenal properties but those properties are not only misconstrued, but strictly non-existent. At least verbal definitions have the advantage that we can relatively straightforwardly translate the terms involved into our new theory.)
Moreover, this assumes that you know how I’m using the language. I haven’t said whether I think ‘knowledge’ in contemporary English denotes q-knowledge (i.e., knowledge including qualia) or z-knowledge (i.e., causal/functional/behavioral knowledge, without any appeal to qualia). I think it’s perfectly plausible that it refers to q-knowledge, hence I hedge my bets when I need to speak more precisely and start introducing ‘zombified’ terms lest semantic disputes interfere in the discussion of substance. But I’m neutral both on the descriptive question of what we mean by mental terms (how ‘theory-neutral’ they really are), and on the normative question of what we ought to mean by mental terms (how ‘theory-neutral’ they should be). I’m an eliminativist on the substantive questions; on the non-substantive question of whether we should be revisionist or traditionalist in our choice of faux-mental terminology, I’m largely indifferent, as long as we’re clear and honest in whatever semantic convention we adopt.
Copernican reasoning: Most of the universe does not consist of humans, or anything human-like; so it would be very surprising to learn that the most fundamental metaphysical distinction between facts (‘subjective’ v. ‘objective,’ or ‘mental’ v. ‘physical,’ or ‘point-of-view-bearing’ v. ‘point-of-view-lacking,’ or what-have-you) happens to coincide with the parts of the universe that bear human-like things, and the parts that lack human-like things. Are we really that special? Is it really more likely that we would happen to gain perfect, sparkling insight into a secret Hidden Side to reality, than that our brains would misrepresent their own ways of representing themselves to themselves?
It’s not surprising that a system should have special insight into itself. If a type of system had special insight into some other, unrelated, type of system, that would be peculiar. If every system had insights (panpsychism), that would also be peculiar. But a system capable of having insights having special insight into itself is not unexpected.
Occamite reasoning: One can do away with the Copernican thought by endorsing panpsychism; but this worsens the bite from the principle of parsimony. A universe with two kinds of fundamental fact is less likely, relative to the space of all the models, than one with one kind (or with many, many more than two kinds).
That is not obvious. If the two kinds of stuff (or rather property) are fine-grainedly picked from some space of stuffs (or rather properties), then that would be more unlikely than just one being picked. OTOH, if you have just one coarse-grained kind of stuff, and there is just one other coarse-grained kind of stuff, such that the two together cover the space of stuffs, then it is a mystery why you do not have both, i.e., every possible kind of stuff. A concrete example is the predominance of matter over antimatter in cosmology, which is widely interpreted as needing an explanation.
(It’s all about information and probability. Adding one fine-grained kind of stuff to another means that two low probabilities get multiplied together, leading to a very low one that needs a lot of explaining. Having every logically possible kind of stuff has a high probability, because we don’t need a lot of information to pinpoint the universe.)
So, if you think of Mind as some very specific thing, the Occamite objection goes through. However, modern dualists are happy to grant that most aspects of consciousness have physical explanations. Chalmers-style dualism is about explaining qualia, phenomenal qualities. The quantitative properties of physicalism (Chalmers calls them structural-functional) and intrinsically qualitative properties form a dyad that covers property-space in the same way that the matter-antimatter dyad covers stuff-space. In this way, modern dualism can avoid the Copernican objection.
It is a striking empirical fact that, consciousness aside, we seem to be able to understand the whole rest of reality with a single grammatical kind of description—the impersonal, ‘objective’ kind, which states a fact without specifying for whom the fact is.
(Here comes the shift from properties to aspects).
Although it does specify that the fact is outside me. If physical and mental properties are both intrinsic to the world, then the physical properties seem to be doing most of the work, and the mental ones seem redundant. However, if objectivity is seen as a perspective, i.e., an external perspective, it is no longer an empirical fact. It is then a tautology that the external world will seem, from the outside, to be objective, because objectivity just is the view from outside. And subjectivity, likewise, is the view from inside: not any extra stuff, just another way of looking at the same stuff. There are, in any case, a set of relations between a thing and itself, and another set between a thing and other things. Nothing novel is being introduced by noting the existence of inner and outer aspects. The novel content of the Dual Aspect solution lies in identifying the Objective Perspective with quantities (broadly including structures and functions) and the Subjective Perspective with qualities, so that Subjective Qualities, qualia, are just how neuronal processing seems from the inside. This point needs justification, which I believe I have, but will not mention here.
As far as physicalism is concerned: physicalism has many meanings. Dual aspect theory is incompatible with the idea that the world is intrinsically objective and physical, since these are not intrinsic characteristics, according to DAT. DAT is often and rightly associated with neutral monism, the idea that the world is in itself neither mental nor physical, neither objective nor subjective. However, this in fact changes little for most physicalists: it does not suggest that there are any ghostly substances or undetectable properties. Nothing changes methodologically; naturalism, interpreted as the investigation of the world from the objective perspective, can continue. The Strong Physicalist claim that a complete physical description of the world is a complete description tout court becomes problematic. Although such a description is a description of everything, it nonetheless leaves out the subjective perspectives embedded in it, which cannot be recovered, just as Mary the superscientist cannot recover the subjective sensation of Red from the information she has. I believe that a correct understanding of the nature of information shows that “complete information” is a logically incoherent notion in any case, so that DAT does not entail the loss of anything that was ever available in that respect. Furthermore, the absence of complete information has little practical upshot, because of the infeasibility of constructing such a complete description in the first place. All in all, DAT means physicalism is technically false in a way that changes little in practice. The flipside of DAT is Neutral Monism. NM is an inherently attractive metaphysics, because it means that the universe has no overall characteristic left dangling in need of an explanation—no “why physical, rather than mental?”.
As far as causality is concerned, the fact that a system’s physical or objective aspects are enough to predict its behaviour does not mean that its subjective aspects are an unnecessary multiplication of entities, since they are only a different perspective on the same reality. Causal powers are vested in the neutral reality of which the subjective and the objective are just aspects. The mental is neither causal in itself nor causally idle in itself; it is rather a perspective on what is causally empowered. There are no grounds for saying that either set of aspects is exclusively responsible for the causal behaviour of the system, since each is only a perspective on the system.
I have avoided the Copernican problem (special pleading for human consciousness) by pinning mentality, and particularly subjectivity, to a system’s internal and self-reflexive relations. The counterpart to excessive anthropocentrism is insufficient anthropocentrism, i.e., free-wheeling panpsychism, or the Thinking Rock problem. I believe I have a way of showing that it is logically inevitable that simple entities cannot have subjective states that are significantly different from their objective descriptions.
Nothing novel is being introduced by noting the existence of inner and outer aspects.
I’m not sure I understand what an ‘aspect’ is, in your model. I can understand a single thing having two ‘aspects’ in the sense of having two different sets of properties accessible in different viewing conditions; but you seem to object to the idea of construing mentality and physicality as distinct property classes.
I could also understand a single property or property-class having two ‘aspects’ if the property/class itself were being associated with two distinct sets of second-order properties. Perhaps “being the color of chlorophyll” and “being the color of emeralds” are two different aspects of the single property green. Similarly, then, perhaps phenomenal properties and physical properties are just two different second-order construals of the same ultimately physical, or ultimately ideal, or perhaps ultimately neutral (i.e., neither-phenomenal-nor-physical), properties.
I call the option I present in my first paragraph Property Dualism, and the option I present in my second paragraph Multi-Label Monism. (Note that these may be very different from what you mean by ‘property dualism’ and ‘neutral monism;’ some people who call themselves ‘neutral monists’ sound more to me like ‘neutral trialists,’ in that they allow mental and physical properties into their ontology in addition to some neutral substrate. True monism, whether neutral or idealistic or physicalistic, should be eliminative or reductive, not ampliative.) Is Dual Aspect Theory an intelligible third option, distinct from Property Dualism and Multi-Label Monism as I’ve distinguished them? And if so, how can I make sense of it? Can you coax me out of my parochial object/property-centric view, without just confusing me?
I’m also not sure I understand how reflexive epistemic relations work. Epistemic relations are ordinarily causal. How does reflexive causality work? And how do these ‘intrinsic’ properties causally interact with the extrinsic ones? How, for instance, does positing that Mary’s brain has an intrinsic ‘inner dimension’ of phenomenal redness Behind The Scenes somewhere help us deterministically explain why Mary’s extrinsic brain evolves into a functional state of surprise when she sees a red rose for the first time? What would the dynamics of a particle or node with interactively evolving intrinsic and extrinsic properties look like?
A third problem: You distinguish ‘aspects’ by saying that the ‘subjective perspective’ differs from the ‘objective perspective.’ But this also doesn’t help, because it sounds anthropocentric. Worse, it sounds mentalistic; I understand the mental-physical distinction precisely inasmuch as I understand the mental as perspectival, and the physical as nonperspectival. If the physical is itself ‘just a matter of perspective,’ then do we end up with a dualistic or monistic theory, or do we instead end up with a Berkeleian idealism? I assume not, and that you were speaking loosely when you mentioned ‘perspectives;’ but this is important, because what individuates ‘perspectives’ is precisely what lends content to this ‘Dual-Aspect’ view.
All in all, DAT means physicalism is technically false in a way that changes little in practice.
Yes, I didn’t consider the ‘it’s not physicalism!!’ objection very powerful to begin with. Parsimony is important, but ‘physicalism’ is not a core methodological principle, and it’s not even altogether clear what constraints physicalism entails.
It’s not surprising that a system should have special insight into itself.
It’s not surprising that an information-processing system able to create representations of its own states would be able to represent a lot of useful facts about its internal states. It is surprising if such a system is able to infallibly represent its own states to itself; and it is astounding if such a system is able to self-represent states that a third-person observer, dissecting the objective physical dynamics of the system, could never in principle fully discover from an independent vantage point. So it’s really a question of how ‘special’ we’re talking.
If a type of system had special insight into some other, unrelated, type of system, then that would be peculiar.
I’m not clear on what you mean. ‘Insight’ is, presumably, a causal relation between some representational state and the thing represented. I think I can more easily understand a system’s having ‘insight’ into something else, since it’s easier for me to model veridical other-representation than veridical self-representation. (The former, for instance, leads to no immediate problems with recursion.) But perhaps you mean something special by ‘insight.’ Perhaps by your lights, I’m just talking about outsight?
If every system had insights (panpsychism), that would also be peculiar.
If some systems have an automatic ability to non-causally ‘self-grasp’ themselves, by what physical mechanism would only some systems have this capacity, and not all?
if you have just one coarse-grained kind of stuff, and there is just one other coarse-grained kind of stuff, such that the two together cover the space of stuffs, then it is a mystery why you do not have both, i.e., every possible kind of stuff. A concrete example is the predominance of matter over antimatter in cosmology, which is widely interpreted as needing an explanation.
If you could define a thingspace that meaningfully distinguishes between and admits of both ‘subjective’ and ‘objective’ facts (or properties, or events, or states, or thingies...), and that non-question-beggingly establishes the impossibility or incoherence of any other fact-classifications of any analogous sorts, then that would be very interesting. But I think most people would resist the claim that this is the one unique parameter of this kind (whatever kind that is, exactly...) that one could imagine varying over models; and if this parameter is set to value ‘2,’ then it remains an open question why the many other strangely metaphysical or strangely anthropocentric parameters seem set to ‘1’ (or to ‘0,’ as the case may be).
But this is all very abstract. It strains comprehension just to entertain a subjective/objective distinction. To try to rigorously prove that we can open the door to this variable without allowing any other Aberrant Fundamental Categorical Variables into the clubhouse seems a little quixotic to me. But I’d be interested to see an attempt at this.
A concrete example is the predominance of matter over antimatter in cosmology, which is widely interpreted as needing an explanation.
Sure, though there’s a very important disparity between observed asymmetries between actual categories of things, and imagined asymmetries between an actual category and a purely hypothetical one (or, in this case, a category with a disputed existence). In principle the reasoning should work the same, but in practice our confidence in reasoning coherently (much less accurately!) about highly abstract and possibly-not-instantiated concepts should be extremely low, given our track record.
The quantitative properties (Chalmers calls them structural-functional) of physicalism and intrinsically qualitative properties form a dyad that covers property-space
How do we know that? If we were zombies, prima facie it seems as though we’d have no way of knowing about, or even positing in a coherent formal framework, phenomenal properties. But in that case, any analogous possible-but-not-instantiated property-kinds that would expand the dyad into a polyad would plausibly be unknowable to us. (We’re assuming for the moment that we do have epistemic access to phenomenal and physical properties.) Perhaps all carbon atoms, for instance, have unobservable ‘carbonomenal properties’ (Cs), which are related to phenomenal and physical properties (P1s and P2s) in the same basic way that P1s are related to P2s and Cs, and that P2s are related to P1s and Cs. Does this make sense? Does it make sense to deny this possibility (which requires both that it be intelligible and that we be able to evaluate its probability with any confidence), and thereby preserve the dyad? I am bemused.
1) If you embrace SSA, then you being you should be more likely on humans being important than on panpsychism, yes? (You may of course have good reasons for preferring SIA.)
2) Suppose again redundantly dual panpsychism. Is there any a priori reason (at this level of metaphysical fancy) to rule out that experiences could causally interact with one another in a way that is isomorphic to mechanical interactions? Then we have a sort of idealist field describable by physics, perfectly monist. Or is this an illegitimate trick?
(Full disclosure: I’d consider myself a cautious physicalist as well, although I’d say psi research constitutes a bigger portion of my doubt than the hard problem.)
The theory you propose in (2) seems close to Neutral Monism. It has fallen into disrepute (and near oblivion), but was the preferred solution to the mind-body problem of many significant philosophers of the late 19th and early 20th centuries, in particular Bertrand Russell (for a long period). A quote from Russell:
We shall seek to construct a metaphysics of matter which shall make the gulf between physics and perception as small, and the inferences involved in the causal theory of perception as little dubious, as possible. We do not want the percept to appear mysteriously at the end of a causal chain composed of events of a totally different nature; if we can construct a theory of the physical world which makes its events continuous with perception, we have improved the metaphysical status of physics, even if we cannot prove more than that our theory is possible.
Ooo! Seldom do I get to hear someone else voice my version of idealism. I still have a lot of thinking to do on this, but so far it seems to me perfectly legitimate. An idealism isomorphic to mechanical interactions dissolves the Hard Problem of consciousness by denying a premise. It also does so with more elegance than reductionism since it doesn’t force us through that series of flaming hoops that orbits and (maybe) eventually collapses into dualism.
This seems more likely to me so far than all the alternatives, so I guess that means I believe it, but not with a great deal of certainty. So far every objection I’ve heard or been able to imagine has amounted to something like, “But but but the world’s just got to be made out of STUFF!!!” But I’m certainly not operating under the assumption that these are the best possible objections. I’d love to see what happens with whatever you’ve got to throw at my position.
Occamite reasoning: One can do away with the Copernican thought by endorsing panpsychism; but this worsens the bite from the principle of parsimony. A universe with two kinds of fundamental fact is less likely, relative to the space of all the models, than one with one kind (or with many, many more than two kinds). It is a striking empirical fact that, consciousness aside, we seem to be able to understand the whole rest of reality with a single grammatical kind of description—the impersonal, ‘objective’ kind, which states a fact without specifying for whom the fact is. The world didn’t need to turn out to be that way, just as it didn’t need to look causally structured. This should give us reason to think that there may not be distinctions between fundamental kinds of facts, rather than that we happen to have lucked out and ended up in one of the universes with very few distinctions of this sort.
The problem is that we already have two kinds of fundamental facts (and I would argue we need more). Consider Eliezer’s use of “magical reality fluid” in this post. If you look at the context, it’s clear that he’s trying to ask whether the inhabitants of the non-causally simulated universes possess qualia without having to admit he cares about qualia.
Eliezer thinks we’ll someday be able to reduce or eliminate Magical Reality Fluid from our model, and I know of no argument (analogous to the Hard Problem for phenomenal properties) that would preclude this possibility without invoking qualia themselves. Personally, I’m an agnostic about Many Worlds, so I’m even less inclined than EY to think that we need Magical Reality Fluid to recover the Born probabilities.
I also don’t reify logical constructs, so I don’t believe in a bonus category of Abstract Thingies. I’m about as monistic as physicalists come. Mathematical platonists and otherwise non-monistic Serious Scientifically Minded People, I think, do have much better reason to adopt dualism than I do, since the inductive argument against Bonus Fundamental Categories is weak for them.
Eliezer thinks we’ll someday be able to reduce or eliminate Magical Reality Fluid from our model, and I know of no argument (analogous to the Hard Problem for phenomenal properties) that would preclude this possibility without invoking qualia themselves.
I could define the Hard Problem of Reality, which really is just an indirect way of talking about the Hard Problem of Consciousness.
Personally, I’m an agnostic about Many Worlds, so I’m even less inclined than EY to think that we need Magical Reality Fluid to recover the Born probabilities.
As Eliezer discusses in the post, Reality Fluid isn’t just for Many Worlds; it also relates to questions about simulation.
As Eliezer discusses in the post, Reality Fluid isn’t just for Many Worlds, it also relates to questions about [simulation].
Only as a side-effect. In all cases, I suspect it’s an idle distraction; simulation, qualia, and Born-probability models do have implications for each other, but it’s unlikely that combining three tough problems into a single complicated-and-tough problem will help gin up any solutions here.
Here’s my argument for why you should.
Give me an example of some logical constructs you think I should believe in. Understand that by ‘logical construct’ I mean ‘causally inert, nonspatiotemporal object.’ I’m happy to sort-of-reify spatiotemporally instantiated properties, including relational properties. For instance, a simple reason why I consistently infer that 2 + 2 = 4 is that I live in a universe with multiple contiguous spacetime regions; spacetime regions are similar to each other, hence they instantiate the same relational properties, and this makes it possible to juxtapose objects and reason with these recurrent relations (like ‘being two arbitrary temporal intervals before’ or ‘being two arbitrary spatial intervals to the left of’).
Daniel Dennett’s ‘Quining Qualia’ (http://ase.tufts.edu/cogstud/papers/quinqual.htm) is taken (’round these parts) to have laid the theory of qualia to rest. Among philosophers, the theory of qualia and the classical empiricism founded on it are also considered to be dead theories, though it’s Sellars’ “Empiricism and the Philosophy of Mind” (http://www.ditext.com/sellars/epm.html) that is seen to have done the killing.
I’ve not actually read this essay (will do so later today), but I disagree that most people here consider the issue of qualia and the “hard problem of consciousness” to be a solved one.
I just read ‘Quining Qualia’. I do not see it as a solution to the hard problem of consciousness, at all. However, I did find it brilliant—it shifted my intuition from thinking that conscious experience is somehow magical and inexplicable to thinking that it is plausible that conscious experience could, one day, be explained physically. But to stop here would be to give a fake explanation...the problem has not yet been solved.
A triumphant thundering refutation of [qualia], an absolutely unarguable proof that [qualia] cannot exist, feels very satisfying—a grand cheer for the home team. And so you may not notice that—as a point of cognitive science—you do not have a full and satisfactory descriptive explanation of how each intuitive sensation arises, point by point.
I think I have qualia. I probably don’t have qualia as defined by Dennett, as simultaneously ineffable, intrinsic, etc, but there are nonetheless ways things seem to me.
It may be just my opinion, but please don’t quote people and then insert edits into the quotation. Although at least you did mark those with brackets.
By doing so you seem to say that free will and qualia are the same or interchangeable topics that share arguments for and against. But that is not the case. The question of free will is often misunderstood and is much easier to handle.
Qualia are, in my opinion, the abstract structure of consciousness. So on the underlying basic level you have physics and purely physical things, and on the more abstract level you have structure that is transitive with the basic level.
To illustrate what this means, I think Eliezer had an excellent example (though I’m not sure if his intention was similar): the spiking pattern of blue versus actually seeing blue. Even the spiking pattern is far from completely reduced, but the idea is the same. On the level of consciousness you have experience which corresponds to a basic-level thing, very similar to the map and territory analogy. Color vision is hard to approach, though, so it might be easier to start off with binary vision of one pixel: it’s either 1 or 0. Imagine replacing your entire visual cortex with something that only outputs 1 or 0 (though the brain is not binary), so that your entire field of vision has only two distinct experienced states. Doing this will certainly invite the mind-projection fallacy, since you can’t actually change your visual cortex to output only 1 or 0. Still, the rest of your consciousness has access to that information, and it’s very much easier to see how this binary state affects the decisions you make. It’s also much easier to make the transition from experience to physics and logic. You can then work your way back up to normal vision: several pixels that are each 1 or 0, then grayscale vision, though colors make it much harder. But this doesn’t resolve the qualia issue: how would it feel to have 1-bit vision? How do you produce a set of rules that is transitive with the experience of vision?
Even if you grind everything down to the finest powder, it will still be hard to see where this qualia business comes from, because you exist between the lines.
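The functional half of the 1-bit vision thought experiment above can be sketched in a few lines of code. This is a purely illustrative toy (the class and method names are mine, not anything from the thread): it shows how easy it is to trace the causal role of a single binary visual state, which is exactly the part that says nothing about what having that state would feel like.

```python
# Toy model of the one-pixel, 1-bit vision thought experiment.
# This captures only the functional ("easy problem") side: a binary
# visual state that the rest of the system can access and act on.

class BinaryVisionAgent:
    def __init__(self):
        self.visual_state = 0  # the entire visual field: 1 or 0

    def perceive(self, light_present: bool) -> None:
        # The "visual cortex" reduced to a single bit of output.
        self.visual_state = 1 if light_present else 0

    def decide(self) -> str:
        # The rest of the system reads that bit and acts on it; the
        # causal chain is fully transparent, yet nothing here addresses
        # what (if anything) having visual_state == 1 would feel like.
        return "move toward light" if self.visual_state else "stay put"

agent = BinaryVisionAgent()
agent.perceive(light_present=True)
print(agent.decide())  # -> "move toward light"
```

The ease of writing this is the point: the functional story is trivial to specify, and the qualia question is untouched by it.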
But this doesn’t resolve the qualia issue: how would it feel to have 1-bit vision? How do you produce a set of rules that is transitive with the experience of vision?
I agree that that doesn’t resolve the qualia issue. To begin with, we’d need to write a SeeRed() function, that will write philosophy papers about the redness it perceives, and wonder whence it came, unless it has access to its own source code and can see inside the black box of the SeeRed() function. Even epiphenomenalists agree that this can be done, since they say consciousness has no physical effect on behavior. But here is my intuition (and pretty much every other reductionist’s, I reckon) that leads me to reject epiphenomenalism: When I say, out loud (so there is a physical effect) “Wow, this flower I am holding is beautiful!”, I am saying it because it actually looks beautiful to me! So I believe that, somehow, the perception is explainable, physically. And, at least for me, that intuition is much stronger than the intuition that conscious perception and computation are in separate magisteria.
We’ll be able to get a lot further in this discussion once someone actually writes a SeeRed() function, which both epiphenomenalists and reductionists agree can be done.
Meanwhile, dualists think writing such a SeeRed() function is impossible. Time will tell.
So I believe that, somehow, the perception is explainable, physically. And, at least for me, that intuition is much stronger than the intuition that conscious perception and computation are in separate magisteria.
It’s possible for physicalism to be true, and computationalism false.
We’ll be able to get a lot further in this discussion once someone actually writes a SeeRed() function, which both epiphenomenalists and reductionists agree can be done.
I’ll say. Solving the problem does tend to solve the problem.
I haven’t read either of those but will read them. Also I totally think there was a respectable hard problem and can only stare somewhat confused at people who don’t realize what the fuss was about. I don’t agree with what Chalmers tries to answer to his problem, but his attempt to pinpoint exactly what seems so confusing seems very spot-on. I haven’t read anything very impressive yet from Dennett on the subject; could be that I’m reading the wrong things. Gary Drescher on the other hand is excellent.
It could be that I’m atypical for LW.
EDIT: Skimmed the Dennett one, didn’t see much of anything relatively new there; the Sellars link fails.
Sellars is important to contemporary philosophy, to the extent that a standard course in epistemology will often end with EPM. I’m not sure it’s entirely worth your time, though, because it is an argument against classical (not Bayesian) empiricism.
The basic question is over whether our beliefs are purely justified by other beliefs, or whether our (visual, auditory, etc.) perceptions themselves ‘represent the world as being a certain way’ (i.e., have ‘propositional content’) and, without being beliefs themselves, can lend some measure of support to our beliefs. Note that this is a question about representational content (intentionality) and epistemic justification, not about phenomenal content (qualia) and physicalism.
Right—to hammer on the point, the common-ish (EDIT: Looks like I was hastily generalizing) LW opinion is that there never was any “hard problem of consciousness” (EDIT: meaning one that is distinct from “easy” problems of consciousness, that is, the ones we know roughly how to go about solving). It’s just that when we meet a problem that we’re very ignorant about, a lot of people won’t go “I’m very ignorant about this,” they’ll go “This has a mysterious substance, and so why would learning more change that inherent property?”
It should be remembered though that the guy who’s famous for formulating the hard problem of consciousness is:
1) A fan of EY’s TDT, who’s made significant efforts to get the theory some academic attention.
2) A believer in the singularity, and its accompanying problems.
3) A student of Douglas Hofstadter.
4) Someone very interested in AI.
5) Someone very well versed and interested in physics and psychology.
6) A rare but occasional poster on LW.
7) Very likely one of the smartest people alive.
etc. etc.
I think consciousness is reducible too, but David Chalmers is a serious dude, and the ‘hard problem’ is to be taken very, very seriously. It’s very easy to not see a philosophical problem, and very easy to think that the problem must be solved by psychology somewhere, much harder to actually explain a solution/dissolution.
I agree with you about how smart Chalmers is and that he does very good philosophical work. But I think you have a mistake in terminology when you say
I think consciousness is reducible too, but David Chalmers is a serious dude, and the ‘hard problem’ is to be taken very, very seriously.
It is an understandable mistake, because it is natural to take “the hard problem” as meaning just “understanding consciousness”, and I agree that this is a hard problem in ordinary terms and that saying “there is a reduction/dissolution” is not enough. But Chalmers introduced the distinction between the “hard problem” and the “easy problems” by saying that understanding the functional aspects of the mind, the information processing, etc., are all “easy problems”. So a functionalist/computationalist materialist, like most people on this site, cannot buy into the notion that there is a serious “hard problem” in Chalmers’ sense. This notion is defined in a way that begs the question by assuming that qualia are irreducible. We should say instead that solving the “easy problems” is at the same time much less trivial than Chalmers makes it seem, and enough to fully account for consciousness.
cannot buy into the notion that there is a serious “hard problem” in Chalmers’ sense. This notion is defined in a way that begs the question by assuming that qualia are irreducible.
No it isn’t. Here is what Chalmers says:
“It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.”
There is no statement of irreducibility there. There is a statement that we have “no good explanation”, and we don’t.
What makes the easy problems easy? For these problems, the task is to explain certain behavioral or cognitive functions: that is, to explain how some causal role is played in the cognitive system, ultimately in the production of behavior. To explain the performance of such a function, one need only specify a mechanism that plays the relevant role. And there is good reason to believe that neural or computational mechanisms can play those roles.
What makes the hard problem hard? Here, the task is not to explain behavioral and cognitive functions: even once one has an explanation of all the relevant functions in the vicinity of consciousness—discrimination, integration, access, report, control—there may still remain a further question: why is the performance of these functions accompanied by experience?
It seems clear that for Chalmers any description in terms of behavior and cognitive function is by definition not addressing the hard problem.
Why should physical processing give rise to a rich inner life at all?
What does this mean by “why”? What evolutionary advantage is there? Well, it enables imagination, which lets us survive a wider variety of dangers. What physical mechanism is there? That’s an open problem in neurology, but they’re making progress.
I’ve read this several times, and I don’t see a hard philosophical problem.
It’s definitely a how-it-happens “why” and not how-did-it-evolve “why”
Well, it enables imagination,
There’s more to qualia than free-floating representations. There is no reason to suppose an AI’s internal
maps have phenomenal feels, no way of testing that they do, and no way of engineering them in.
I’ve read this several times, and I don’t see a hard philosophical problem.
It’s a hard scientific problem. How could you have a theory that tells you how the world seems to a bat on LSD? How can you write a SeeRed() function?
Presumably, the exact same way you’d write any other function.
In this case, all that matters is that instances of seeing red things correctly map to outputs expected when one sees red things as opposed to not seeing red things.
If the correct behavior is fully and coherently maintained / programmed, then you have no means of telling it apart from a human’s “redness qualia”. If prompted and sufficiently intelligent, this program will write philosophy papers about the redness it perceives, and wonder whence it came, unless it has access to its own source code and can see inside the black box of the SeeRed() function.
Of course, I’m begging the question a bit here with “correct behavior” being “fully and coherently maintained”. The space of inputs and outputs to take into account in order to make a program that would convince you of its possession of the redness qualia is too vast for us at the moment.
TL;DR: It all depends on what the SeeRed() function will be used for / how we want it to behave.
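For what it’s worth, here is what the most naive functionalist reading of that claim looks like in code—a toy sketch, not a solution to anything. The function name and the RGB-threshold rule are purely illustrative assumptions; the only point is that the function captures an input-to-output mapping and says nothing at all about any experience occurring in between:

```python
# Toy functionalist sketch of a hypothetical SeeRed() function.
# All names and thresholds here are illustrative assumptions.

def see_red(pixel):
    """Return the behavioral report expected when shown a stimulus.

    `pixel` is an (r, g, b) tuple with components in 0-255.
    """
    r, g, b = pixel
    # Classify as "red" when the red channel clearly dominates.
    if r > 128 and r > 2 * max(g, b):
        return "I see red"
    return "I do not see red"

print(see_red((220, 30, 30)))  # a clearly red stimulus
print(see_red((30, 30, 220)))  # a clearly blue stimulus
```

The objection that follows in this thread is precisely that satisfying this mapping is compatible with nothing red ever being *perceived* between input and output.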
In this case, all that matters is that instances of seeing red things correctly map to outputs expected when one sees red things as opposed to not seeing red things.
False. In this case what matters is the perception of a red colour that occurs between input and output. That is what the Hard Problem, the problem of qualia, is about.
If the correct behavior is fully and coherently maintained / programmed, then you have no means of telling it apart from a human’s “redness qualia”
That doesn’t mean there are no qualia (I have them, so I know there are). That also doesn’t mean qualia just serendipitously arrive whenever the correct mapping from inputs to outputs is in place. You have not written a SeeRed() or solved the HP. You have just assumed that what is very possibly a zombie is good enough.
That doesn’t mean there are no qualia (I have them, so I know there are). That also doesn’t mean qualia just serendipitously arrive whenever the correct inputs and outputs are in place. You have not written a SeeRed() or solved the HP. You have just assumed that what is very possibly a zombie is good enough.
None of these were among my claims. For a program to reliably pass Turing-like tests for seeing redness, a GLUT or zombielike would not cut it; you’d need some sort of internal system that generates certain inner properties and behaviors, one that would be effectively indistinguishable from qualia (this is my claim), and may very well be qualia (this is not my core claim, but it is something I find plausible).
Obviously I haven’t solved the Hard Problem just by saying this. However, I do greatly dislike your apparent premise* that qualia can never be dissolved to patterns and physics and logic.
* If this isn’t among your premises or claims, then it still does appear that way, but apologies in advance for the strawmanning.
None of these were among my claims. For a program to reliably pass Turing-like tests for seeing redness, a GLUT or zombielike would not cut it; you’d need some sort of internal system that generates certain inner properties and behaviors, one that would be effectively indistinguishable from qualia (this is my claim), and may very well be qualia (this is not my core claim, but it is something I find plausible).
Sorry, that is most definitely “serendipitously arrive”. You don’t know how to engineer the redness in explicitly; you are just assuming it must be there if everything else is in place.
However, I do greatly dislike your apparent premise* that qualia can never be dissolved to patterns and physics and logic.
The claim is more like “hasn’t been”, and you haven’t shown me a SeeRed().
Is there a reason to suppose that anybody else’s maps have phenomenal feels, a way of testing that they do, or a way of telling the difference? Why can’t those ways be generalized to intelligent entities in general?
I’m also saying that it doesn’t matter. The p-zombies are still conscious. They just don’t have any added “conscious” XML tags as per some imaginary, crazy-assed unnecessary definition of “consciousness”.
Tangential to that point: I think any morality system which relies on an external supernatural thingy in order to make moral judgments or to assign any terminal value to something is broken and not worth considering.
You appear to be making an unfortunate assumption that what Chalmers and Peterdjones are talking about is crazy-assed unnecessary XML tags, as opposed to, y’know, regular old consciousness.
I’m not sure where my conception of p-zombies went wrong, then. P-zombies are assumed by the premise, if my understanding is correct, to behave physically exactly the same, down to the quantum level (and beyond if any exists), but to simply not have something being referred to as “qualia”. This seems to directly imply that the “qualia” is generated neither by the physical matter, nor by the manner in which it interacts.
Like Eliezer, I believe physics and logic are sufficient to describe eventually everything, and so qualia and consciousness must be made of this physical matter and the way it interacts. Therefore, since the p-zombies have the same matter and the same interactions, they have qualia and consciousness.
What, then, is a non-p-zombie? Well, something that has “something more” (implied: than physics or logic) added into it. Since it’s something exceptional that isn’t part of anything else so far in the universe to my knowledge, calling it a “crazy-ass unnecessary XML tag” feels like a fair match for its plausibility and comparative algorithmic complexity.
The point being that, under this conception of p-zombies and with my current (very strong) priors on the universe, non-p-zombies are either a silly mysterious question with no possible answer, or something supernatural on the same level of silly as atom-fiddling tiny green goblins and white-winged angels of Pure Mercy.
But anyway, EY’s zombie sequence was all about saying that if physics and math are everything, then p-zombies are a silly mysterious question. Because a p-zombie was supposed to be like a normal human down to the atomic level, but without qualia. Which is absurd if, as we expect, qualia are within physics and math. Hence there are no p-zombies.
I guess the point is that saying there are no non-p-zombies as a result of this is totally confusing, because it totally looks like saying no-one has consciousness.
(Tangentially, it probably doesn’t help that apparently half of the philosophical world use “qualia” to mean some supernatural XML tags, while the other half use the word to mean just the-way-things-feel, aka. consciousness. You seem to get a lot of arguments between those in each of those groups, with the former group arguing that qualia are nonsense, and the latter group rebutting that “obviously we have qualia, or are you all p-zombies?!” resulting in a generally unproductive debate.)
I guess the point is that saying there are no non-p-zombies as a result of this is totally confusing, because it totally looks like saying no-one has consciousness.
Hah, yes. That seems to be partly a result of my inconsistent way of handling thought experiments that are broken or dissolved in the premises, as opposed to being rejected due to a later contradiction or nonexistent solution.
I’m also saying that it doesn’t matter. The p-zombies are still conscious. They just don’t have any added “conscious” XML tags as per some imaginary, crazy-assed unnecessary definition of “consciousness”.
I have no idea what you are getting at. Please clarify.
Tangential to that point: I think any morality system which relies on an external supernatural thingy in order to make moral judgments or to assign any terminal value to something is broken and not worth considering.
That has no discernible relationship to anything I have said. Have you confused me with someone else?
I’m not sure where I implied that I’m getting at anything. We’re p-zombies, we have no additional consciousness, and it doesn’t matter because we’re still here doing things.
The tangent was just an aside remark to clarify my position, and wasn’t to target anyone.
We may already agree on the consciousness issue, I haven’t actually checked that.
In that sense, what I was getting at is that asking the question of whether we are p-zombies is redundant and irrelevant, since there’s no reason to want or believe in the existence of non-p-zombies.
The core of my claim is basically that our consciousness is the logic and physics that goes on in our brain, not something else that we cannot see or identify. I obviously don’t have conclusive proof or evidence of this, otherwise I’d be writing a paper and/or collecting my worldwide awards for it, but all (yes, all) other possibilities seem orders of magnitude less likely to me with my current priors and model of the world.
TL;DR: Consciousness isn’t made of ethereal acausal fluid nor of magic, but of real physics and how those real physics interact in a complicated way.
since there’s no reason to want or believe in the existence of non-p-zombies.
I believe in the existence of at least one non-p-zombie, because I have at least indirect evidence of one in the form of my own qualia.
The core of my claim is basically that our consciousness is the logic and physics that goes on in our brain, not something else that we cannot see or identify.
We can see and identify our consciousness from the inside. It’s self-awareness. If you try to treat consciousness from the outside, you are bound to miss 99% of the point. None of this has anything to do with what consciousness is “made of”.
I believe in the existence of at least one non-p-zombie, because I have at least indirect evidence of one in the form of my own qualia.
I have a question about qualia from your perspective. If Omega hits you with an epiphenomenal anti-qualia hammer that injures your qualia and only your qualia such that you essentially have no qualia (i.e., you are a P-zombie) for an hour until your qualia recover (when you are no longer a P-zombie), what, if anything, might that mean?
1: You’d likely notice something, because you have evidence that qualia exist. That implies you would notice if they vanished for about an hour, since you would no longer be getting that evidence for that hour
2: You’d likely not notice anything, because if you did, a P-Zombie would not be just like you.
3: Epiphenomenal anti-qualia hammers can’t exist. For instance, it might be impossible to affect your qualia and only your qualia, or perhaps it is impossible to make any reversible changes to qualia.
This might seem reasonable at first—it is a strangely appealing image—but something very odd is going on here. My experiences are switching from red to blue, but I do not notice any change. Even as we flip the switch a number of times and my qualia dance back and forth, I will simply go about my business, not noticing anything unusual.
This seems to support an answer of:
2: You’d likely not notice anything, because if you did, a P-Zombie would not be just like you.
But if that’s the case, it seems to contradict the idea of red qualia’s existence even being a useful discussion. If you don’t expect to notice when something vanishes, how do you have evidence that it exists or that it doesn’t exist?
Now, to be fair, I think you can construct something where it is meaningful to talk about something that you have no evidence of.
If an asteroid goes outside our light cone, we might say: “We have no evidence that this asteroid still exists since, to our knowledge, evidence travels at the speed of light and this is outside our light cone. However, if we can invent FTL travel and then follow its path, we would not expect it to have winked out of existence right as it crossed our light cone, based on conservation of mass/energy.”
That sounds like a comprehensible thing to say, possibly because it is talking about something’s potential existence given the development of a future test.
And it does seem like you can also do that with Religious epiphenomenon, like souls, that we can’t see right now.
“We have no evidence that our soul still exists since to our knowledge, people are perfectly intelligible without souls and we don’t notice changes in our souls. However, if in the future we can invent soul detectors, we would expect to find souls in humans, based on religious texts.”
That makes sense. It may be wrong, but if someone says that to me, My reaction would be “Yeah, that sounds plausible.”, or perhaps “But how would you invent a soul detector?” much like my reaction would be to the FTL asteroid “Yeah, that sounds plausible.”, or perhaps “But how would you invent FTL?”
I suppose, in essence, that these can be made to pay rent in anticipated experiences, but they are only under conditional circumstances, and those conditions may be impossible.
But for qualia, does this?
“We have no evidence that our qualia still exists since to our knowledge, P-zombies are perfectly intelligible without qualia and we don’t notice changes in our qualia. However, if we can invent qualia detectors, we would expect to detect qualia in humans, based on thought experiments.”
It doesn’t in my understanding, because it seems like one of the key points of qualia is that we can notice it right now and that no one else can ever notice it. Except that according to one of its core proponents, we can’t notice it either. I mean, I can form sentences about FTL or souls and future expectations that seem reasonable, but even those types of sentences seem to fail at talking about qualia properly.
2: You’d likely not notice anything, because if you did, a P-Zombie would not be just like you.
P-zombies are behaviourally like me. That means I would not act as if I noticed anything. OTOH, qualia are part of consciousness, so my conscious awareness would change. I would be compelled to lie, in a sense.
Would you lie then, or are you lying now? You have just said that your experience of qualia is not evidence even to yourself that you experience qualia.
Or is there a possible conscious awareness change that has zero effect? Can doublethink go to that metalevel?
I believe in the existence of at least one non-p-zombie, because I have at least indirect evidence of one in the form of my own qualia.
I must not be working with the right / same conception of p-zombies then, because to me qualia experience provides exactly zero Bayesian evidence for or against p-zombies on its own.
“A philosophical zombie or p-zombie in the philosophy of mind and perception is a hypothetical being that is indistinguishable from a normal human being except in that it lacks conscious experience, qualia, or sentience.[1] “—WP
I am of course taking a p-zombie to be lacking in qualia. I am not sure that alternatives are even coherent, since I don’t see how other aspects of consciousness could go missing without affecting behaviour.
Wait, those premises just seem wrong and contradictory.
1. To even work in the thought experiment, p-zombies live in a world with physics and logic identical to our own (with possibility of added components).
2. In principle, qualia can either be generated by physics, logic, or something else (i.e. magic), or any combination thereof.
3. There is no magic / something else.
4. We have qualia, generated apparently only by physics and/or logic.
5. P-zombies have the exact same physics and logic, but still no qualia.
???
My only remaining hypothesis is that p-zombies live in a world where the physics and logic are there, but there is also something else entirely magical that does not seem to exist in our universe that somehow prevents their qualia, by hypothesis. Very question-begging. Also unnecessarily complex. I am apparently incapable of working with thought experiments that defy the laws of logic by their premises.
You seem to have done a 180 shift from insisting that there are only zombies to saying there are no zombies.
3. There is no magic / something else.
[..]
I am apparently incapable of working with thought experiments that defy the laws of logic by their premises.
I don’t know of any examples. Typically zombie gedankens do not take 3 as a premise, and conclude the opposite—that there is an extra non-physical ingredient as a conclusion.
You seem to have done a 180 shift from insisting that there are only zombies to saying there are no zombies.
Yes. My understanding of p-zombies was incorrect/different. If p-zombies have no qualia by the premises, as you’ve shown me a clear definition of, then we can’t be p-zombies. (ignoring the details and assuming your experiences are like my own, rather than the Lords of the Matrix playing tricks on me and making you pretend you have qualia; I think this is a reasonable assumption to work with)
I don’t know of any examples. Typically zombie gedankens do not take 3 as a premise, and conclude the opposite—that there is an extra non-physical ingredient as a conclusion.
So they write their bottom line in the premises of the thought experiment in a concealed manner? I’m almost annoyed enough to actually give them that question they’re begging for so much.
Now E.Y.’s Zombie posts are starting to make a lot more sense.
So they write their bottom line in the premises of the thought experiment in a concealed manner?
No. Leaving physicalism out as a premise is not the same as including non-physicalism as a premise. Likewise, concluding non-physicalism is not assuming it.
There must be non-physical things to assume that there is any difference between “us” and “p-zombies”. This is a logical requirement. They posit that there effectively is a difference, in the premises right there, by asserting that p-zombies do not have qualia, while we do.
1. Premise: P-zombies have all the physical and logical stuff that we do.
2. Premise: P-zombies DO NOT have qualia.
3. Premise: We have qualia.
4. Implied premise: This thought experiment is logically consistent.
The only way 4 is possible is if it is also implied that:
5. Implied premise: Either we, or p-zombies, have something magical that adds or removes qualia.
By the reasoning which prompts them to come up with the thought experiment in the first place, it cannot be the zombies that have an additional magical component, because this would contradict the implied premise that the thought experiment is logically consistent (and would question the usefulness and purpose of the thought experiment).
Therefore:
“Conclusion”: We have something magical that gives us qualia.
The p-zombie thought experiment is usually intended to prove that qualia is magical, yes. This is one of those unfortunate cases of philosophers reasoning from conceivability, apparently not realising that such reasoning usually only reveals stuff about their own mind.
I wouldn’t say “qualia is magic” is actually a premise, but the argument involves assuming “qualia could be magical” and then invalidly dropping a level of “could”.
In this case the “could” is an epistemic “could”—“I don’t know whether qualia is magical”. Presumably, iff qualia is magical, then p-zombies are possible (ie. exist in some possible world, modal-could), so we deduce that “it epistemic-could be the case that p-zombies modal-could exist”. Then I guess because epistemic-could and modal-could feel like the same thing¹, this gets squished down to “p-zombies modal-could exist” which implies qualia is magical.
Anyway, the above seems like a plausible explanation of the reasoning, although I haven’t actually talked to any philosophers to ask them if this is how it went.
¹ And could actually be (partially or completely) the same thing, since unless modal realism is correct, “possible worlds” don’t actually exist anywhere. Or something. Regardless, this wouldn’t make the step taken above legal, anyway. (Note that the previous “could” there is an epistemic “could”! :p)
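The level-slip described above can be laid out step by step. This is my own gloss, not the commenter’s notation: E stands for the epistemic “could” and ◇ for the modal “could”, with z = “zombies exist in some possible world” and Mag(q) = “qualia are magical”:

```latex
% Gloss of the epistemic/modal conflation; labels are my own.
\begin{align*}
1.&\ \Diamond z \leftrightarrow \mathrm{Mag}(q)
   && \text{zombies are possible iff qualia are magical}\\
2.&\ E\,\mathrm{Mag}(q)
   && \text{epistemic: for all we know, qualia are magical}\\
3.&\ E\,\Diamond z
   && \text{from 1 and 2}\\
4.&\ \Diamond z
   && \text{invalid: the epistemic $E$ is silently dropped}\\
5.&\ \mathrm{Mag}(q)
   && \text{from 1 and 4}
\end{align*}
```

Step 4 is the squishing of “it epistemic-could be that p-zombies modal-could exist” down to “p-zombies modal-could exist”.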
I had always understood that “We have something magical that gives us qualia” was one of the explicit premises of p-zombies (p-zombies being defined as that which lacks that magical quality, but appears otherwise human). One could then see p-zombies as a way to try to disprove the “something magical” hypothesis by contradiction—start with someone who doesn’t have that magical something, continue on from there, and stop once you hit a contradiction.
“We have something magical that gives us qualia” was one of the explicit premises of p-zombies
Nope. eg.
1. According to physicalism, all that exists in our world (including consciousness) is physical.
2. Thus, if physicalism is true, a logically-possible world in which all physical facts are the same as those of the actual world must contain everything that exists in our actual world. In particular, conscious experience must exist in such a possible world.
3. In fact we can conceive of a world physically indistinguishable from our world but in which there is no consciousness (a zombie world). From this (so Chalmers argues) it follows that such a world is logically possible.
4. Therefore, physicalism is false. (The conclusion follows from 2. and 3. by modus tollens.)
(Chalmers’ argument according to WP)
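The WP summary can be rendered in standard modal notation. The symbols are my gloss, not Chalmers’s own: P is the conjunction of all physical facts, Q is “conscious experience exists”:

```latex
% Modal rendering of the zombie argument as summarized by WP.
% P = all physical facts, Q = conscious experience exists.
\begin{align*}
1.&\ \text{Physicalism: } \Box(P \rightarrow Q)\\
2.&\ \text{Zombie world conceivable, hence possible: } \Diamond(P \land \neg Q)\\
3.&\ \Diamond(P \land \neg Q) \leftrightarrow \neg\Box(P \rightarrow Q)\\
4.&\ \therefore \neg\Box(P \rightarrow Q)
   \quad \text{(physicalism is false, by modus tollens)}
\end{align*}
```

The contested step is 2: moving from conceivability to genuine possibility.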
One could then see p-zombies as a way to try to disprove the “something magical” hypothesis by contradiction
Thus, if physicalism is true, a logically-possible world in which all physical facts are the same as those of the actual world must contain everything that exists in our actual world. In particular, conscious experience must exist in such a possible world.
In fact we can conceive of a world physically indistinguishable from our world but in which there is no consciousness (a zombie world). From this (so Chalmers argues) it follows that such a world is logically possible.
These two steps are contradictory. In the first one, you state that a world physically indistinguishable from ours must include consciousness; then in the very next point, you consider a world physically indistinguishable from ours which does not include consciousness to be logically possible—exactly what the previous step claims is not logically possible.
So the second is then implicitly assuming that physicalism is not true; it seems to me that the whole argument is basically a longwinded way of saying “I can’t imagine how consciousness can possibly be physical, therefore since I am conscious, physicalism is false”.
One might as easily imagine a world physically indistinguishable from ours, but in which there is no gravity, and thence conclude that gravity is not physical but somehow magical.
For some values of “imagine”. Given relativity, it would be pretty difficult to coherently unplug gravity from mass, space and acceleration. It would be easier under Newton. I conclude that the unpluggability of qualia means we just don’t have a relativity-grade explanation of them, an explanation that makes them deeply interwoven with other things.
I conclude that the unpluggability of qualia means we just don’t have a relativity-grade explanation of them, an explanation that makes them deeply interwoven with other things.
Inertia and mass are the same thing. You probably meant “the same proportionality constant between mass and gravitational force”, that is, imagine that the value of Newton’s constant G was different.
But this (like CCC’s grandparent post introducing the gravity analogy) actually goes in Chalmers’ favor. Insofar as we can coherently imagine a different value of G with all non-gravitational facts kept fixed, the actual value of G is a new “brute fact” about the universe that we cannot reduce to non-gravitational facts. The same goes for consciousness with respect to all physical facts, according to Chalmers. He explicitly compares consciousness to fundamental physical quantities like mass and electric charge.
The problem is that one aspect of the universe being conceptually irreducible at the moment (which is all that such thought experiments prove) does not imply it might forever remain so when fundamental theory changes, as Peterdjones says. Newton could imagine inertia without gravity at all, but after Einstein we can’t. Now we are able to imagine a different value of G, but maybe later we won’t (and I can actually sketch a plausible story of how this might come to happen if anyone is interested).
No, I meant a form of matter which coexisted with current forms of matter but which was accelerated by a force disproportionately to the amount of force exerted through the gravity force. One such possibility would be something that is ‘massless’ in that it isn’t accelerated by gravity but that has electric charge.
And by definition, the value of G is equal to 1, just like every other proportionality constant. I wasn’t postulating that MG/NS^2 have a different value.
One might as easily imagine a world physically indistinguishable from ours, but in which there is no gravity, and thence conclude that gravity is not physical but somehow magical.
Oooh, good one. I’m trying this if someone ever seriously tries to argue p-zombies with me.
Within this discussion, I’ve tried to consistently use “magic” as meaning “not physics or logic”. Essentially, things that, given a perfect model of the (physical) universe that we live in, would be considered impossible or would go against all predictions for no cause that we can attribute to physics or logic or both.
So dualism is only one example, another could be intervention by the Lords of the Matrix (depending on how you draw boundaries for “universe that we live in”), and God or ontologically basic mental entities could be others.
So the assertion “we have something magical” is equivalent to “qualia is made of nonlogics” (although “nonlogics” is arguably still much more useful than “nonapples” as a conceptspace pointer).
Errr, yes... that is the intended conclusion. But I don’t think you can say an argument is question-begging because the intended conclusion follows from the premises taken jointly.
And how, pray tell, did they reach into the vast immense space of possible hypotheses and premises, and pluck out this one specific set of premises which just so happens that if you accept it completely, it inevitably must result in the conclusion that we have something magical granting us qualia?
The begging was done while choosing the premises, not in one of the premises individually.
Premise: All Bob Chairs must have seventy three thousand legs exactly.
Premise: Things we call chairs are illusions unless they are Bob Chairs.
Premise: None of the things we call chairs have exactly seventy three thousand legs.
Therefore, all of the things we call chairs are illusions and do not exist.
I seriously don’t see how the above argument is any more reasonable and any more or less question-begging than the p-zombie argument I’ve made in the grandparent. No single premise here assumes the conclusion, right? So no problem!
ETA: Perhaps it’s more clear if I just say that in order for the premises of the grandparent to be logically valid, one must also assume as a premise that having the information patterns of the human brain without creating qualia is possible in the first place. This is the key point that is the source of the question begging: It is assumed that the brain interactions do not create qualia, implicitly as part of the premises, otherwise the statement “P-zombies have the same brain interactions that we do but no qualia” is directly equivalent to “A → B, A, ¬B”.
So for A (brain interactions identical to us), B (possess qualia), and C (has magic):
(A → B) <==> (¬B → ¬A)
((C → B) OR ((A AND C) → B)) <==> ¬(A → B)
A
¬B
Refactor to one single “question-begging” premise: ((((C → B) OR ((A AND C) → B)) → C) <==> ¬(¬B → ¬A)) AND A AND ¬B
And how, pray tell, did they reach into the vast immense space of possible hypotheses and premises, and pluck out this one specific set of premises which just so happens that if you accept it completely, it inevitably must result in the conclusion that we have something magical granting us qualia?
I suppose they have the ability to formulate arguments that support their views. Are you saying that the honest way to argue is to fling premises together at random and see what happens?
The begging was done while choosing the premises, not in one of the premises individually.
Joint implication by premises is validity, not petitio principii.
Premise: All Bob Chairs must have seventy three thousand legs exactly.
Premise: Things we call chairs are illusions unless they are Bob Chairs.
Premise: None of the things we call chairs have exactly seventy three thousand legs.
Therefore, all of the things we call chairs are illusions and do not exist.
That is an example of a No True Scotsman fallacy, or argument by tendentious redefinition. I don’t see the parallel.
However, all they’ve done is pick specific premises that hide clever assumptions that logically must end up with their desired conclusion, without any reason in particular to believe that their premises make any sense. See the amateur logic I did in my edits of the grandparent.
It is very much assumed, by asserting the first, third and fourth premises, that qualia does not require brain interactions, as a prerequisite for positing the existence of p-zombies in the thought experiment.
I have, but unfortunately that’s mostly because I don’t know the formal nomenclature and little details of writing conceivability and possibility logical statements.
I wouldn’t really trust myself to write formal logic with conceivability and probability without missing a step or strawmanning one of the premises at some point, with my currently very minimal understanding of that stuff.
But putting in the statement that zombies have all of the physical and logical characteristics of people, but lack some other characteristic, requires that some non-physical characteristic exists. You can’t say “I don’t assume magic” and then assume a magician!
Well, I understand that if consciousness was physical, but didn’t affect our behavior, then removing that physical process would result in a zombie. That’s usually the example given, not magic.
The usual p-zombie argument in the literature does not assume consciousness is entirely physical. Which is not the same as assuming it is non-physical...
Just to be clear, the fact that they talk about bridging laws or such doesn’t mean they didn’t generate the idea with magical thinking, or that it has a hope in hell of being actually true. It just means they managed to put a band-aid over that particular fallacy.
No comment. That’s not what I said and I’m not saying it now. My point is that, while the p-zombie argument may have been formulated with “magical” explanations in mind, it does not directly reference them in the form usually presented.
I see little point in ignoring what an argument states explicily in favour of speculations about what the formulaters had in mind. I also think that rhetorical use of the word “magic” is mind killing. Quantum teleportation might seem magical to a 19th century physicist, but it still exists.
Which is why my point is that that the argument makes no mention of “magic”.
My point is that, while the p-zombie argument may have been formulated with “magical” explanations in mind, it does not directly reference them in the form usually presented.
Removing something physical doesn’t create a p-zombie, it creates a lobotomized person. If there was a form of brain damage that could not be detected by any means and had no symptoms, would it be a possible side effect of medication?
Compare two people who are physically identical except for one thing which doesn’t change anything else micro or macro scale. Clearly, one of them is a p-zombie, because that one lacks qualia.
I still don’t understand what the difference is between someone who lacks consciousness but is otherwise identical to someone who has consciousness.
With actual humans, p-zombies are almost certainly impossible. But imagine a world in which humans aren’t controlled by their brains; the Zombie Fairy intervenes and makes them act as she predicts they would act. Now the Zombie Fairy is so good at her job that the people of this world experience controlling their own bodies; but in actuality, they have no effect on their actions (except by coincidence). If one of their brains was somehow altered without the Fairy’s knowledge, they would discover their strange predicament (but be unable to tell anyone—they would live out their life as a silent observer). If one of their brains was destroyed without the Fairy’s noticing, they would continue as a lifeless puppet, indistinguishable from regular humans—a p-zombie.
Now, it could be argued that the Fairy—who is what is usually referred to as a Zombie Master—is herself conscious, and as such these zombies are not true p-zombies. But this should give you some idea of what people are imagining when they say “p-zombie”.
That scenario sounds identical to “everybody is a p-zombie”.
It is! Unless of course you happen to be one of the poor people who exist solely to grant said zombies qualia.
Is there also a perception fairy, since perceiving the zombie fairy’s influence doesn’t create any physical changes in brain state or behavior?
Perception proceeds as normal in this counterfactual world. Of course, this world is not necessarily identical to our world, depending on how obvious the Perception Fairy is.
Does “As normal” mean that noticing the effects of the zombie fairy results in electrochemical changes in the brain that are different from those which occur in the absence of noticing those effects?
For some reason I can understand it better if I think of a sentient computer with standard input devices as things that it considers “real”, and a debugger that reads and alters memory states at will, outside the loop of what the machine can know. Assuming that such a system could be self-aware in the same sense that I think I am, how would it respond if every time it asked a class of question, the answer was modified by ‘magic’?
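The debugger analogy above can be made concrete with a toy sketch (all class and key names here are hypothetical, invented purely for illustration): the “machine” can only answer questions by reading its own memory, while the “debugger” sits outside that loop and rewrites a certain class of answers before the machine ever sees them.

```python
# A toy sketch of the debugger analogy (all names are hypothetical).
# The machine answers questions by reading its own memory; the
# debugger sits outside that loop and rewrites certain answers
# before the machine ever sees them.

class Machine:
    def __init__(self):
        # The machine's only access to "reality" is this memory.
        self.memory = {"status": "running", "observer": "none"}

    def ask(self, question):
        return self.memory.get(question, "unknown")

class Debugger:
    """Intercepts a class of questions, outside what the machine can know."""
    def __init__(self, machine, intercepted):
        self.machine = machine
        self.intercepted = intercepted

    def ask(self, question):
        answer = self.machine.ask(question)
        if question in self.intercepted:
            # The 'magic' rewrite: the machine can never see
            # evidence of the debugger itself.
            return "none"
        return answer

m = Machine()
m.memory["observer"] = "debugger attached"   # the fact IS in memory...
d = Debugger(m, intercepted={"observer"})
print(d.ask("status"))    # "running" -- ordinary queries pass through
print(d.ask("observer"))  # "none" -- ...but this class of question is always rewritten
```

From inside, no query the machine can pose will ever return evidence of the interception, which is the sense in which the answer is modified by ‘magic’.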
Does “As normal” mean that noticing the effects of the zombie fairy results in electrochemical changes in the brain that are different from those which occur in the absence of noticing those effects?
...yes? How would one notice something without changing brain-state to reflect that?
For some reason I can understand it better if I think of a sentient computer with standard input devices as things that it considers “real”, and a debugger that reads and alters memory states at will, outside the loop of what the machine can know. Assuming that such a system could be self-aware in the same sense that I think I am, how would it respond if every time it asked a class of question, the answer was modified by ‘magic’?
I think you may have misunderstood. The fairy controls the bodies, but has perfectly predicted in advance what the human would have done. Thus whatever they try to do is simultaneously achieved by the fairy; but they have no effect on their bodies. The fairy doesn’t alter their brains at all. If something else did alter their brain, but for some reason the fairy didn’t notice and update her predictions, then they would become “out of sync” with their body.
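The sync/out-of-sync mechanism described above can be sketched in a few lines (a minimal illustration with hypothetical names, not anyone’s actual model): the body only ever executes the fairy’s prediction, made from a copy of the brain, so the brain’s own output never reaches the body; the two stay in step only as long as the copy stays accurate.

```python
# A toy sketch of the Zombie Fairy setup (all names are hypothetical).
# The brain computes an intended action, but the body only ever
# executes the fairy's prediction, made from a copy of the brain.

def brain(stimulus):
    # the "real" decision procedure
    return "wave" if stimulus == "friend" else "ignore"

class Fairy:
    def __init__(self, brain_copy):
        self.predict = brain_copy  # perfect model, fixed at copy time

    def move_body(self, stimulus):
        # The brain's actual output is never consulted.
        return self.predict(stimulus)

fairy = Fairy(brain)

intended = brain("friend")            # "wave" -- what the person tries to do
actual = fairy.move_body("friend")    # "wave" -- what the body does
assert intended == actual             # perfectly in sync: no effect, yet no mismatch

# If the brain is altered without the fairy updating her copy,
# brain and body fall out of sync -- the "silent observer" case:
def altered_brain(stimulus):
    return "run"

intended = altered_brain("friend")    # "run"
actual = fairy.move_body("friend")    # still "wave"
assert intended != actual
```

The point of the sketch is only that perfect prediction makes the causal bypass undetectable from the inside, until the copy and the original diverge.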
You need to specify whether your “putting in” is assuming or concluding. In general, it would help to refer to a concrete example of a p-zombie argument from a primary source.
Defining. A p-zombie is defined by all of the primary sources as having all of the physical qualities that humans have, but lacking something that humans have.
A magician is defined as a human that can do magic. Magicians (people identical to humans but with supernatural powers) don’t prove anything about physicalism any more than p-zombies do, unless it can be shown that either are exemplified.
unless it can be shown that either are exemplified.
The literature suggests that p-zombies can be significant if they are only conceptually possible. In fact, zombie theorists like Chalmers think they are naturalistically impossible and so cannot be exemplified. You may not like arguments from conceptual possibility, but he has argued for his views, whereas you have so far only expressed opinion.
Then the literature suggests that magicians can be significant if they are only conceptually possible. And the conceptual possibility of non-physicalism disproves physicalism.
Magicians are defined as physically identical to humans and p-zombies but they have magic. Magic has no physical effects, doesn’t even trigger neurons, but humans with magic experience it and regular humans and p-zombies don’t.
So it has all of the characteristics of qualia. Any evidence for qualia is also evidence for this type of magic.
Yes. The argument of the grandparent is logically consistent AFAICT.
P-zombies are non-self-contradictory IFF qualia come from nonlogics and nonphysics.
Qualia come from nonlogics and nonphysics IFF nonlogics and nonphysics are possible. (This is trivially obvious.)
P(Magicians | “nonlogics and nonphysics are possible”) > P(Magicians | ¬”nonlogics and nonphysics are possible”)
ETA: That last one is probably misleading / badly written. Is there a proper symbol for “No definite observation of X or ¬X”, AKA the absence of this piece of evidence?
If qualia are defined such that it is conceptually possible for one person to experience qualia while a physically identical person does not, then qualia are defined to be non-physical.
Didn’t we have this exact same argument? Even if qualia are generated by our (physical) brains, this doesn’t mean that they could counterfactually be epiphenomenal if something was reproducing the effects they have on our bodies.
The same could be said of cats: Even if cats are part of the physical universe, they could counterfactually be epiphenomenal if something was reproducing the effects they have on the world.
How does the argument apply to qualia and not to cats?
Generating effects indistinguishable from the result of an ordinary cat—from reflected light to half-eaten mice. Of course, there are a few … extra effects in there. So you know none of you are ordinary cats.
The epiphenomenal cats, on the other hand, are completely undetectable. Except to themselves.
I’m not granting cats a point of view for this discussion: they are something that we can agree clearly exists and we can describe their boundaries with a fair degree of precision.
What do these ‘extra effects’ look like, and are they themselves proof that physicalism is wrong?
The whole point was that if the cats have a point of view, then they have the information to posit themselves; even though an outside observer wouldn’t.
It’s subjective information. I can’t exactly show my qualia to you; I can describe them, but so can a p-zombie.
Didn’t I say I wasn’t going to discuss qualia with you until you actually knew what they were? Because you’re starting to look like a troll here. Not saying you are one, but …
So, you’re saying that it is subjective whether qualia have a point of view, or the ability to posit themselves?
Because I have all of the observations needed to say that cats exist, even if they don’t technically exist. I do not have the observations needed to say that there is a non-physical component to subjective experience.
Y’know, I did say I wasn’t going to discuss qualia with you unless you knew what they were. Do some damn research, then come back here and start arguments about them.
I’m very confused. Are you implying that experiencing qualia is no reason to posit that qualia exist, period?
Or maybe you’re just saying “Hey, unless the cats have conscious self-aware minds that can experience cats, then they still can’t either!”—which I took for granted and assumed the jump from there to “assuming cats have the required mental parts” was a trivial inference to make.
OK, it’s just that the statement “if something is reproducing the effect of cats on the world we have no reason to posit cats as existing” declares that something which is not really a “cat” as we perceive it, but only an “effect of a cat”, does not “exist”. Ergo, if you are only an effect of a cat, you don’t exist as a cat.
Maybe your objection is that we should taboo and dissolve that whole “existing” thing?
Wouldn’t that be nice, but unfortunately EY-style realism and my version of instrumentalism seem to diverge at that definition.
Re qualia, I don’t understand what you are asking. The term means no more to me than a subroutine in a reasonably complex computer program, if currently run on a different substrate.
And, if I understand correctly, this subroutine exists (and is felt / has effect on its host program) whether or not it “exists as qualia” in the particular sense that some clever arguer wants to define qualia as anything other than that subroutine. The fact that there is an effect of the subroutine is all that is required for the subroutine to exist in the first sense, while whether it is “the subroutine” or only a mimicking effect is only relevant for the second sense of “exist”, which is irrelevant to you.
In this case, feel free to assume no-one ever tries to observe cat brains. The “simulation” only has to reproduce your actions, which it does with magic.
Oh, well there’s your problem then. You’re not part of “the effect of cats”. That’s stuff like air displacement, reflected light, purring, that sort of thing.
If you’re using some nonstandard epistemology that doesn’t distinguish between observations that point to something and the thing itself, then nothing. Otherwise the difference between a liar and a reality warper.
Interesting point. Observations are certainly effects, but you’re right, not all effects are observations. Of course, the example wouldn’t be hurt by my specifying that they only bother faking effects that will lead to observations ;)
the example wouldn’t be hurt by my specifying that they only bother faking effects that will lead to observations ;)
I think it would. I think it’s not the same example at all anymore.
Something that reproduces all effects of cats is effectively producing all the molecular interactions and neurons and flesh and blood and fur that we think are what produces our observations of cats.
On the other hand, something that only reproduces the effects that lead directly to observations is, in its simplest form, something that analyzes minds and finds out where to inject data into them to make these minds have the experiences of the presence of cats, and analyzes what other things in the world a would-be-cat would change, and just change those directly (i.e. if a cat would’ve drank milk and produced feline excrement, then milk disappears and feline excrement appears, and a human’s brain is modified such that the experience of seeing a cat drink milk and make poo is simulated).
Something that reproduces all effects of cats is effectively producing all the molecular interactions and neurons and flesh and blood and fur
Not unless something is somehow interacting with their neurons, which I stated isn’t happening for simplicity, and most of the time not for the blood or flesh.
On the other hand, something that only reproduces the effects that lead directly to observations is, in its simplest form, something that analyzes minds and finds out where to inject data into them to make these minds have the experiences of the presence of cats, and analyzes what other things in the world a would-be-cat would change, and just change those directly (i.e. if a cat would’ve drank milk and produced feline excrement, then milk disappears and feline excrement appears, and a human’s brain is modified such that the experience of seeing a cat drink milk and make poo is simulated).
Oh, I meant the interactions occur where they would if the cat was real, but these increasingly-godlike fairies are lazy and don’t bother producing them if their magic tells them it wouldn’t lead to an observation.
My (admittedly lacking) understanding of Information Theory precludes any possibility of perfectly reproducing all effects of the presence of cats throughout the universe (or multiverse or whatever) without having in some form or another a perfect model or simulation of all the individual interactions of the base elements which cats are made of. This would, as it contains the same patterns within the model which when made of “physical matter” produce cats, essentially still produce cats.
So if there’s a mechanism somewhere making sure that the reproduction is perfect, it’s almost certainly (to my knowledge) “simulating” the cats in some manner, in which case the cats are in that simulation and perceive the same experiences they would if they were “really” there in atoms instead of being in the simulation.
If you posit some kind of ontologically basic entity that somehow magically makes a universal consistency check for the exact worldstates that could plausibly be computed if the cat were present, without actually simulating any cat, then sure… but I think that’s also not the same problem anymore. And it requires accepting a magical premise.
Oh, right. Yup, anything simulating you that perfectly is gonna be conscious—but it might be using magic. For example, perhaps they pull their data out of a parallel universe where you ARE real. Or maybe they use some black-swan technique you can’t even imagine. They’re fairies, for God’s sake. And you’re an invisible cat. Don’t fight the counterfactual.
Haha, that one made me laugh. Yes, it’s fighting the counterfactual a bit, but I think that this is one of the reasons why there was a chasm of misunderstandings in this and other sub-threads.
Anyway, I don’t see any tangible things left to discuss here.
Oh, you mean we shouldn’t assume we’re the same as the other cats. Obviously there’s some possibility that we’re unique, but (assuming our body is “simulated” as well, obviously) it seems like all “cats” probably contain epiphenomenal cats as well. Do you think everyone else is a p-zombie? Obviously it’s a remote possibility, but...
Oh, you mean we shouldn’t assume we’re the same as the other cats.
No, I did not mean that, unless one finds some good evidence supporting this additional assumption. My point was quite the opposite, that your statement “if something is reproducing the effect of cats on the world we have no reason to posit cats as existing” does not need a qualifier.
No, I did not mean that, unless one finds some good evidence supporting this additional assumption. My point was quite the opposite, that your statement “if something is reproducing the effect of cats on the world we have no reason to posit cats as existing” does not need a qualifier.
Look, all “cats” are actually magical fairies using their magic to reproduce the effect of cats, yet I find myself as a cat—one whose effect on the world consists of a fairy pretending to be me so well that even I don’t notice (except just now, obviously). Thus, for the one epiphenomenal cat I can know about—myself—I am associated with a “cat” that perfectly duplicates my actions. I can’t check whether all “cats” have similar cats attached, since they would be epiphenomenal, but it seems likely, based on myself, that they do.
Do you think everyone else is a p-zombie?
Not sure why you bring that silly concept up...
Because the whole point of this cat metaphor was to make a point about p-zombies. That’s what they are. They’re p-zombies for cats instead of qualia.
Because the whole point of this cat metaphor was to make a point about p-zombies. That’s what they are. They’re p-zombies for cats instead of qualia.
Well, the point was to point out that we only think things exist because we experience them, and therefore that anything which duplicates the experience is as real as the original artifact.
Suppose there were to be no cats, but only a magical fairy which knocks things from the mantelpiece and causes us to hallucinate in a consistent manner (among other things). There is no reason to consider that world distinguishable, even in principle, from the standard model.
Now, suppose that you couldn’t see cats, but instead could see the ‘cat fairy’. What is different now, assuming that the cat fairy is working properly and providing identical sensory input as the cats?
There are two differences: the presence of the fairy (which can be observed … somehow) and the possibility of deviating from the mind. P-zombies are described as acting just like humans, but lack consciousness. “Cats” are generally like the human counterparts to p-zombies (who act just the same—by definition—but have epiphenomenal consciousness.)
TL;DR: it’s observable in principle. But I, as author, have decreed that you aren’t getting to check if your friends are cats as well as “cats”.
Y’know, I’m starting to think this may have been a poor example. It’s a little complicated.
If the fairy is observable despite being in principle not observable… I break.
If it is in principle possible to experience differently from what a quantum scan of the brain and body would indicate, but behave in accordance with physicalism … how would you know if what you experienced was different from what you thought you experienced, or if what you thought was different from what you honestly claimed that you thought?
That would seem to be close to several types of abnormal brain function, where a person describes themself as not in control of their body. I think those cases are better explained by abnormal internal brain communication, but further direct evidence may show that the ‘reasoning’ and ‘acting’ portions of some person are connected similarly enough to normal brains that they should be working the same way, but aren’t. If there is a demonstrated case either of a pattern of neurons firing corresponding to similar behavior in all typical brains and a different behavior in a class of brains of people with such abnormal functioning (or in physically similar neurons firing differently under similar stimuli), then I would accept that as evidence that the fairy perceived by those people existed.
If the fairy is observable despite being in principle not observable… I break.
It’s observable. The cats are epiphenomenal, and thus unobservable, except to themselves.
If it is in principle possible to experience differently from what a quantum scan of the brain and body would indicate, but behave in accordance with physicalism … how would you know if what you experienced was different from what you thought you experienced, or if what you thought was different from what you honestly claimed that you thought?
Pardon?
That would seem to be close to several types of abnormal brain function, where a person describes themself as not in control of their body. I think those cases are better explained by abnormal internal brain communication, but further direct evidence may show that the ‘reasoning’ and ‘acting’ portions of some person are connected similarly enough to normal brains that they should be working the same way, but aren’t. If there is a demonstrated case either of a pattern of neurons firing corresponding to similar behavior in all typical brains and a different behavior in a class of brains of people with such abnormal functioning (or in physically similar neurons firing differently under similar stimuli), then I would accept that as evidence that the fairy perceived by those people existed.
Well, if they can tell you what the problem is then they clearly have some control. More to the point, it is a known feature of the environment that all observed cats are actually illusions produced by fairies. It is a fact, although not generally known, that there are also epiphenomenal (although acted upon by the environment) cats; these exist in exactly the same space as the illusions and act exactly the same way. If you are a human, this is all fine and dandy, if bizarre. But if you are a sentient cat (roll with it) then you have evidence of the epiphenomenal cats, even though this evidence is inherently subjective (since presumably the illusions are also seemingly sentient, in this case.)
If it is in principle possible to experience differently from what a quantum scan of the brain and body would indicate, but behave in accordance with physicalism … how would you know if what you experienced was different from what you thought you experienced, or if what you thought was different from what you honestly claimed that you thought?
Pardon?
How could you tell if you were experiencing something differently from the way a p-zombie would (or, if you are a p-zombie, if you were experiencing something differently from the way a human would)?
But if you are a sentient cat (roll with it) then you have evidence of the epiphenomenal cats, even though this evidence is inherently subjective (since presumably the illusions are also seemingly sentient, in this case.)
In every meaningful way, the cat fairy is a cat. There is no way for an epiphenomenal sentient cat to differentiate itself from a cat fairy, nor any way for a cat fairy to differentiate itself from whatever portions of ‘cats’ it controls (without violating the constraints on cat fairy behavior). Of course, there’s also the conceivability of epiphenomenal sentient ghosts which cannot have any effect on the world but still observe. (That’s one of my death nightmares—remaining fully perceptive and cognitive but unable to act in any way.)
You seem to be somewhat confused about the notion of a p-zombie. A p-zombie is something physically identical to a human, but without consciousness. A p-zombie does not experience anything in any way at all. P-zombies are probably self-contradictory.
How could you tell if you were experiencing something differently from the way a p-zombie would (or, if you are a p-zombie, if you were experiencing something differently from the way a human would)?
I am experiencing something, therefore I am not a p-zombie.
Consider the possibility that you are not experiencing everything that humans do. Can you provide any evidence, even to yourself, that you are? Could a p-zombie provide that same evidence?
Are you asking what I would experience? Because I wouldn’t. Not to mention that such a thing can’t happen if, as I expect, subjective experience arises from physics.
If you cannot find any experimental differences between you and a you NOT experiencing
I cannot present you with evidence that I am experiencing, except maybe by analogy with yourself. I, however, know that I experience because I experience it.
How could you tell if you were experiencing something differently from the way a p-zombie would (or, if you are a p-zombie, if you were experiencing something differently from the way a human would)?
Because p-zombies aren’t conscious. By definition.
In every meaningful way, the cat fairy is a cat. There is no way for an epiphenomenal sentient cat to differentiate itself from a cat fairy, nor any way for a cat fairy to differentiate itself from whatever portions of ‘cats’ it controls (without violating the constraints on cat fairy behavior). Of course, there’s also the conceivability of epiphenomenal sentient ghosts which cannot have any effect on the world but still observe. (That’s one of my death nightmares—remaining fully perceptive and cognitive but unable to act in any way.)
Well, the cat does have an associated cat fairy. So, since the only cat fairy who’s e-cat it could observe (its own) has one, I think it should rightly conclude that all cat fairies have cats. But yes, epiphenomenal sentient “ghosts” are possible, and indeed the p-zombie hypothesis requires that the regular humans are in fact such ghosts. They just don’t notice. Yes, there are people arguing this is true in the real world, although not all of them have worked out the implications.
Now conceive of something which is similar to consciousness, but distinct; like consciousness, it has no physical effects on the world, and like consciousness, anyone who has it experiences it in a manner distinct from their physicality. Call this ‘magic’, and people who posses it ‘magi’.
What aspect does magic lack that consciousness has, such that a p-zombie cannot consider if it is conscious, but a human can ask if they are a magi?
Who said consciousness has no effects on the physical world? Apart from those idiots making the p-zombie argument that is. Pretty much everyone here thinks that’s nonsense, including me and, statistically, probably srn347 (although you never know, I guess.)
Regarding your Magi, if it affects their brain, it’s not epiphenomenal. So there’s that.
And the point I am trying to make is that p-zombies are not only a coherent idea, but compatible with human-standard brains as generally modelled on LW. That they don’t in any way demonstrate the point they were intended to make is quite another thing.
And the point I am trying to make is that p-zombies are not only a coherent idea, but compatible with human-standard brains as generally modelled on LW.
Yes, it merely requires redefining things like ‘conscious’ or ‘experience’ (whatever you decide p-zombies do not have) to be something epiphenomenal and incidentally non-existent.
Um, could you please explain this comment? I think there’s a fair chance you’ve stumbled into the middle of this discussion and don’t know what I’m actually talking about (except that it involves p-zombies, I guess.)
I think there’s a fair chance you’ve stumbled into the middle of this discussion and don’t know what I’m actually talking about (except that it involves p-zombies, I guess.)
I know only the words spoken, not those intended. (And concluded early in the conversation that the entire subthread should be truncated and replaced with a link. So much confusion and muddled thinking!)
Seems reasonable. For reference, then, I suggested the analogous thought experiment of fairies using magic to reproduce all the effects of cats on the environment. Also, there are epiphenomenal ghost cats that occupy the same space and are otherwise identical to the fairies’ illusions, down to the subatomic level. An outside observer would, of course, have no reason to postulate these epiphenomenal cats, but if the cats themselves were somehow conscious, they would.
This was intended to help with understanding p-zombies, since it avoids the … confusing … aspects.
Well, I personally find it an interesting concept. It’s basically a reformulation of standard Sequences stuff, though, so it shouldn’t be surprising, at least ’round here.
Unless you actually understand what “qualia” means, I’m not going to bother discussing the topic with you. If you have, in fact, done the basic research necessary to discuss p-zombies, then I’m probably misinterpreting you in some way. But I don’t think I am.
Though on the other hand, we don’t have room to take everything serious dudes say seriously—too many dudes, not enough time.
If a problem happens not to exist, then I suppose one will just have to nerve oneself and not see it. Yes, there are non-hard problems of consciousness, where you explain how a certain process or feeling occurs in the brain, and sure, there are some non-hard problems I’d wave away with “well, that’s solved by psychology somewhere.” But no amount of that has any bearing on the “hard problem,” which will remain in scare quotes as befits its effective nonexistence—finding a solution to a problem that is not a problem would be silly.
(EDIT: To clarify, I am not saying qualia do not exist, I am saying some mysterious barrier of hardness around qualia does not exist.)
This sort of thing is sufficient for me, like Achilles’ explanations were enough for Achilles. But if, say, the perception of the hard problem was causally unrelated to the actual existence of a hard problem (for epiphenomenalism, this is literally what is going on), then gosh, it would seem like no matter what explanations you heard, the hard problem wouldn’t go away—so it must be either a proof of dualism or a mistake.
But not for me. Indeed. I am pretty sure none of those articles is even intended as a solution to the HP. And if they are, why not publish them in a journal and become famous?
How an Algorithm Feels From Inside.
Intended as a solution to FW.
Stimulating the Visual Cortex Makes the Blind See
So? Every living qualiaphile accepts some sort of relationship between brain states and qualia.
if, say, the perception of the hard problem was causally unrelated to the actual existence of a hard problem (for epiphenomenalism, this is literally what is going on),
The non-parenthetical was a throwback to a whole few posts ago, where I claimed that perception of the hard problem was often from the mind projection fallacy.
Other than that, I don’t have much to respond to here, since you’re just going “So?”
The non-parenthetical was a throwback to a whole few posts ago, where I claimed that perception of the hard problem was often from the mind projection fallacy.
I can’t find the posting, and I don’t see how the MPF would relate to e12ism anyway.
The non-parenthetical was a throwback to a whole few posts ago, where I claimed that perception of the hard problem was often from the mind projection fallacy.
How did you expect to convince me? I am familiar with all the stuff you are quoting, and I still think there is an HP. So do many people.
Right. I have not made any actual arguments against the hard problem of consciousness.
EDIT: Was true when I said it, then I replied to PeterD, not that it worked (as I noted in that very post, the direct approach has little chance against a confusion)
Sorry, I was misusing terminology. Any ignorance-generating / ignorance-embodying explanation (e.g. quantum mysticism / elan vital) uses what I’m calling “mysterious substance.”
Basically I’m calling “quantum” a mysterious substance (for the quantum mystics), even though it’s not like you can bottle it.
There is a Hard Problem, because there is basically no (non-eliminative) science or technology of qualia at all. We can get a start on the problem of building cognition, memory and perception into an AI, but we can’t get a start on writing code for Red or Pain or Salty. You can tell there is basically no non-eliminative science or technology of qualia because the best LWers can quote is Dennett’s eliminative theory.
Among philosophers, the theory of qualia and the classical empiricism founded on it are also considered to be dead theories
Do you have evidence of this? The PhilPapers survey suggests that only 56.5% of philosophers identify as ‘physicalists,’ and 59% think that zombies are conceivable (though most of these think zombies are nevertheless impossible). It would also help if you explained what you mean by ‘the theory of qualia.’
Sellars’ argument, I think, rests on a few confusions and shaky assumptions. I agree this argument is still extremely widely cited, but I think that serious epistemologists no longer consider it conclusive, and a number reject it outright. Jim Pryor writes:
These anti-Given arguments deserve a re-examination, in light of recent developments in the philosophy of mind. The anti-Given arguments pose a dilemma: either (i) direct apprehension is not a state with propositional content, in which case it’s argued to be incapable of providing us with justification for believing any specific proposition; or (ii) direct apprehension is a state with propositional content. This second option is often thought to entail that direct apprehension is a kind of believing, and hence itself would need justification. But it ought nowadays to be very doubtful that the second option does entail such things. These days many philosophers of mind construe perceptual experience as a state with propositional content, even though experience is distinct from, and cannot be reduced to, any kind of belief. Your experiences represent the world to you as being a certain way, and the way they represent the world as being is their propositional content. Now, surely, its looking to you as if the world is a certain way is not a kind of state for which you need any justification. Hence, this construal of perceptual experience seems to block the step from ‘has propositional content’ to ‘needs justification’. Of course, what are ‘apprehended’ by perceptual experiences are facts about your perceptual environment, rather than facts about your current mental states. But it should at least be clear that the second horn of the anti-Given argument needs more argument than we’ve seen so far.
I mentioned in a subsequent post that there was an ambiguity in my original claim. Qualia have been used by philosophers to do two different jobs: 1) as the basis of the hard problem of consciousness, and 2) as the foundation of foundationalist theories of empiricism. Sellars’ essay, in particular, is aimed at (2), not (1), and the mention of ‘qualia’ to which I was responding was probably a case of (1). The question of physicalism and the conceivability of p-zombies isn’t directly related to the epistemic role of qualia, and one could reject classical empiricism on the basis of Sellars’ argument while still believing that the reality of irreducible qualia speaks against physicalism and for the conceivability of p-zombies.
Sellers’ argument, I think, rests on a few confusions and shaky assumptions.
That may be; it’s a bit outside my ken. Thanks for posting the quote. I won’t go trying to defend the overall organization of EPM, which is fairly labyrinthine, but I have some confidence in its critiques. I’d need more familiarity with Pryor’s work to level a serious criticism, but on the basis of your quote he seems to me to be missing the point: Sellars is not arguing that something’s appearing to you in a certain way is a state (like a belief) which requires justification. He argues that it is not tenable to think of this state as being independent of (e.g., a foundation for) a whole battery of concepts, including epistemic concepts like ‘being in standard perceptual conditions’. Looking a certain way is posterior to (a sophistication of) its being that way. Looking red is posterior to simply being red. And this is an attack on the epistemic role of qualia insofar as that theory implies that ‘looking red’ is in some way fundamental and conceptually independent.
Sellars is not arguing that something’s appearing to you in a certain way is a state (like a belief) which requires justification. He argues that it is not tenable to think of this state as being independent of (e.g., a foundation for) a whole battery of concepts, including epistemic concepts like ‘being in standard perceptual conditions’. Looking a certain way is posterior to (a sophistication of) its being that way. Looking red is posterior to simply being red. And this is an attack on the epistemic role of qualia insofar as that theory implies that ‘looking red’ is in some way fundamental and conceptually independent.
Yes, that is the argument. And I think its soundness is far from obvious, and that there’s a lot of plausibility to the alternative view. The main problem is that this notion of ‘conceptual content’ is very hard to explicate; often it seems to be unfortunately confused with the idea of linguistic content; but do we really think that the only things that should add to or take away any of my credence in any belief are the words I think to myself? In any case, Pryor’s paper Is There Non-Inferential Justification? is probably the best starting point for the rival view. And he’s an exceedingly lucid thinker.
I’ll read the Pryor article in more detail, but from your gloss and from a quick scan, I still don’t see where Pryor and Sellars are even supposed to disagree. I think, without being totally sure, that Sellars would answer the title question of Pryor’s article with an emphatic ‘yes!’. Experience of a red car justifies belief that the car is red. While experience of a red car also presupposes a battery of other concepts (including epistemic concepts), these concepts are not related to the knowledge of the redness of the car as premises to a conclusion.
Here’s a quote from EPM p148, which illustrates that the above is Sellars’ view (italics mine). Note that in the following, Sellars is sketching the view he wants to attack:
One of the forms taken by the Myth of the Given is the idea that there is, indeed must be, a structure of particular matter of fact such that (a) each fact can not only be noninferentially known to be the case, but presupposes no other knowledge either of particular matter of fact, or of general truths; and (b) such that the noninferential knowledge of facts belonging to this structure constitutes the ultimate court of appeals for all factual claims -- particular and general -- about the world. It is important to note that I characterized the knowledge of fact belonging to this stratum as not only noninferential, but as presupposing no knowledge of other matter of fact, whether particular or general. It might be thought that this is a redundancy, that knowledge (not belief or conviction, but knowledge) which logically presupposes knowledge of other facts must be inferential. This, however, as I hope to show, is itself an episode in the Myth.
So Sellars wants to argue that empiricism has no foundation because experience (as an epistemic success term) is not possible without knowledge of a bunch of other facts. But it does not follow from this that a) Sellars thinks knowledge derived from experience is inferential, or b) Sellars thinks non-inferential knowledge as such is a problem.
But that said, I haven’t read enough of Pryor’s paper(s) to understand his critiques. I’ll take a look.
Hmmm. The only enthusiast for Sellars I know finds it necessary to adopt Direct Realism, which is a horribly flawed theory. In fact most of the problems with it consist of reconciling it with a naturalistic world view.
I’m not at all convinced that all LWers have been persuaded that they don’t have qualia.
Well, it’s probably important to distinguish between two uses to which the theory of qualia is put: first as the foundation of foundationalist empiricism, and second as the basis for the ‘hard problem of consciousness’. Foundationalist theories of empiricism are largely dead, as is the idea that qualia are a source of immediate, non-conceptual knowledge. That’s the work that Sellars (a strident reductivist and naturalist) did.
Now that I read it again, I think my original post was a bit misleading because I implied that the theory of qualia as establishing the ‘hard problem’ is also a dead theory. This is not the case, and important philosophers still defend the hard problem on these grounds. Mea Culpa.
The only enthusiast for Sellars I know finds it necessary to adopt Direct Realism, which is a horribly flawed theory. In fact most of the problems with it consist of reconciling it with a naturalistic world view.
Once direct realism as an epistemic theory is properly distinguished from a psychological theory of perception, I think it becomes an extremely plausible view. I think I’d probably call myself a direct realist.
I take ‘conceptual’ to mean thought which is at least somewhat conscious and which probably can be represented verbally. What do you mean by the word?
I mean ‘of such a kind as to be a premise or conclusion in an inference’. I’m not sure whether I agree with your assessment or not: if by ‘non-conceptual processing’ you mean to refer to something like a physiological or neurological process, then I think I disagree (simply because physiological processes can’t be any part of an inference, even granting that oftentimes things that are part of an inference are in some way identical to a neurological process).
I think we’re looking at qualia from different angles. I agree that the process which leads to qualia might well be understood conceptually from the outside (I think that’s what you meant). However, I don’t think there’s an accessible conceptual process by which the creation of qualia can be felt by the person having the qualia.
I don’t know what others accept as a solution to the qualia problem, but I’ve found the explanations in “How an algorithm feels from the inside” quite spot on. For me, the old sequences have solved the qualia problem, and from what I see the new sequence presupposes the same.
I’ve found the explanations in “How an algorithm feels from the inside” quite spot on.
I’m not sure I understand what it means for an algorithm to have an inside, let alone for an algorithm to “feel” something from the inside. “Inside” is a geometrical concept, not an algorithmic one.
Please explain what the inside feeling of e.g. the Fibonacci sequence (or an algorithm calculating such) would be.
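For concreteness, the Fibonacci algorithm mentioned in the question can be written out; this is only an illustrative sketch (the function name and the framing comment are mine, not anyone’s position in the thread):

```python
def fib(n):
    # Computes the n-th Fibonacci number by straightforward iteration.
    # Nothing in this algorithm's state refers to the algorithm itself:
    # there is no variable modeling "what fib is doing", which is one
    # candidate reason to deny it an "inside" in the relevant sense.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(10))  # 55
```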
I’m not sure I understand what it means for an algorithm to have an inside, let alone for an algorithm to “feel” something from the inside. “Inside” is a geometrical concept, not an algorithmic one.
Well, that’s just the title, you know? The original article was talking about cognitive algorithms (an algorithm, not any algorithm).
Unless you assume some kind of un-physical substance having a causal effect on your brain and your continued existence after death, ‘you’ is what your cognitive algorithm feels when it’s run on your brain’s wetware.
“Inside” is a geometrical concept, not an algorithmic one.
That’s not true: every formal system that can produce a model of a subset of its axioms might be considered as having an ‘inside’ (as in set theory: constructible models are called ‘inner models’), and that’s just one possible definition.
The original article was talking about cognitive algorithms (an algorithm, not any algorithm).
So what’s the difference between cognitive algorithms with the ability of “feeling from the inside” and the non-cognitive algorithms which can’t “feel from the inside”?
Unless you assume some kind of un-physical substance having a causal effect on your brain and your continued existence after death, ‘you’ is what your cognitive algorithm feels when it’s run on your brain’s wetware.
Please don’t construct strawmen. I never once mentioned unphysical substances having any causal effect, nor do I believe in such. Actually, from my perspective it seems to me that it is you who are referring to unphysical substances called “algorithms”, “models”, the “inside”, etc. All these seem to me to be on the map, not in the territory.
And to say that I am my algorithm running on my brain doesn’t help dissolve for me the question of qualia anymore than if some religious guy had said that I’m the soul controlling my body.
So what’s the difference between cognitive algorithms with the ability of “feeling from the inside” and the non-cognitive algorithms which can’t “feel from the inside”?
If I knew, I would have already written an AI. This is like an NP problem, easy to check, hard to find a solution for: I know that the one running on my brain is of that kind, and the one spouting Fibonacci numbers is not. I can only guess that it involves some kind of self-representation.
Please don’t construct strawmen. I never once mentioned unphysical substances having any causal effect, nor do I believe in such.
Sorry if I seemed to do so, I wasn’t attributing those beliefs to you, I was just listing the possible escape routes from the argument.
Actually from my perspective it seems to me that it is you who are referring to unphysical substances called “algorithms” “models”, the “inside”, etc. All these seem to me to be on the map, not on the territory.
Well, if you already do not accept those concepts, you need to tell me what your basic ontology is so we can agree on definitions. I thought that we already had “algorithm” covered by “Please explain what the inside feeling of e.g. the Fibonacci sequence (or an algorithm calculating such) would be”.
And to say that I am my algorithm running on my brain doesn’t help dissolve for me the question of qualia anymore than if some religious guy had said that I’m the soul controlling my body.
That’s because it was not the question that my sentence was answering. You have to admit that writing “I’m not sure I understand what it means for an algorithm to have an inside” is a rather strange way to ask “Please justify the way the sequence has in your opinion dissolved the qualia problem”. If you’re asking me that, I might just want to write an entire separate post, in the hope of being clearer and more convincing.
I think this is confusing qualia with intelligence. There’s no big confusion about how an algorithm run on hardware can produce something we identify as intelligence—there’s a big confusion about such an algorithm “feeling things from the inside”.
Well, if you already do not accept those concepts, you need to tell me what your basic ontology is so we can agree on definitions.
It seems to me that in a physical universe, the concept of “algorithms” is merely an abstract representation in our minds of groupings of physical happenings, and therefore algorithms are no more ontologically fundamental than the category of “fruits” or “dinosaurs”.
Now starting with a mathematical ontology instead, like Tegmark IV’s Mathematical Universe Hypothesis, it’s physical particles that are concrete representations of algorithms instead (very simple algorithms in the case of particles). In that ontology, where algorithms are ontologically fundamental and physical particles aren’t, you can perhaps clearly define qualia as the inputs of the much-more-complex algorithms which are our minds...
That’s sort-of the way that I would go about dissolving the issue of qualia if I could. But in a universe which is fundamentally physical it doesn’t get dissolved by positing “algorithms” because algorithms aren’t fundamentally physical...
I’m going to write a full-blown post so that I can present my view more clearly. If you want we can move the discussion there when it will be ready (I think in a couple of days).
You talk like you’ve solved qualia. Have you?
“Qualia” is something our brains do. We don’t know how our brains do it, but it’s pretty clear by now that our brains are indeed what does it.
That’s about 10% of a solution. The “how” is enough to keep most contemporary dualism afloat.
Aren’t the details of the “how” more a question of science than philosophy?
If science had them, there would be no mileage in the philosophical project, any more than there is currently mileage in trying to found dualism on the basis that matter can’t think.
There is mileage in philosophy? Says you. Are you talking about in the context of the general population of a country? Of ‘intellectuals’? Your mates?
If philosophy has mileage (compared to science) then so does any other religion. I guess that’s all dualism is though.
Eh?
I just went to reply you but after reading back on what was said I’m seeing a different context. My stupid comment was about popularity not about usefulness. I was rambling about general public opinion on belief systems not what the topic was really about- if philosophy could move something forward.
We have prima facie reason to accept both of these claims:
1. A list of all the objective, third-person, physical facts about the world does not miss any facts about the world.
2. Which specific qualia I’m experiencing is functionally/causally underdetermined; i.e., there doesn’t seem even in principle to be any physically exhaustive reason redness feels exactly as it does, as opposed to feeling like some alien color.
1 is physicalism; 2 is the hard problem. Giving up 1 means endorsing dualism or idealism. Giving up 2 means endorsing reductive or eliminative physicalism. All of these options are unpalatable. Reductionism without eliminating anything seems off the table, since the conceivability of zombies seems likely to be here to stay, to remain as an ‘explanatory gap.’ But eliminativism about qualia means completely overturning our assumption that whatever’s going on when we speak of ‘consciousness’ involves apprehending certain facts about mind. I think this last option is the least terrible out of a set of extremely terrible options; but I don’t think the eliminative answer to this problem is obvious, and I don’t think people who endorse other solutions are automatically crazy or unreasonable.
That said, the problem is in some ways just academic. Very few dualists these days think that mind isn’t perfectly causally correlated with matter. (They might think this correlation is an inexplicable brute fact, but fact it remains.) So none of the important work Eliezer is doing here depends on monism. Monism just simplifies matters a great deal, since it eliminates the worry that the metaphysical gap might re-introduce an epistemic gap into our model.
If I knew how the brain worked in sufficient detail, I think I’d be able to explain why this was wrong; I’d have a theory that would predict what qualia a brain experiences based on its structure (or whatever). No, I don’t know what the theory is, but I’m pretty confident that there is one.
Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are experiences causally determined by non-experiences. How would examining anything about the non-experiences tell us that the experiences exist, or what particular way those experiences feel?
Taboo experiences.
It sounds like you’re asking me to do what I just asked you to do. I don’t know what experiences are, except by listing synonyms or by acts of brute ostension — hey, check out that pain! look at that splotch of redness! — so if I could taboo them away, it would mean I’d already solved the hard problem. This may be an error mode of ‘tabooing’ itself; that decision procedure, applied to our most primitive and generic categories (try tabooing ‘existence’ or ‘feature’), seems to either yield uninformative lists of examples, implausible eliminativisms (what would a world without experience, without existence, or without features, look like?), or circular definitions.
But what happens when we try to taboo a term is just more introspective data; it doesn’t give us any infallible decision procedure, on its own, for what conclusion we should draw from problem cases. To assert ‘if you can’t taboo it, then it’s meaningless!’, for example, is itself to commit yourself to a highly speculative philosophical and semantic hypothesis.
Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are computations causally determined by non-computations. How would examining anything about the non-computations tell us that the computations exist, or what particular functions those computations are computing?
My initial response is that any physical interaction in which the state of one thing differentially tracks the states of another can be modeled as a computation. Is your suggestion that an analogous response would solve the Hard Problem, i.e., are you endorsing panpsychism (‘everything is literally conscious’)?
Sorry, bad example… Let’s try again.
Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are living things causally determined by non-living things. How would examining anything about the non-living things tell us that the living things exist, or what particular way those living things are alive?
“Explain how consciousness arises from non-conscious matter” doesn’t seem any more of an impossible problem than “Explain how life arises from non-living matter”.
We can define and analyze ‘life’ without any reference to life: As high-fidelity self-replicating macromolecules that interact with their environments to assemble and direct highly responsive cellular containers around themselves. There doesn’t seem to be anything missing from our ordinary notion of life here; or anything that is missing could be easily added by sketching out more physical details.
What might a purely physical definition of consciousness that made no appeal to mental concepts look like? How could we generate first-person facts from a complex of third-person facts?
What you described as computation could apply to literally any two things in the same causal universe. But you meant two things that track each other much more tightly than usual. It may be that a rock is literally conscious, but if so, then not very much so. So little that it really does not matter at all. Humans are much more conscious because they reflect the world much more, reflect themselves much more, and [insert solution to Hard Problem here].
I dunno. I think if rocks are even a little bit conscious, that’s pretty freaky, and I’d like to know about it. I’d certainly like to hear more about what they’re conscious of. Are they happy? Can I alter them in some way that will maximize their experiential well-being? Given how many more rocks there are than humans, it could end up being the case that our moral algorithm is dominated by rearranging pebbles on the beach.
Hah. Luckily, true panpsychism dissolves the Hard Problem. You don’t need to account for mind in terms of non-mind, because there isn’t any non-mind to be found.
I meant, I’m pretty sure that rocks are not conscious. It’s just that the best way I’m able to express what I mean by “consciousness” may end up apparently including rocks, without me really claiming that rocks are conscious like humans are—in the same way that your definition of computation literally includes air, but you’re not really talking about air.
I don’t understand this. How would saying “all is Mind” explain why qualia feel the way they do?
This still doesn’t really specify what your view is. Your view may be that strictly speaking nothing is conscious, but in the looser sense in which we are conscious, anything could be modeled as conscious with equal warrant. This view is a polite version of eliminativism.
Or your view may be that strictly speaking everything is conscious, but in the looser sense in which we prefer to single out human-style consciousness, we can bracket the consciousness of rocks. In that case, I’d want to hear about just what kind of consciousness rocks have. If dust specks are themselves moral patients, this could throw an interesting wrench into the ‘dust specks vs. torture’ debate. This is panpsychism.
Or maybe your view is that rocks are almost conscious, that there’s some sort of Consciousness Gap that the world crosses, Leibniz-style. In that case, I’d want an explanation of what it means for something to almost be conscious, and how you could incrementally build up to Consciousness Proper.
The Hard Problem is not “Give a reductive account of Mind!” It’s “Explain how Mind could arise from a purely non-mental foundation!” Idealism and panpsychism dissolve the problem by denying that the foundation is non-mental; and eliminativism dissolves the problem by denying that there’s such a thing as “Mind” in the first place.
In general, I would suggest looking at sensory experiences that vary among humans; there’s already enough interesting material there without wondering if there are even other differences. Can we explain enough interesting things about the difference between normal hearing and perfect pitch without talking about qualia?
Once we’ve done that, are we still interested in discussing qualia in color?
http://lesswrong.com/lw/p5/brain_breakthrough_its_made_of_neurons/
http://lesswrong.com/lw/p3/angry_atoms/
So your argument is “Doing arithmetic requires consciousness; and we can tell that something is doing arithmetic by looking at its hardware; so we can tell with certainty by looking at certain hardware states that the hardware is sentient”?
So your argument is “We have explained some things physically before, therefore we can explain consciousness physically”?
So your argument is “Mental states have physical causes, so they must be identical with certain brain-states”?
Set aside whether any of these would satisfy a dualist or agnostic; should they satisfy one?
Well, it’s certainly possible to do arithmetic without consciousness; I’m pretty sure an abacus isn’t conscious. But there should be a way to look at a clump of matter and tell whether it is conscious or not (at least as well as we can tell the difference between a clump of matter that is alive and a clump of matter that isn’t).
It’s a bit stronger than that: we have explained basically everything physically, including every other example of anything that was said to be impossible to explain physically. The only difference between “explaining the difference between conscious matter and non-conscious matter” and “explaining the difference between living and non-living matter” is that we don’t yet know how to do the former.
I think we’re hitting a “one man’s modus ponens is another man’s modus tollens” here. Physicalism implies that the “hard problem of consciousness” is solvable; physicalism is true; therefore the hard problem of consciousness has a solution. That’s the simplest form of my argument.
Basically, I think that the evidence in favor of physicalism is a lot stronger than the evidence that the hard problem of consciousness isn’t solvable, but if you disagree I don’t think I can persuade you otherwise.
No abacus can do arithmetic. An abacus just sits there.
No backhoe can excavate. A backhoe just sits there.
A trained agent can use an abacus to do arithmetic, just as one can use a backhoe to excavate. Can you define “do arithmetic” in such a manner that it is at least as easy to prove that arithmetic has been done as it is to prove that excavation has been done?
Does a calculator do arithmetic?
I’ve watched mine for several hours, and it hasn’t. Have you observed a calculator doing arithmetic? What would it look like?
No, you haven’t. (p=0.9)
It could look like an electronic object with a plastic shell that starts with “(23 + 54) / (47 * 12 + 76) + 1093” on the screen and some small amount of time after an apple falls from a tree and hits the “Enter” button some number appears on the screen below the earlier input, beginning with “1093.0”, with some other decimal digits following.
If the above doesn’t qualify as the calculator doing “arithmetic” then you’re just using the word in a way that is not just contrary to common usage but also a terrible way to carve reality.
Upvoted for this alone.
I didn’t do that immediately prior to posting, but I have watched my calculator for a cumulative period of time exceeding several hours, and it has never done arithmetic. I have done arithmetic using said calculator, but that is precisely the point I was trying to make.
Does every device which looks like that do arithmetic, or only devices which could in principle be used to calculate a large number of outcomes? What about an electronic device that only alternates between displaying “(23 + 54) / (47 * 12 + 76) + 1093” and “1093.1203125” (or “1093.15d285805de42”) and does nothing else?
Does a bucket do arithmetic because the number of pebbles which fall into the bucket, minus the number of pebbles which fall out of the bucket, is equal to the number of pebbles in the bucket? Or does the shepherd do arithmetic using the bucket as a tool?
And I would make one of the following claims:
Your calculator has done arithmetic, or
You are using your calculator incorrectly (It’s not a paperweight!) Or
There is a usage of ‘arithmetic’ here that is a highly misleading way to carve reality.
In the same way that a cardboard cutout of Decius that has a speech bubble saying “5” over its head would not be said to be doing arithmetic, a device that looks like a calculator but just displays one outcome would not be said to be doing arithmetic.
I’m not sure how ‘large’ the number of outcomes must be, precisely. I can imagine particularly intelligent monkeys or particularly young children being legitimately described as doing rudimentary arithmetic despite being somewhat limited in their capability.
It would seem like in this case we can point to the system and say that system is doing arithmetic. The shepherd (or the shepherd’s boss) has arranged the system so that the arithmetic algorithm is somewhat messily distributed in that way. Perhaps more interesting is the case where the bucket and pebble system has been enhanced with a piece of fabric which is disrupted by passing sheep, knocking in pebbles reliably, one each time. That system can certainly be said to be “counting the damn sheep”, particularly since it so easily generalizes to counting other stuff that walks past.
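The bucket-and-pebble system described above can be sketched as a toy simulation, if it helps pin down where the ‘counting’ lives; the class and method names here are hypothetical, invented just for this illustration:

```python
# A toy model of the bucket-and-pebble sheep counter: the fabric strip
# knocks exactly one pebble into the bucket per passing sheep, and one
# pebble falls out per exiting sheep. The final pebble count equals
# pebbles in minus pebbles out -- the "arithmetic" is distributed across
# the whole mechanical system.
class Bucket:
    def __init__(self):
        self.pebbles = 0

    def sheep_passes(self):
        self.pebbles += 1  # fabric strip knocks one pebble in

    def sheep_leaves(self):
        self.pebbles -= 1  # one pebble falls out

bucket = Bucket()
for _ in range(7):
    bucket.sheep_passes()
bucket.sheep_leaves()
print(bucket.pebbles)  # 6 sheep currently past the fence
```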
But now allow me to abandon my rather strong notions that “calculators multiply stuff and mechanical sheep counters count sheep”. I’m curious just what the important abstract feature of the universe is that you are trying to highlight as the core feature of ‘arithmetic’. It seems to be something to do with active intent by a generally intelligent agent? So that whenever adding or multiplying is done we need to track down what caused said adding or multiplication to be done, tracing the causal chain back to something that qualifies as having ‘intention’ and say that the ‘arithmetic’ is being done by that agent? (Please correct me if I’m wrong here, this is just my best effort to resolve your usage into something that makes sense to me!)
It’s not a feature of arithmetic, it’s a feature of doing.
I attribute ‘doing’ an action to the user of the tool, not to the tool. It is a rare case in which I attribute an artifact as an agent; if the mechanical sheep counter provided some signal to indicate the number or presence of sheep outside the fence, I would call it a machine that counts sheep. If it was simply a mechanical system that moved pebbles into and out of a bucket, I would say that counting the sheep is done by the person who looks in the bucket.
If a calculator does arithmetic, do the components of the calculator do arithmetic, or only the calculator as a whole? Or is it the whole system of which the calculator is a part that does arithmetic?
I’m still looking for a definition of ‘arithmetic’ which allows me to be as sure about whether arithmetic has been done as I am sure about whether excavation has been done.
Well, you do have to press certain buttons for it to happen. ;) And it looks like voltages changing inside an integrated circuit that lead to changes in a display of some kind. Anyway, if you insist on an example of something that “does arithmetic” without any human intervention whatsoever, I can point to the arithmetic logic unit inside a plugged-in arcade machine in attract mode.
And if you don’t want to call what an arithmetic logic unit does when it takes a set of inputs and returns a set of outputs “doing arithmetic”, I’d have to respond that we’re now arguing about whether trees that fall in a forest with no people make a sound and aren’t going to get anywhere. :P
Well, yeah. My question:
Is still somewhat important to the discussion. I can’t define arithmetic well enough to determine if it has occurred in all cases, but ‘changes on a display’ is clearly neither necessary nor sufficient.
Well, I’d say that a system is doing arithmetic if it has behavior that looks like it corresponds with the mathematical functions that define arithmetic. In other words, it takes as inputs things that are representations of such things as “2”, “3“, and “+” and returns an output that looks like “6”. In an arithmetic logic unit, the inputs and outputs that represent numbers and operations are voltages. It’s extremely difficult, but it is possible to use a microscopic probe to measure the internal voltages in an integrated circuit as it operates. (Mostly, we know what’s going on inside a chip by far more indirect means, such as the “changes on a screen” you mentioned.)
There is indeed a lot of wiggle room here; a sufficiently complicated scheme can make anything “represent” anything else, but that’s a problem beyond the scope of this comment. ;)
edit: I’m an idiot, 2 + 3 = 5. :(
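The “representations in, representation out” criterion from the comment above can be made concrete with a minimal sketch; here the representations are strings rather than voltages, and the function and names are illustrative inventions, not anything from the original discussion:

```python
# A system counts as "doing arithmetic", on this criterion, if its
# input-output behavior corresponds to the arithmetic functions on what
# the inputs and outputs represent. An arithmetic logic unit does this
# with voltages; this toy does it with strings.
OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def toy_alu(left: str, op: str, right: str) -> str:
    return str(OPS[op](int(left), int(right)))

print(toy_alu("2", "+", "3"))  # "5"
print(toy_alu("2", "*", "3"))  # "6"
```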
Note that neither an abacus nor a calculator in a vacuum satisfy that definition.
I’ll allow voltages and mental states to serve as evidence, even if they are not possible to measure directly.
Does a calculator with no labels on the buttons do arithmetic in the same sense that a standard one does?
Does the phrase “2+3=6” do arithmetic? What about the phrase “2*3=6”?
I will accept as obvious that arithmetic occurs in the case of a person using a calculator to perform arithmetic, but not obvious during precisely what periods arithmetic is occurring and not occurring.
… which was plugged in and switched on by, well, a human.
I think the OP is using their own idiosyncratic definition of “doing” to require a conscious agent. This is more usual among those confused about free will.
It’s impossible to express a sentence like this after having fully appreciated the nature of the Hard Problem. In fact, whether you’re a dualist or a physicalist, I think a good litmus test for whether you’ve grasped just how hard the Hard Problem is is whether you see how categorically different the vitalism case is from the dualism case. See: Chalmers, Consciousness and its Place in Nature.
Physicalism, plus the unsolvability of the Hard Problem (i.e., the impossibility of successful Type-C Materialism), implies that either Type-B Materialism (‘mysterianism’) or Type-A Materialism (‘eliminativism’) is correct. Type-B Materialism despairs of a solution while for some reason keeping the physicalist faith; Type-A Materialism dissolves the problem rather than solving it on its own terms.
The probability of physicalism would need to approach 1 in order for that to be the case.
::follows link::
Call me the Type-C Materialist subspecies of eliminativist, then. I think that a sufficient understanding of the brain will make the solution obvious; the reason we don’t have a “functional” explanation of subjective experience is not because the solution doesn’t exist, but that we don’t know how to do it.
This is where I think we’ll end up.
It’s a lot closer to 1 than a clever-sounding impossibility argument. See: http://lesswrong.com/lw/ph/can_you_prove_two_particles_are_identical/
What’s your reason for believing this? The standard empiricist argument against zombies is that they don’t constrain anticipated experience.
One problem with this line of thought is that we’ve just thrown out the very concept of “experience” which is the basis of empiricism. The other problem is that the statement is false: the question of whether I will become a zombie tomorrow does constrain my anticipated experiences; specifically, it tells me whether I should anticipate having any.
I’m not a positivist, and I don’t argue like one. I think nearly all the arguments against the possibility of zombies are very silly, and I agree there’s good prima facie evidence for dualism (though I think that in the final analysis the weight of evidence still favors physicalism). Indeed, it’s a good thing I don’t think zombies are impossible, since I think that we are zombies.
My reason is twofold: Copernican, and Occamite.
Copernican reasoning: Most of the universe does not consist of humans, or anything human-like; so it would be very surprising to learn that the most fundamental metaphysical distinction between facts (‘subjective’ v. ‘objective,’ or ‘mental’ v. ‘physical,’ or ‘point-of-view-bearing’ v. ‘point-of-view-lacking,’ or what-have-you) happens to coincide with the parts of the universe that bear human-like things, and the parts that lack human-like things. Are we really that special? Is it really more likely that we would happen to gain perfect, sparkling insight into a secret Hidden Side to reality, than that our brains would misrepresent their own ways of representing themselves to themselves?
Occamite reasoning: One can do away with the Copernican thought by endorsing panpsychism; but this worsens the bite from the principle of parsimony. A universe with two kinds of fundamental fact is less likely, relative to the space of all the models, than one with one kind (or with many, many more than two kinds). It is a striking empirical fact that, consciousness aside, we seem to be able to understand the whole rest of reality with a single grammatical kind of description—the impersonal, ‘objective’ kind, which states a fact without specifying for whom the fact is. The world didn’t need to turn out to be that way, just as it didn’t need to look causally structured. This should give us reason to think that there may not be distinctions between fundamental kinds of facts, rather than that we happen to have lucked out and ended up in one of the universes with very few distinctions of this sort.
Neither of these considerations, of course, is conclusive. But they give us some reason to at least take seriously physicalist hypotheses, and to weigh their theoretical costs and benefits against the dualists’.
We’ve thrown out the idea of subjective experience, of pure, ineffable ‘feels,’ of qualia. But we retain any functionally specifiable analog of such experience. In place of qualitative red, we get zombie-red, i.e., causal/functional-red. In place of qualitative knowledge, we get zombie-knowledge.
And since most dualists already accepted the causal/functional/physical process in question (they couldn’t even motivate the zombie argument if they didn’t consider the physical causally adequate), there can be no parsimony argument against the physicalists’ posits; the only argument will have to be a defense of the claim that there is some sort of basic, epistemically infallible acquaintance relation between the contents of experience and (themselves? a Self??...). But making such an argument, without begging the question against eliminativism, is actually quite difficult.
At this point, you’re just using the language wrong. “knowledge” refers to what you’re calling “zombie-knowledge”—whenever we point to an instance of knowledge, we mean whatever it is humans are doing. So “humans are zombies” doesn’t work, unless you can point to some sort of non-human non-zombies that somehow gave us zombies the words and concepts of non-zombies.
That assumes a determinate answer to the question ‘what’s the right way to use language?’ in this case. But the facts on the ground may underdetermine whether it’s ‘right’ to treat definitions more ostensively (i.e., if Berkeley turns out to be right, then when I say ‘tree’ I’m picking out an image in my mind, not a non-existent material plant Out There), or ‘right’ to treat definitions as embedded in a theory, an interpretation of the data (i.e., Berkeley doesn’t really believe in trees as we do, he just believes in ‘tree-images’ and misleadingly calls those ‘trees’). Either of these can be a legitimate way that linguistic communities change over time; sometimes we keep a term’s sense fixed and abandon it if the facts aren’t as we thought, whereas sometimes we’re more intensionally wishy-washy and allow terms to get pragmatically redefined to fit snugly into the shiny new model. Often it depends on how quickly, and how radically, our view of the world changes.
(Though actually, qualia may raise a serious problem for ostension-focused reference-fixing: It’s not clear what we’re actually ostending, if we think we’re picking out phenomenal properties but those properties are not only misconstrued, but strictly non-existent. At least verbal definitions have the advantage that we can relatively straightforwardly translate the terms involved into our new theory.)
Moreover, this assumes that you know how I’m using the language. I haven’t said whether I think ‘knowledge’ in contemporary English denotes q-knowledge (i.e., knowledge including qualia) or z-knowledge (i.e., causal/functional/behavioral knowledge, without any appeal to qualia). I think it’s perfectly plausible that it refers to q-knowledge, hence I hedge my bets when I need to speak more precisely and start introducing ‘zombified’ terms lest semantic disputes interfere in the discussion of substance. But I’m neutral both on the descriptive question of what we mean by mental terms (how ‘theory-neutral’ they really are), and on the normative question of what we ought to mean by mental terms (how ‘theory-neutral’ they should be). I’m an eliminativist on the substantive questions; on the non-substantive question of whether we should be revisionist or traditionalist in our choice of faux-mental terminology, I’m largely indifferent, as long as we’re clear and honest in whatever semantic convention we adopt.
It’s not surprising that a system should have special insight into itself. If a type of system had special insight into some other, unrelated, type of system, that would be peculiar. If every system had insights (panpsychism), that would also be peculiar. But a system, one capable of having insights, having special insights into itself is not unexpected.
That is not obvious. If the two kinds of stuff (or rather property) are fine-grainedly picked from some space of stuffs (or rather properties), then that would be more unlikely than just one being picked.
OTOH, if you have just one coarse-grained kind of stuff, and there is just one other coarse-grained kind of stuff, such that the two together cover the space of stuffs, then it is a mystery why you do not have both, i.e., every possible kind of stuff. A concrete example is the predominance of matter over antimatter in cosmology, which is widely interpreted as needing an explanation.
(It’s all about information and probability. Adding one fine-grained kind of stuff to another means that two low probabilities get multiplied together, leading to a very low one that needs a lot of explaining. Having every logically possible kind of stuff has a high probability, because we don’t need a lot of information to pinpoint the universe.)
So, if you think of Mind as some very specific thing, the Occamite objection goes through. However, modern dualists are happy that most aspects of consciousness have physical explanations. Chalmers-style dualism is about explaining qualia, phenomenal qualities. The quantitative properties of physicalism (Chalmers calls them structural-functional) and intrinsically qualitative properties form a dyad that covers property-space in the same way that the matter-antimatter dyad covers stuff-space. In this way, modern dualism can avoid the Copernican Objection.
(Here comes the shift from properties to aspects).
Although it does specify that the fact is outside me. If physical and mental properties are both intrinsic to the world, then the physical properties seem to be doing most of the work, and the mental ones seem redundant. However, if objectivity is seen as a perspective, i.e., an external perspective, it is no longer an empirical fact. It is then a tautology that the external world will seem, from the outside, to be objective, because objectivity just is the view from outside. And subjectivity, likewise, is the view from inside, and not any extra stuff, just another way of looking at the same stuff. There are, in any case, a set of relations between a thing-and-itself, and another set between a thing-and-other-things. Nothing novel is being introduced by noting the existence of inner and outer aspects. The novel content of the Dual Aspect solution lies in identifying the Objective Perspective with quantities (broadly including structures and functions) and the Subjective Perspective with qualities, so that Subjective Qualities, qualia, are just how neuronal processing seems from the inside. This point needs justification, which I believe I have, but will not mention here.
As far as physicalism is concerned: physicalism has many meanings. Dual aspect theory is incompatible with the idea that the world is intrinsically objective and physical, since these are not intrinsic characteristics, according to DAT. DAT is often and rightly associated with neutral monism, the idea that the world is in itself neither mental nor physical, neither objective nor subjective. However, this in fact changes little for most physicalists: it does not suggest that there are any ghostly substances or undetectable properties. Nothing changes methodologically; naturalism, interpreted as the investigation of the world from the objective perspective, can continue. The Strong Physicalist claim that a complete physical description of the world is a complete description tout court becomes problematic. Although such a description is a description of everything, it nonetheless leaves out the subjective perspectives embedded in it, which cannot be recovered, just as Mary the superscientist cannot recover the subjective sensation of Red from the information she has. I believe that a correct understanding of the nature of information shows that “complete information” is a logically incoherent notion in any case, so that DAT does not entail the loss of anything that was ever available in that respect. Furthermore, the absence of complete information has little practical upshot because of the unfeasibility of constructing such a complete description in the first place. All in all, DAT means physicalism is technically false in a way that changes little in practice. The flipside of DAT is Neutral Monism. NM is an inherently attractive metaphysics, because it means that the universe has no overall characteristic left dangling in need of an explanation—no “why physical, rather than mental?”.
As far as causality is concerned, the fact that a system’s physical or objective aspects are enough to predict its behaviour does not mean that its subjective aspects are an unnecessary multiplication of entities, since they are only a different perspective on the same reality. Causal powers are vested in the neutral reality of which the subjective and the objective are just aspects. The mental is neither causal in itself, nor causally idle in itself; it is rather a perspective on what is causally empowered. There are no grounds for saying that either set of aspects is exclusively responsible for the causal behaviour of the system, since each is only a perspective on the system.
I have avoided the Copernican problem, special pleading for human consciousness, by pinning mentality, and particularly subjectivity, to a system’s internal and self-reflexive relations. The counterpart to excessive anthropocentrism is insufficient anthropocentrism, i.e., free-wheeling panpsychism, or the Thinking Rock problem. I believe I have a way of showing that it is logically inevitable that simple entities cannot have subjective states that are significantly different from their objective descriptions.
I’m not sure I understand what an ‘aspect’ is, in your model. I can understand a single thing having two ‘aspects’ in the sense of having two different sets of properties accessible in different viewing conditions; but you seem to object to the idea of construing mentality and physicality as distinct property classes.
I could also understand a single property or property-class having two ‘aspects’ if the property/class itself were being associated with two distinct sets of second-order properties. Perhaps “being the color of chlorophyll” and “being the color of emeralds” are two different aspects of the single property green. Similarly, then, perhaps phenomenal properties and physical properties are just two different second-order construals of the same ultimately physical, or ultimately ideal, or perhaps ultimately neutral (i.e., neither-phenomenal-nor-physical), properties.
I call the option I present in my first paragraph Property Dualism, and the option I present in my second paragraph Multi-Label Monism. (Note that these may be very different from what you mean by ‘property dualism’ and ‘neutral monism;’ some people who call themselves ‘neutral monists’ sound more to me like ‘neutral trialists,’ in that they allow mental and physical properties into their ontology in addition to some neutral substrate. True monism, whether neutral or idealistic or physicalistic, should be eliminative or reductive, not ampliative.) Is Dual Aspect Theory an intelligible third option, distinct from Property Dualism and Multi-Label Monism as I’ve distinguished them? And if so, how can I make sense of it? Can you coax me out of my parochial object/property-centric view, without just confusing me?
I’m also not sure I understand how reflexive epistemic relations work. Epistemic relations are ordinarily causal. How does reflexive causality work? And how do these ‘intrinsic’ properties causally interact with the extrinsic ones? How, for instance, does positing that Mary’s brain has an intrinsic ‘inner dimension’ of phenomenal redness Behind The Scenes somewhere help us deterministically explain why Mary’s extrinsic brain evolves into a functional state of surprise when she sees a red rose for the first time? What would the dynamics of a particle or node with interactively evolving intrinsic and extrinsic properties look like?
A third problem: You distinguish ‘aspects’ by saying that the ‘subjective perspective’ differs from the ‘objective perspective.’ But this also doesn’t help, because it sounds anthropocentric. Worse, it sounds mentalistic; I understand the mental-physical distinction precisely inasmuch as I understand the mental as perspectival, and the physical as nonperspectival. If the physical is itself ‘just a matter of perspective,’ then do we end up with a dualistic or monistic theory, or do we instead end up with a Berkeleian idealism? I assume not, and that you were speaking loosely when you mentioned ‘perspectives;’ but this is important, because what individuates ‘perspectives’ is precisely what lends content to this ‘Dual-Aspect’ view.
Yes, I didn’t consider the ‘it’s not physicalism!!’ objection very powerful to begin with. Parsimony is important, but ‘physicalism’ is not a core methodological principle, and it’s not even altogether clear what constraints physicalism entails.
It’s not surprising that an information-processing system able to create representations of its own states would be able to represent a lot of useful facts about its internal states. It is surprising if such a system is able to infallibly represent its own states to itself; and it is astounding if such a system is able to self-represent states that a third-person observer, dissecting the objective physical dynamics of the system, could never in principle fully discover from an independent vantage point. So it’s really a question of how ‘special’ we’re talking.
I’m not clear on what you mean. ‘Insight’ is, presumably, a causal relation between some representational state and the thing represented. I think I can more easily understand a system’s having ‘insight’ into something else, since it’s easier for me to model veridical other-representation than veridical self-representation. (The former, for instance, leads to no immediate problems with recursion.) But perhaps you mean something special by ‘insight.’ Perhaps by your lights, I’m just talking about outsight?
If some systems have an automatic ability to non-causally ‘self-grasp’ themselves, by what physical mechanism would only some systems have this capacity, and not all?
If you could define a thingspace that meaningfully distinguishes between and admits of both ‘subjective’ and ‘objective’ facts (or properties, or events, or states, or thingies...), and that non-question-beggingly establishes the impossibility or incoherence of any other fact-classifications of any analogous sorts, then that would be very interesting. But I think most people would resist the claim that this is the one unique parameter of this kind (whatever kind that is, exactly...) that one could imagine varying over models; and if this parameter is set to value ‘2,’ then it remains an open question why the many other strangely metaphysical or strangely anthropocentric parameters seem set to ‘1’ (or to ‘0,’ as the case may be).
But this is all very abstract. It strains comprehension just to entertain a subjective/objective distinction. To try to rigorously prove that we can open the door to this variable without allowing any other Aberrant Fundamental Categorical Variables into the clubhouse seems a little quixotic to me. But I’d be interested to see an attempt at this.
Sure, though there’s a very important disparity between observed asymmetries between actual categories of things, and imagined asymmetries between an actual category and a purely hypothetical one (or, in this case, a category with a disputed existence). In principle the reasoning should work the same, but in practice our confidence in reasoning coherently (much less accurately!) about highly abstract and possibly-not-instantiated concepts should be extremely low, given our track record.
How do we know that? If we were zombies, prima facie it seems as though we’d have no way of knowing about, or even positing in a coherent formal framework, phenomenal properties. But in that case, any analogous possible-but-not-instantiated-property-kinds that would expand the dyad into a polyad would plausibly be unknowable to us. (We’re assuming for the moment that we do have epistemic access to phenomenal and physical properties.) Perhaps all carbon atoms, for instance, have unobservable ‘carbonomenal properties,’ (Cs) which are related to phenomenal and physical properties (P1s and P2s) in the same basic way that P1s are related to P2s and Cs, and that P2s are related to P1s and Cs. Does this make sense? Does it make sense to deny this possibility (which requires both that it be intelligible and that we be able to evaluate its probability with any confidence), and thereby preserve the dyad? I am bemused.
1) If you embrace SSA, then you being you should be more likely on humans being important than on panpsychism, yes? (You may of course have good reasons for preferring SIA.)
2) Suppose again redundantly dual panpsychism. Is there any a priori reason (at this level of metaphysical fancy) to rule out that experiences could causally interact with one another in a way that is isomorphic to mechanical interactions? Then we have a sort of idealist field describable by physics, perfectly monist. Or is this an illegitimate trick?
(Full disclosure: I’d consider myself a cautious physicalist as well, although I’d say psi research constitutes a bigger portion of my doubt than the hard problem.)
The theory you propose in (2) seems close to Neutral Monism. It has fallen into disrepute (and near oblivion) but was the preferred solution to the mind-body problem of many significant philosophers of the late 19th and early 20th centuries, in particular of Bertrand Russell (for a long period). A quote from Russell:
Ooo! Seldom do I get to hear someone else voice my version of idealism. I still have a lot of thinking to do on this, but so far it seems to me perfectly legitimate. An idealism isomorphic to mechanical interactions dissolves the Hard Problem of consciousness by denying a premise. It also does so with more elegance than reductionism since it doesn’t force us through that series of flaming hoops that orbits and (maybe) eventually collapses into dualism.
This seems more likely to me so far than all the alternatives, so I guess that means I believe it, but not with a great deal of certainty. So far every objection I’ve heard or been able to imagine has amounted to something like, “But but but the world’s just got to be made out of STUFF!!!” But I’m certainly not operating under the assumption that these are the best possible objections. I’d love to see what happens with whatever you’ve got to throw at my position.
The problem is that we already have two kinds of fundamental facts (and I would argue we need more). Consider Eliezer’s use of “magical reality fluid” in this post. If you look at context, it’s clear that he’s trying to ask whether the inhabitants of the non-causally simulated universes possess qualia without having to admit he cares about qualia.
Eliezer thinks we’ll someday be able to reduce or eliminate Magical Reality Fluid from our model, and I know of no argument (analogous to the Hard Problem for phenomenal properties) that would preclude this possibility without invoking qualia themselves. Personally, I’m an agnostic about Many Worlds, so I’m even less inclined than EY to think that we need Magical Reality Fluid to recover the Born probabilities.
I also don’t reify logical constructs, so I don’t believe in a bonus category of Abstract Thingies. I’m about as monistic as physicalists come. Mathematical platonists and otherwise non-monistic Serious Scientifically Minded People, I think, do have much better reason to adopt dualism than I do, since the inductive argument against Bonus Fundamental Categories is weak for them.
I could define the Hard Problem of Reality, which really is just an indirect way of talking about the Hard Problem of Consciousness.
As Eliezer discusses in the post, Reality Fluid isn’t just for Many Worlds; it also relates to questions about simulation.
Here’s my argument for why you should.
Only as a side-effect. In all cases, I suspect it’s an idle distraction; simulation, qualia, and Born-probability models do have implications for each other, but it’s unlikely that combining three tough problems into a single complicated-and-tough problem will help gin up any solutions here.
Give me an example of some logical constructs you think I should believe in. Understand that by ‘logical construct’ I mean ‘causally inert, nonspatiotemporal object.’ I’m happy to sort-of-reify spatiotemporally instantiated properties, including relational properties. For instance, a simple reason why I consistently infer that 2 + 2 = 4 is that I live in a universe with multiple contiguous spacetime regions; spacetime regions are similar to each other, hence they instantiate the same relational properties, and this makes it possible to juxtapose objects and reason with these recurrent relations (like ‘being two arbitrary temporal intervals before’ or ‘being two arbitrary spatial intervals to the left of’).
Daniel Dennett’s ‘Quining Qualia’ (http://ase.tufts.edu/cogstud/papers/quinqual.htm) is taken (’round these parts) to have laid the theory of qualia to rest. Among philosophers, the theory of qualia and the classical empiricism founded on it are also considered to be dead theories, though it’s Sellars’ “Empiricism and the Philosophy of Mind” (http://www.ditext.com/sellars/epm.html) that is seen to have done the killing.
I’ve not actually read this essay (will do so later today), but I disagree that most people here consider the issue of qualia and the “hard problem of consciousness” to be a solved one.
Time for a poll.
[pollid:372]
What about “I’d need to think more about this”?
I just read ‘Quining Qualia’. I do not see it as a solution to the hard problem of consciousness, at all. However, I did find it brilliant—it shifted my intuition from thinking that conscious experience is somehow magical and inexplicable to thinking that it is plausible that conscious experience could, one day, be explained physically. But to stop here would be to give a fake explanation...the problem has not yet been solved.
-- Eliezer Yudkowsky, Dissolving the Question
Also, does anyone disagree with anything that Dennett says in the paper, and, if so, what, and why?
I think I have qualia. I probably don’t have qualia as defined by Dennett, as simultaneously ineffable, intrinsic, etc., but there are nonetheless ways things seem to me.
It may be just my opinion, but please don’t quote people and then insert edits into the quotation. Although at least you did do that with parentheses.
By doing so you seem to say that free will and qualia are the same or interchangeable topics that share arguments for and against. But that is not the case. The question of free will is often misunderstood and is much easier to handle.
Qualia is, in my opinion, the abstract structure of consciousness. So on the underlying basic level you have physics and purely physical things, and on the more abstract level you have structure that is transitive with the basic level.
To illustrate what this means, I think Eliezer had an excellent example (though I’m not sure if his intention was similar): the spiking pattern of blue and actually seeing blue. But even the spiking pattern is far from completely reduced. But the idea is the same. On the level of consciousness you have experience which corresponds to a basic-level thing. Very similar to the map and territory analogy.

Color vision is hard to approach, though, and it might be easier to start off with binary vision of 1 pixel. It’s either 1 or 0. Imagine replacing your entire visual cortex with something that only outputs 1 or 0 (though the brain is not binary): your entire field of vision having only 2 distinct experienced states. Although if you do that it certainly will result in mind-projection fallacy, since you can’t actually change your visual cortex to only output 1 or 0. Anyway, the rest of your consciousness has access to that information, and it’s very much easier to see how this binary state affects the decisions you make. And it’s also much easier to do the transition from experience to physics and logic. Then you can work your way back up to normal vision by going to several different pixels that are each either 1 or 0, then to grayscale vision. But then colors make it much harder.

But this doesn’t resolve the qualia issue—how would it feel to have 1-bit vision? How do you produce a set of rules that is transitive with the experience of vision?
Even if you grind everything down to the finest powder it still will be hard to see where this qualia business comes from, because you exist between the lines.
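The 1-pixel thought experiment above can be mocked up in a few lines. This is only an illustrative sketch (the threshold and the decision rule are invented for the example); its point is just how short the causal path from the one-bit visual state to a decision becomes.

```python
def one_bit_vision(scene_brightness, threshold=0.5):
    """Collapse the entire visual field to a single bit, as in the
    thought experiment: downstream processing never sees anything richer."""
    return scene_brightness > threshold

def decide(visual_bit):
    # The rest of 'cognition' has access only to this one bit, so the
    # route from visual state to behavior is trivial to trace.
    return "approach" if visual_bit else "wait"
```

Of course, as the comment says, this traceability is precisely what does not answer the question of what (if anything) having such a visual state would feel like.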
I agree that that doesn’t resolve the qualia issue. To begin with, we’d need to write a SeeRed() function, that will write philosophy papers about the redness it perceives, and wonder whence it came, unless it has access to its own source code and can see inside the black box of the SeeRed() function. Even epiphenomenalists agree that this can be done, since they say consciousness has no physical effect on behavior. But here is my intuition (and pretty much every other reductionist’s, I reckon) that leads me to reject epiphenomenalism: When I say, out loud (so there is a physical effect) “Wow, this flower I am holding is beautiful!”, I am saying it because it actually looks beautiful to me! So I believe that, somehow, the perception is explainable, physically. And, at least for me, that intuition is much stronger than the intuition that conscious perception and computation are in separate magisteria.
We’ll be able to get a lot further in this discussion once someone actually writes a SeeRed() function, which both epiphenomenalists and reductionists agree can be done.
Meanwhile, dualists think writing such a SeeRed() function is impossible. Time will tell.
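For concreteness, here is a deliberately toy sketch of the kind of SeeRed() function under discussion. The thresholds and the report text are invented, and nothing here pretends to solve the hard problem; the sketch only makes vivid that the program issues its reports from the classification alone, with no extra 'quale' variable appearing anywhere in the code.

```python
def see_red(rgb):
    """Toy classifier-plus-reporter standing in for SeeRed().
    The thresholds are arbitrary; the 'report' is produced purely
    from the classification, with no hidden quale ingredient."""
    r, g, b = rgb
    looks_red = r > 150 and g < 100 and b < 100
    if looks_red:
        return "Wow, this looks red to me! Where does this redness come from?"
    return "Nothing notably red here."
```

The philosophically loaded version would also have to write papers about its redness while lacking access to its own source code; this ten-line stand-in obviously does not, which is part of why the real SeeRed() remains unwritten.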
It’s possible for physicalism to be true, and computationalism false.
I’ll say. Solving the problem does tend to solve the problem.
I haven’t read either of those but will read them. Also I totally think there was a respectable hard problem and can only stare somewhat confused at people who don’t realize what the fuss was about. I don’t agree with what Chalmers tries to answer to his problem, but his attempt to pinpoint exactly what seems so confusing seems very spot-on. I haven’t read anything very impressive yet from Dennett on the subject; could be that I’m reading the wrong things. Gary Drescher on the other hand is excellent.
It could be that I’m atypical for LW.
EDIT: Skimmed the Dennett one, didn’t see much of anything relatively new there; the Sellars link fails.
So you do have a solution to the problem?
I’ll take a look at Drescher, I haven’t seen that one.
Try this link? http://selfpace.uconn.edu/class/percep/SellarsEmpPhilMind.pdf
Sellars is important to contemporary philosophy, to the extent that a standard course in epistemology will often end with EPM. I’m not sure it’s entirely worth your time though, because it’s an argument against classical (not Bayesian) empiricism.
Pryor and BonJour explain Sellars better than Sellars does. See: http://www.jimpryor.net/teaching/courses/epist/notes/given.html
The basic question is over whether our beliefs are purely justified by other beliefs, or whether our (visual, auditory, etc.) perceptions themselves ‘represent the world as being a certain way’ (i.e., have ‘propositional content’) and, without being beliefs themselves, can lend some measure of support to our beliefs. Note that this is a question about representational content (intentionality) and epistemic justification, not about phenomenal content (qualia) and physicalism.
Right—to hammer on the point, the common-ish (EDIT: Looks like I was hastily generalizing) LW opinion is that there never was any “hard problem of consciousness” (EDIT: meaning one that is distinct from “easy” problems of consciousness, that is, the ones we know roughly how to go about solving). It’s just that when we meet a problem that we’re very ignorant about, a lot of people won’t go “I’m very ignorant about this,” they’ll go “This has a mysterious substance, and so why would learning more change that inherent property?”
It should be remembered though that the guy who’s famous for formulating the hard problem of consciousness is:
1) A fan of EY’s TDT, who’s made significant efforts to get the theory some academic attention. 2) A believer in the singularity, and its accompanying problems. 3) A student of Douglas Hofstadter. 4) Someone very interested in AI. 5) Someone very well versed and interested in physics and psychology. 6) A rare but occasional poster on LW. 7) Very likely one of the smartest people alive. etc. etc.
I think consciousness is reducible too, but David Chalmers is a serious dude, and the ‘hard problem’ is to be taken very, very seriously. It’s very easy to not see a philosophical problem, and very easy to think that the problem must be solved by psychology somewhere, much harder to actually explain a solution/dissolution.
I agree with you about how smart Chalmers is and that he does very good philosophical work. But I think you have a mistake in terminology when you say
It is an understandable mistake, because it is natural to take “the hard problem” as meaning just “understanding consciousness”, and I agree that this is a hard problem in ordinary terms and that saying “there is a reduction/dissolution” is not enough. But Chalmers introduced the distinction between the “hard problem” and the “easy problems” by saying that understanding the functional aspects of the mind, the information processing, etc., are all “easy problems”. So a functionalist/computationalist materialist, like most people on this site, cannot buy into the notion that there is a serious “hard problem” in Chalmers’ sense. This notion is defined in a way that begs the question by assuming that qualia are irreducible. We should say instead that solving the “easy problems” is at the same time much less trivial than Chalmers makes it seem, and enough to fully account for consciousness.
No it isn’t. Here is what Chalmers says:
“It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.”
There is no statement of irreducibility there. There is a statement that we have “no good explanation”, and we don’t.
However, see how he contrasts it with the “easy problems” (from Consciousness and its Place in Nature—pdf):
It seems clear that for Chalmers any description in terms of behavior and cognitive function is by definition not addressing the hard problem.
But that is not to say that qualia are irreducible things; it is to say that mechanical explanations of qualia have not worked to date.
What does this mean by “why”? What evolutionary advantage is there? Well, it enables imagination, which lets us survive a wider variety of dangers. What physical mechanism is there? That’s an open problem in neurology, but they’re making progress.
I’ve read this several times, and I don’t see a hard philosophical problem.
It’s definitely a how-it-happens “why” and not how-did-it-evolve “why”
There’s more to qualia than free-floating representations. There is no reason to suppose an AI’s internal maps have phenomenal feels, no way of testing that they do, and no way of engineering them in.
It’s a hard scientific problem. How could you have a theory that tells you how the world seems to a bat on LSD? How can you write a SeeRed() function?
Presumably, the exact same way you’d write any other function.
In this case, all that matters is that instances of seeing red things correctly map to outputs expected when one sees red things as opposed to not seeing red things.
If the correct behavior is fully and coherently maintained / programmed, then you have no means of telling it apart from a human’s “redness qualia”. If prompted and sufficiently intelligent, this program will write philosophy papers about the redness it perceives, and wonder whence it came, unless it has access to its own source code and can see inside the black box of the SeeRed() function.
Of course, I’m arguing a bit by the premises here with “correct behavior” being “fully and coherently maintained”. The space of inputs and outputs to take into account in order to make a program that would convince you of its possession of the redness qualia is too vast for us at the moment.
TL;DR: It all depends on what the SeeRed() function will be used for / how we want it to behave.
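To make the input-output picture concrete, here is a deliberately minimal sketch (all names hypothetical, and of course nothing remotely like a solution to the Hard Problem): a `see_red()` that just classifies RGB triples, which is the trivial “easy problem” core of the behavior being discussed.

```python
def see_red(rgb):
    """Toy 'SeeRed()': report whether an RGB pixel is predominantly red.

    This implements only the input-output mapping (the 'easy' part);
    whether red *feels* like anything to the system is exactly what
    the Hard Problem asks, and this code does not address it.
    """
    r, g, b = rgb
    return r > 128 and r > 2 * max(g, b)

print(see_red((255, 30, 20)))   # True: a red pixel
print(see_red((40, 200, 40)))   # False: a green pixel
```

On the functionalist view sketched above, getting this mapping right at human-behavior scale is all there is to it; on the dualist view, it leaves the crucial thing out.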
False. In this case what matters is the perception of a red colour that occurs between input and output. That is what the Hard Problem, the problem of qualia, is about.
That doesn’t mean there are no qualia (I have them so I know there are). That also doesn’t mean qualia just serendipitously arrive whenever the correct mapping from inputs to outputs is in place. You have not written a SeeRed() or solved the HP. You have just assumed that what is very possibly a zombie is good enough.
None of these were among my claims. For a program to reliably pass turing-like tests for seeing redness, a GLUT or zombielike would not cut it, you’d need some sort of internal system that generates certain inner properties and behaviors, one that would be effectively indistinguishable from qualia (this is my claim), and may very well be qualia (this is not my core claim, but it is something I find plausible).
Obviously I haven’t solved the Hard Problem just by saying this. However, I do greatly dislike your apparent premise* that qualia can never be dissolved to patterns and physics and logic.
* If this isn’t among your premises or claims, then it still does appear that way, but apologies in advance for the strawmanning.
Sorry, that is most definitely “serendipitously arrive”. You don’t know how to engineer the Redness in explicitly; you are just assuming it must be there if everything else is in place.
The claim is more like “hasn’t been”, and you haven’t shown me a SeeRed().
Is there a reason to suppose that anybody else’s maps have phenomenal feels, a way of testing that they do, or a way of telling the difference? Why can’t those ways be generalized to Intelligent entities in general?
Yes: naturalism. It would be naturalistically anomalous if their brains worked very similarly, but their phenomenology were completely different.
No. So what? Are you saying we are all p-zombies?
I don’t know about Decius, but...
I am.
I’m also saying that it doesn’t matter. The p-zombies are still conscious. They just don’t have any added “conscious” XML tags as per some imaginary, crazy-assed unnecessary definition of “consciousness”.
Tangential to that point: I think any morality system which relies on an external supernatural thinghy in order to make moral judgments or to assign any terminal value to something is broken and not worth considering.
You appear to be making an unfortunate assumption that what Chalmers and Peterdjones are talking about is crazy-assed unnecessary XML tags, as opposed to, y’know, regular old consciousness.
I’m not sure where my conception of p-zombies went wrong, then. P-zombies are assumed by the premise, if my understanding is correct, to behave physically exactly the same, down to the quantum level (and beyond if any exists), but to simply not have something being referred to as “qualia”. This seems to directly imply that the “qualia” is generated neither by the physical matter, nor by the manner in which it interacts.
Like Eliezer, I believe physics and logic are sufficient to describe eventually everything, and so qualia and consciousness must be made of this physical matter and the way it interacts. Therefore, since the p-zombies have the same matter and the same interactions, they have qualia and consciousness.
What, then, is a non-p-zombie? Well, something that has “something more” (implied: Than physics or logic) added into it. Since it’s something exceptional that isn’t part of anything else so far in the universe to my knowledge, calling it a “crazy-ass unnecessary XML tag” feels very worthy of its plausibility and comparative algorithmic complexity.
The point being that, under this conception of p-zombies and with my current (very strong) priors on the universe, non-p-zombies are either a silly mysterious question with no possible answer, or something supernatural on the same level of silly as atom-fiddling tiny green goblins and white-winged angels of Pure Mercy.
Huh...
That’s a funny way of thinking about it.
But anyway, EY’s zombie sequence was all about saying that if physics and math is everything, then p-zombies are a silly mysterious question. Because a p-zombie was supposed to be like a normal human down to the atomic level, but without qualia. Which is absurd if, as we expect, qualia are within physics and math. Hence there are no p-zombies.
I guess the point is that saying there are no non-p-zombies as a result of this is totally confusing, because it totally looks like saying no-one has consciousness.
(Tangentially, it probably doesn’t help that apparently half of the philosophical world use “qualia” to mean some supernatural XML tags, while the other half use the word to mean just the-way-things-feel, aka. consciousness. You seem to get a lot of arguments between those in each of those groups, with the former group arguing that qualia are nonsense, and the latter group rebutting that “obviously we have qualia, or are you all p-zombies?!” resulting in a generally unproductive debate.)
Hah, yes. That seems to be partly a result of my inconsistent way of handling thought experiments that are broken or dissolved in the premises, as opposed to being rejected due to a later contradiction or nonexistent solution.
I have no idea what you are getting at. Please clarify.
That has no discernible relationship to anything I have said. Have you confused me with someone else?
I’m not sure where I implied that I’m getting at anything. We’re p-zombies, we have no additional consciousness, and it doesn’t matter because we’re still here doing things.
The tangent was just an aside remark to clarify my position, and wasn’t to target anyone.
We may already agree on the consciousness issue, I haven’t actually checked that.
I have no idea what you mean by “additional consciousness”—although, since you are not “getting at anything”, you perhaps mean nothing.
That seems a bold and contentious claim to me. OTOH, you say you are not “getting at anything”. Who knows?
OK. “Getting at something” doesn’t mean criticising someone; it means making a point.
In that sense, what I was getting at is that asking the question of whether we are p-zombies is redundant and irrelevant, since there’s no reason to want or believe in the existence of non-p-zombies.
The core of my claim is basically that our consciousness is the logic and physics that goes on in our brain, not something else that we cannot see or identify. I obviously don’t have conclusive proof or evidence of this, otherwise I’d be writing a paper and/or collecting my worldwide awards for it, but all (yes, all) other possibilities seem orders of magnitude less likely to me with my current priors and model of the world.
TL;DR: Consciousness isn’t made of ethereal acausal fluid nor of magic, but of real physics and how those real physics interact in a complicated way.
I believe in the existence of at least one non-p-zombie, because I have at least indirect evidence of one in the form of my own qualia.
We can see and identify our consciousness from the inside. It’s self awareness. If you try to treat consciousness from the outside, you are bound to miss 99% of the point. None of this has anything to do with what consciousness is “made of”.
I have a question about qualia from your perspective. If Omega hits you with an epiphenomenal anti-qualia hammer that injures your qualia and only your qualia such that you essentially have no qualia (i.e., you are a P-zombie) for an hour until your qualia recovers (when you are no longer a P-zombie), what, if anything, might that mean?
1: You’d likely notice something, because you have evidence that qualia exist. That implies you would notice if they vanished for about an hour, since you would no longer be getting that evidence for that hour
2: You’d likely not notice anything, because if you did, a P-Zombie would not be just like you.
3: Epiphenomenal anti-qualia hammers can’t exist. For instance, it might be impossible to affect your qualia and only your qualia, or perhaps it is impossible to make any reversible changes to qualia.
4: Something else?
Dunno, but try looking at this
I took a look. I found this quote:
This seems to support an answer of:
2: You’d likely not notice anything, because if you did, a P-Zombie would not be just like you.
But if that’s the case, it seems to contradict the idea of red qualia’s existence even being a useful discussion. If you don’t expect to notice when something vanishes, how do you have evidence that it exists or that it doesn’t exist?
Now, to be fair, I think you can construct something where it is meaningful to talk about something that you have no evidence of.
If an asteroid goes outside our light cone, we might say: “We have no evidence that this asteroid still exists, since to our knowledge evidence travels at the speed of light and this is outside our light cone. However, if we can invent FTL travel and then follow its path, we would expect it not to have winked out of existence right as it crossed our light cone, based on conservation of mass/energy.”
That sounds like a comprehensible thing to say, possibly because it is talking about something’s potential existence given the development of a future test.
And it does seem like you can also do that with Religious epiphenomenon, like souls, that we can’t see right now.
“We have no evidence that our soul still exists since to our knowledge, people are perfectly intelligible without souls and we don’t notice changes in our souls. However, if in the future we can invent soul detectors, we would expect to find souls in humans, based on religious texts.”
That makes sense. It may be wrong, but if someone says that to me, My reaction would be “Yeah, that sounds plausible.”, or perhaps “But how would you invent a soul detector?” much like my reaction would be to the FTL asteroid “Yeah, that sounds plausible.”, or perhaps “But how would you invent FTL?”
I suppose, in essence, that these can be made to pay rent in anticipated experiences, but they are only under conditional circumstances, and those conditions may be impossible.
But for qualia, does this?
“We have no evidence that our qualia still exists since to our knowledge, P-zombies are perfectly intelligible without qualia and we don’t notice changes in our qualia. However, if we can invent qualia detectors, we would expect to detect qualia in humans, based on thought experiments.”
It doesn’t in my understanding, because it seems like one of the key points of qualia is that we can notice it right now and that no one else can ever notice it. Except that according to one of its core proponents, we can’t notice it either. I mean, I can form sentences about FTL or souls and future expectations that seem reasonable, but even those types of sentences seem to fail at talking about qualia properly.
P-zombies are behaviourally like me. That means I would not act as if I noticed anything. OTOH qualia are part of consciousness, so my conscious awareness would change. I would be compelled to lie, in a sense.
Would you lie then, or are you lying now? You have just said that your experience of qualia is not evidence even to yourself that you experience qualia.
Or is there a possible conscious awareness change that has zero effect? Can doublethink go to that metalevel?
I must not be working with the right / same conception of p-zombies then, because to me qualia experience provides exactly zero bayesian evidence for or against p-zombies on its own.
“A philosophical zombie or p-zombie in the philosophy of mind and perception is a hypothetical being that is indistinguishable from a normal human being except in that it lacks conscious experience, qualia, or sentience.[1] “—WP
I am of course taking a p-zombie to be lacking in qualia. I am not sure that alternatives are even coherent, since I don’t see how other aspects of consciousness could go missing without affecting behaviour.
Wait, those premises just seem wrong and contradictory.
To even work in the thought experiment, p-zombies live in a world with physics and logic identical to our own (with possibility of added components).
In principle, qualia can either be generated by physics, logic, or something else (i.e. magic), or any combination thereof.
There is no magic / something else.
We have qualia, generated apparently only by physics and/or logic.
p-zombies have the exact same physics and logic, but still no qualia.
???
My only remaining hypothesis is that p-zombies live in a world where the physics and logic are there, but there is also something else entirely magical that does not seem to exist in our universe that somehow prevents their qualia, by hypothesis. Very question-begging. Also unnecessarily complex. I am apparently incapable of working with thought experiments that defy the laws of logic by their premises.
That sounds like a serious problem. You should get that looked at.
You seem to have done a 180 shift from insisting that there are only zombies to saying there are no zombies.
I don’t know of any examples. Typically zombie gedankens do not take 3 as a premise, and conclude the opposite—that there is an extra non-physical ingredient as a conclusion.
Yes. My understanding of p-zombies was incorrect/different. If p-zombies have no qualia by the premises, as you’ve shown me a clear definition of, then we can’t be p-zombies. (ignoring the details and assuming your experiences are like my own, rather than the Lords of the Matrix playing tricks on me and making you pretend you have qualia; I think this is a reasonable assumption to work with)
So they write their bottom line in the premises of the thought experiment in a concealed manner? I’m almost annoyed enough to actually give them that question they’re begging for so much.
Now E.Y.’s Zombie posts are starting to make a lot more sense.
No. Leaving physicalism out as a premise is not the same as including non-physicalism as a premise. Likewise, concluding non-physicalism is not assuming it.
There must be non-physical things to assume that there is any difference between “us” and “p-zombies”. This is a logical requirement. They posit that there effectively is a difference, in the premises right there, by asserting that p-zombies do not have qualia, while we do.
Premise: P-zombies have all the physical and logical stuff that we do.
Premise: P-zombies DO NOT have qualia.
Premise: We have qualia.
Implied premise: This thought experiment is logically consistent.
The only way 4 is possible is if it is also implied that:
Implied premise: Either us, or P-Zombies, have something magical that adds or removes qualia.
By the reasoning which prompts them to come up with the thought experiment in the first place, it cannot be the zombies that have an additional magical component, because this would contradict the implied premise that the thought experiment is logically consistent (and would question the usefulness and purpose of the thought experiment).
Therefore:
“Conclusion”: We have something magical that gives us qualia.
The p-zombie thought experiment is usually intended to prove that qualia is magical, yes. This is one of those unfortunate cases of philosophers reasoning from conceivability, apparently not realising that such reasoning usually only reveals stuff about their own mind.
I wouldn’t say “qualia is magic” is actually a premise, but the argument involves assuming “qualia could be magical” and then invalidly dropping a level of “could”.
In this case the “could” is an epistemic “could”—“I don’t know whether qualia is magical”. Presumably, iff qualia is magical, then p-zombies are possible (ie. exist in some possible world, modal-could), so we deduce that “it epistemic-could be the case that p-zombies modal-could exist”. Then I guess because epistemic-could and modal-could feel like the same thing¹, this gets squished down to “p-zombies modal-could exist” which implies qualia is magical.
Anyway, the above seems like a plausible explanation of the reasoning, although I haven’t actually talked to any philosophers to ask them if this is how it went.
¹ And could actually be (partially or completely) the same thing, since unless modal realism is correct, “possible worlds” don’t actually exist anywhere. Or something. Regardless, this wouldn’t make the step taken above legal, anyway. (Note that the previous “could” there is an epistemic “could”! :p)
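The illegal squish can be made explicit in modal notation (symbols mine, not from the thread): write $M$ for “qualia are magical”, $Z$ for “zombies exist”, and $\Diamond_e$/$\Diamond_m$ for epistemic/metaphysical possibility.

```latex
% Assumed link: qualia are magical iff zombies are metaphysically possible
M \;\leftrightarrow\; \Diamond_m Z
% From epistemic uncertainty about M we validly get:
\Diamond_e M \;\Rightarrow\; \Diamond_e\,\Diamond_m Z
% The illegal step drops the epistemic operator:
\Diamond_e\,\Diamond_m Z \;\not\Rightarrow\; \Diamond_m Z
```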
I had always understood that “We have something magical that gives us qualia” was one of the explicit premises of p-zombies (p-zombies being defined as that which lacks that magical quality, but appears otherwise human). One could then see p-zombies as a way to try to disprove the “something magical” hypothesis by contradiction—start with someone who doesn’t have that magical something, continue on from there, and stop once you hit a contradiction.
Nope. eg.
According to physicalism, all that exists in our world (including consciousness) is physical.
Thus, if physicalism is true, a logically-possible world in which all physical facts are the same as those of the actual world must contain everything that exists in our actual world. In particular, conscious experience must exist in such a possible world.
In fact we can conceive of a world physically indistinguishable from our world but in which there is no consciousness (a zombie world). From this (so Chalmers argues) it follows that such a world is logically possible.
Therefore, physicalism is false. (The conclusion follows from 2. and 3. by modus tollens.)
(Chalmer’s argument according to WP)
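Spelled out in symbols (notation mine: $P$ for physicalism, $D(w)$ for “$w$ is a physical duplicate of the actual world”, $C(w)$ for “$w$ contains consciousness”), the structure is a straightforward modus tollens:

```latex
P \rightarrow \forall w\,\big(D(w) \rightarrow C(w)\big)   % step 2
\exists w\,\big(D(w) \wedge \neg C(w)\big)                 % step 3: some logically possible world is a zombie world
\therefore \neg P                                          % step 4, by modus tollens
```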
These two steps are contradictory. In the first one, you state that a world physically indistinguishable from ours must include consciousness; then in the very next point, you consider a world physically indistinguishable from ours which does not include consciousness to be logically possible—exactly what the previous step claims is not logically possible.
Or am I misunderstanding something?
The first includes “if physicalism is true”; the second doesn’t.
Ah, right. Thanks, I somehow missed that.
So the second is then implicitly assuming that physicalism is not true; it seems to me that the whole argument is basically a longwinded way of saying “I can’t imagine how consciousness can possibly be physical, therefore since I am conscious, physicalism is false”.
One might as easily imagine a world physically indistinguishable from ours, but in which there is no gravity, and thence conclude that gravity is not physical but somehow magical.
For some values of “imagine”. Given relativity, it would be pretty difficult to coherently unplug gravity from mass, space and acceleration. It would be easier under Newton. I conclude that the unpluggability of qualia means we just don’t have a relativity-grade explanation of them, an explanation that makes them deeply interwoven with other things.
That seems like a reasonable conclusion to draw.
Not really. Just postulate something which does not have the same proportionality constant relating inertia to mass.
Inertia and mass are the same thing. You probably meant “the same proportionality constant between mass and gravitational force”, that is, imagine that the value of Newton’s constant G was different.
But this (like CCC’s grandparent post introducing the gravity analogy) actually goes in Chalmers’ favor. Insofar as we can coherently imagine a different value of G with all non-gravitational facts kept fixed, the actual value of G is a new “brute fact” about the universe that we cannot reduce to non-gravitational facts. The same goes for consciousness with respect to all physical facts, according to Chalmers. He explicitly compares consciousness to fundamental physical quantities like mass and electric charge.
The problem is that one aspect of the universe being conceptually irreducible at the moment (which is all that such thought experiments prove) does not imply it might forever remain so when fundamental theory changes, as Peterdjones says. Newton could imagine inertia without gravity at all, but after Einstein we can’t. Now we are able to imagine a different value of G, but maybe later we won’t (and I can actually sketch a plausible story of how this might come to happen if anyone is interested).
No, I meant a form of matter which coexisted with current forms of matter but which was accelerated by a force disproportionately to the amount of force exerted through the gravity force. One such possibility would be something that is ‘massless’ in that it isn’t accelerated by gravity but that has electric charge.
And by definition, the value of G is equal to 1, just like every other proportionality constant. I wasn’t postulating that MG/NS^2 have a different value.
Oooh, good one. I’m trying this if someone ever seriously tries to argue p-zombies with me.
Most versions of the Zombie Argument I’ve seen don’t specify that the world be physically identical to ours, merely indistinguishable.
http://en.wikipedia.org/wiki/Proof_by_contradiction
Agreed.
I’m being told that this is not the case, but I’m struggling to understand how.
I’m curious about your definition of “magical”. Is it the same as dualism?
Within this discussion, I’ve tried to consistently use “magic” as meaning “not physics or logic”. Essentially, things that, given a perfect model of the (physical) universe that we live in, would be considered impossible or would go against all predictions for no cause that we can attribute to physics or logic or both.
So dualism is only one example, another could be intervention by the Lords of the Matrix (depending on how you draw boundaries for “universe that we live in”), and God or ontologically basic mental entities could be others.
So the assertion “we have something magical” is equivalent to “qualia is made of nonlogics” (although “nonlogics” is arguably still much more useful than “nonapples” as a conceptspace pointer).
Technically qualia is “non-physics”. Since if a human with a brain that does thinking is physics + logic, qualia is just the logic given the physics.
Errh, yes. Thank you. I think “nonlogics” is a decent fix, in light of this.
Err, yes... that is the intended conclusion. But I don’t think you can say an argument is question-begging because the intended conclusion follows from the premises taken jointly.
And how, pray tell, did they reach into the vast immense space of possible hypotheses and premises, and pluck out this one specific set of premises which, if you accept it completely, inevitably results in the conclusion that we have something magical granting us qualia?
The begging was done while choosing the premises, not in one of the premises individually.
Premise: All Bob Chairs must have seventy three thousand legs exactly.
Premise: Things we call chairs are illusions unless they are Bob Chairs.
Premise: None of the things we call chairs have exactly seventy three thousand legs.
Therefore, all of the things we call chairs are illusions and do not exist.
I seriously don’t see how the above argument is any more reasonable and any more or less question-begging than the p-zombie argument I’ve made in the grandparent. No single premise here assumes the conclusion, right? So no problem!
ETA: Perhaps it’s more clear if I just say that in order for the premises of the grandparent to be logically valid, one must also assume as a premise that having the information patterns of the human brain without creating qualia is possible in the first place. This is the key point that is the source of the question begging: It is assumed that the brain interactions do not create qualia, implicitly as part of the premises, otherwise the statement “P-zombies have the same brain interactions that we do but no qualia” is directly equivalent to “A → B, A, ¬B”.
So for A (brain interactions identical to us), B (possesses qualia), and C (has something magical), writing z for the zombie and u for us:
A_z ∧ ¬B_z (premise: the zombie)
A_u ∧ B_u (premise: us)
Since the zombie and we agree on A but differ on B, B is not determined by A alone: ¬(A → B).
B must come either from A or from some extra factor C, so ¬(A → B) → C.
Refactor to one single “question-begging” premise:
(A_z ∧ ¬B_z) ∧ (A_u ∧ B_u) ∧ (¬(A → B) → C)
...therefore C.
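As a sanity check on the shape of that argument (not its soundness), here is a toy model in Python (all names hypothetical): if two beings agree on the physical facts A but differ on qualia B, then B cannot be a function of A alone, which is exactly the slot the extra factor C is invented to fill.

```python
def b_is_function_of_a(beings):
    """Return True iff qualia (B) is determined by physical state (A),
    i.e. no two beings share the same A but differ on B."""
    seen = {}
    for a, b in beings:
        if a in seen and seen[a] != b:
            return False
        seen[a] = b
    return True

# (A, B) pairs: identical physical state, different qualia by hypothesis
human = ("human-brain", True)
zombie = ("human-brain", False)

print(b_is_function_of_a([human]))          # True: we alone are consistent
print(b_is_function_of_a([human, zombie]))  # False: the zombie breaks A -> B
```

This is only the bookkeeping of the thought experiment; whether such a zombie is actually possible is the whole dispute.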
I suppose they have the ability to formulate arguments that support their views. Are you saying that the honest way to argue is to fling premises together at random and see what happens?
Joint implication by premises is validity, not petitio principii.
That is an example of a No True Scotsman fallacy, or argument by tendentious redefinition. I don’t see the parallel.
Eh. I’m bad at informal fallacies, apparently.
However, all they’ve done is pick specific premises that hide clever assumptions that logically must end up with their desired conclusion, without any reason in particular to believe that their premises make any sense. See the amateur logic I did in my edits of the grandparent.
It is very much assumed, by asserting the first, third and fourth premises, that qualia does not require brain interactions, as a prerequisite for positing the existence of p-zombies in the thought experiment.
Again: not assuming physicalism is not the same as assuming non-physicalism.
They assume (correctly) that if ¬B and A, then ¬(A → B)
Then they assume ¬B and A.
...
You’ve flattened out all the stuff about conceivability and logical possibility.
I have, but unfortunately that’s mostly because I don’t know the formal nomenclature and little details of writing conceivability and possibility logical statements.
I wouldn’t really trust myself to write formal logic with conceivability and probability without missing a step or strawmanning one of the premises at some point, with my currently very minimal understanding of that stuff.
But putting in the statement that zombies have all of the physical and logical characteristics of people, but lack some other characteristic, requires that some non-physical characteristic exists. You can’t say “I don’t assume magic” and then assume a magician!
Well, I understand that if consciousness was physical, but didn’t affect our behavior, then removing that physical process would result in a zombie. That’s usually the example given, not magic.
The usual p-zombie argument in the literature does not assume consciousness is entirely physical. Which is not the same as assuming it is non-physical...
Just to be clear, the fact that they talk about bridging laws or such doesn’t mean they didn’t generate the idea with magical thinking, or that it has a hope in hell of being actually true. It just means they managed to put a band-aid over that particular fallacy.
So physicalism is a priori true, even when there is no physical explanation of some phenomenon?
No comment. That’s not what I said and I’m not saying it now. My point is that, while the p-zombie argument may have been formulated with “magical” explanations in mind, it does not directly reference them in the form usually presented.
I see little point in ignoring what an argument states explicitly in favour of speculations about what the formulators had in mind. I also think that rhetorical use of the word “magic” is mind-killing. Quantum teleportation might seem magical to a 19th century physicist, but it still exists.
Which is why my point is that the argument makes no mention of “magic”.
Removing something physical doesn’t create a p-zombie, it creates a lobotomized person. If there was a form of brain damage that could not be detected by any means and had no symptoms, would it be a possible side effect of medication?
Supposedly the argument works just as well as a counterfactual.
Compare two people who are physically identical except for one thing that changes nothing else at the micro or macro scale. Clearly, one of them is a p-zombie, because that one lacks qualia.
I still don’t understand what the difference is between someone who lacks consciousness but is otherwise identical to someone who has consciousness.
With actual humans, p-zombies are almost certainly impossible. But imagine a world in which humans aren’t controlled by their brains; the Zombie Fairy intervenes and makes them act as she predicts they would act. Now the Zombie Fairy is so good at her job that the people of this world experience controlling their own bodies; but in actuality, they have no effect on their actions (except by coincidence.) If one of their brains was somehow altered without the Fairy’s knowledge, they would discover their strange predicament (but be unable to tell anyone—they would live out their life as a silent observer.) If one of their brains was destroyed without the Fairy’s noticing, they would continue as a lifeless puppet, indistinguishable from regular humans—a p-zombie.
Now, it could be argued that the Fairy—who is what is usually referred to as a Zombie Master—is herself conscious, and as such these zombies are not true p-zombies. But this should give you some idea of what people are imagining when they say “p-zombie”.
That scenario sounds identical to “everybody is a p-zombie”.
Is there also a perception fairy, since perceiving the zombie fairy’s influence doesn’t create any physical changes in brain state or behavior?
It is! Unless of course you happen to be one of the poor people who exist solely to grant said zombies qualia.
Perception proceeds as normal in this counterfactual world. Of course, this world is not necessarily identical to our world, depending on how obvious the Perception Fairy is.
Does “As normal” mean that noticing the effects of the zombie fairy results in electrochemical changes in the brain that are different from those which occur in the absence of noticing those effects?
For some reason I can understand it better if I think of a sentient computer with standard input devices as things that it considers “real”, and a debugger that reads and alters memory states at will, outside the loop of what the machine can know. Assuming that such a system could be self-aware in the same sense that I think I am, how would it respond if every time it asked a class of question, the answer was modified by ‘magic’?
...yes? How would one notice something without changing brain-state to reflect that?
I think you may have misunderstood. The fairy controls the bodies, but has perfectly predicted in advance what the human would have done. Thus whatever they try to do is simultaneously achieved by the fairy; but they have no effect on their bodies. The fairy doesn’t alter their brains at all. If something else did alter their brain, but for some reason the fairy didn’t notice and update her predictions, then they would become “out of sync” with their body.
Brain state is in principle detectable. If the fairy changes brain state, the fairy is detectable by physical means and thus physical.
Oh, I see. Yes, the fairy is physical; the brains, however, could in principle be epiphenomenal (although they aren’t, in this example.)
You need to specify whether your “putting in” is assuming or concluding. In general, it would help to refer to a concrete example of a p-zombie argument from a primary source.
Defining. A p-zombie is defined by all of the primary sources as having all of the physical qualities that humans have, but lacking something that humans have.
A magician is defined as a human that can do magic. Magicians (people identical to humans but with supernatural powers) don’t prove anything about physicalism any more than p-zombies do, unless it can be shown that either are exemplified.
The literature suggests that p-zombies can be significant if they are only conceptually possible. In fact, zombie theorists like Chalmers think they are naturalistically impossible and so cannot be exemplified. You may not like arguments from conceptual possibility, but he has argued for his views, where you have so far only expressed opinion.
Then the literature suggests that magicians can be significant if they are only conceptually possible. And the conceptual possibility of non-physicalism disproves physicalism.
The literature does not talk about magicians.
Magicians are defined as physically identical to humans and p-zombies but they have magic. Magic has no physical effects, doesn’t even trigger neurons, but humans with magic experience it and regular humans and p-zombies don’t.
So it has all of the characteristics of qualia. Any evidence for qualia is also evidence for this type of magic.
No. Qualia are not defined as epiphenomenal or non-physical.
Yes. The argument of the grandparent is logically consistent AFAICT.
P-zombies are (non-self-contradictory) IFF qualia come from nonlogics and nonphysics.
Qualia come from nonlogics and nonphysics IFF nonlogics and nonphysics are possible. (this is trivially obvious)
P(Magicians | “nonlogics and nonphysics are possible”) > P(Magicians | ¬”nonlogics and nonphysics are possible”)
ETA: That last one is probably misleading / badly written. Is there a proper symbol for “No definite observation of X or ¬X”, AKA the absence of this piece of evidence?
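The last inequality can be stated in standard Bayesian notation (writing M for “Magicians” and N for “nonlogics and nonphysics are possible”). Assuming 0 &lt; P(N) &lt; 1, the following formulations of “N is evidence for M” are equivalent:

```latex
% Equivalent ways of saying "N is evidence for M", given 0 < P(N) < 1:
\begin{align*}
P(M \mid N) &> P(M \mid \lnot N) \\
\iff\quad P(M \mid N) &> P(M) \\
\iff\quad P(N \mid M) &> P(N \mid \lnot M)
\end{align*}
```

As for the ETA: there is no standard single symbol for “no definite observation of X or ¬X.” The usual move is to condition on the event that actually occurred—“we looked and observed neither”—which is itself evidence with its own likelihoods.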
If qualia are defined such that it is conceptually possible that one person can experience qualia while a physically identical person does not, then qualia are defined to be non-physical.
No, they are just implied to be. There is an infinity of facts implied by the definition of “2”, but they are not in the definition, which is finite.
Didn’t we have this exact same argument? Even if qualia are generated by our (physical) brains, this doesn’t mean that they could counterfactually be epiphenomenal if something was reproducing the effects they have on our bodies.
The same could be said of cats: Even if cats are part of the physical universe, they could counterfactually be epiphenomenal if something was reproducing the effects they have on the world.
How does the argument apply to qualia and not to cats?
Gravity!
I think I’m seeing a pattern in this topic of discussion. And it is reminiscent of a certain single-sided geometric figure.
Well, if something is reproducing the effect of cats on the world we have no reason to posit cats as existing anyway, unless we are cats.
What about all of the observations of cats? Aren’t they adequate reason to posit cats as existing?
Um, no. Not if something is reproducing them.
Taboo ‘reproducing’.
Generating effects indistinguishable from the result of an ordinary cat—from reflected light to half-eaten mice. Of course, there are a few … extra effects in there. So you know none of you are ordinary cats.
The epiphenomenal cats, on the other hand, are completely undetectable. Except to themselves.
I’m not granting cats a point of view for this discussion: they are something that we can agree clearly exists and we can describe their boundaries with a fair degree of precision.
What do these ‘extra effects’ look like, and are they themselves proof that physicalism is wrong?
The whole point was that if the cats have a point of view, then they have the information to posit themselves; even though an outside observer wouldn’t.
Are you saying that qualia have a point of view, or are positing themselves?
It’s subjective information. I can’t exactly show my qualia to you; I can describe them, but so can a p-zombie.
Didn’t I say I wasn’t going to discuss qualia with you until you actually knew what they were? Because you’re starting to look like a troll here. Not saying you are one, but …
So, you’re saying that it is subjective whether qualia have a point of view, or the ability to posit themselves?
Because I have all of the observations needed to say that cats exist, even if they don’t technically exist. I do not have the observations needed to say that there is a non-physical component to subjective experience.
Who’s talking about non-physical components? “Qualia” has more than one meaning.
Y’know, I did say I wasn’t going to discuss qualia with you unless you knew what they were. Do some damn research, then come back here and start arguments about them.
or even if we were.
I’m very confused. Are you implying that experiencing qualia is no reason to posit that qualia exists, period?
Or maybe you’re just saying “Hey, unless the cats have conscious self-aware minds that can experience cats, then they still can’t either!”—which I took for granted and assumed the jump from there to “assuming cats have the required mental parts” was a trivial inference to make.
I just don’t see the need for the exception in MugaSofer’s statement, whether you agree with the statement itself or not.
So if something were shown to be reproducing the effect of human minds on the world, you would have no reason to posit yourself as existing anyway?
If you are an artifact of such a reproduction, would you call yourself existing in the same way as if you weren’t?
I would.
That’s a bit why I’m confused as to why you’re (it seems to me) claiming we have no reason to posit self-existence in such a case.
Maybe your objection is that we should taboo and dissolve that whole “existing” thing?
OK, it’s just that the statement “if something is reproducing the effect of cats on the world we have no reason to posit cats as existing” declares that something that is not really a “cat” the way we perceive it, but only an “effect of a cat”, does not “exist”. Ergo, if you are only an effect of a cat, you don’t exist as a cat.
Wouldn’t that be nice, but unfortunately EY-style realism and my version of instrumentalism seem to diverge at that definition.
Oh. Then we agree, I think, on the fundamentals of what makes a cat “exist” or not.
Does this also imply the same exist-”exist” perception problem with qualia in your model, or am I horribly misinterpreting your thoughts?
Re qualia, I don’t understand what you are asking. The term means no more to me than a subroutine in a reasonably complex computer program, albeit currently run on a different substrate.
And, if I understand correctly, this subroutine exists (and is felt / has effect on its host program) whether or not it “exists as qualia” in the particular sense that some clever arguer wants to define qualia as anything other than that subroutine. The fact that there is an effect of the subroutine is all that is required for the subroutine to exist in the first sense, while whether it is “the subroutine” or only a mimicking effect is only relevant for the second sense of “exist”, which is irrelevant to you.
Is this an accurate description?
Pretty much, as I don’t consider this “second sense” to be well defined.
But I specifically stated you were a cat, not an effect of a cat.
I’m not sure how to tell the difference, or even if there is one.
In this case, feel free to assume no-one ever tries to observe cat brains. The “simulation” only has to reproduce your actions, which it does with magic.
Could you taboo the bolded phrase, please?
Sure. an artifact of such a reproduction = whatever you mean by “effect of cats” in your original statement.
Oh, well there’s your problem then. You’re not part of “the effect of cats”. That’s stuff like air displacement, reflected light, purring, that sort of thing.
Where do effects of cats stop and cats begin?
If you’re using some nonstandard epistemology that doesn’t distinguish between observations that point to something and the thing itself, then nothing. Otherwise the difference between a liar and a reality warper.
Looks like we have an insurmountable inferential distance problem both ways, so I’ll stop here.
Fair enough.
Careful, effects are not the same things as observations.
Interesting point. Observations are certainly effects, but you’re right, not all effects are observations. Of course, the example wouldn’t be hurt by my specifying that they only bother faking effects that will lead to observations ;)
I think it would. I think it’s not the same example at all anymore.
Something that reproduces all effects of cats is effectively producing all the molecular interactions and neurons and flesh and blood and fur that we think are what produces our observations of cats.
On the other hand, something that only reproduces the effects that lead directly to observations is, in its simplest form, something that analyzes minds and finds out where to inject data into them to make those minds have the experience of the presence of cats, and that analyzes what other things in the world a would-be cat would change and changes those directly (i.e., if a cat would’ve drunk milk and produced feline excrement, then milk disappears and feline excrement appears, and a human’s brain is modified so that the experience of seeing a cat drink milk and make poo is simulated).
Not unless something is somehow interacting with their neurons, which I stated isn’t happening for simplicity, and most of the time not for the blood or flesh.
Oh, I meant the interactions occur where they would if the cat was real, but these increasingly-godlike fairies are lazy and don’t bother producing them if their magic tells them it wouldn’t lead to an observation.
My (admittedly lacking) understanding of Information Theory precludes any possibility of perfectly reproducing all effects of the presence of cats throughout the universe (or multiverse or whatever) without having in some form or another a perfect model or simulation of all the individual interactions of the base elements which cats are made of. This would, as it contains the same patterns within the model which when made of “physical matter” produce cats, essentially still produce cats.
So if there’s a mechanism somewhere making sure that the reproduction is perfect, it’s almost certainly (to my knowledge) “simulating” the cats in some manner, in which case the cats are in that simulation and perceive the same experiences they would if they were “really” there in atoms instead of being in the simulation.
If you posit some kind of ontologically basic entity that somehow magically makes a universal consistency check for the exact worldstates that could plausibly be computed if the cat were present, without actually simulating any cat, then sure… but I think that’s also not the same problem anymore. And it requires accepting a magical premise.
Oh, right. Yup, anything simulating you that perfectly is gonna be conscious—but it might be using magic. For example, perhaps they pull their data out of parallel universe where you ARE real. Or maybe they use some black-swan technique you can’t even imagine. They’re fairies, for godssake. And you’re an invisible cat. Don’t fight the counterfactual.
Haha, that one made me laugh. Yes, it’s fighting the counterfactual a bit, but I think that this is one of the reasons why there was a chasm of misunderstandings in this and other sub-threads.
Anyway, I don’t see any tangible things left to discuss here.
Victory! Possibly for both sides, that could well be what’s causing the chasm.
So you’re saying we shouldn’t believe in ourselves?
To paraphrase EY, What do you think you know [about yourself], and how do you think you know it?
Oh, you mean we shouldn’t assume we’re the same as the other cats. Obviously there’s some possibility that we’re unique, but (assuming our body is “simulated” as well, obviously) it seems like all “cats” probably contain epiphenomenal cats as well. Do you think everyone else is a p-zombie? Obviously it’s a remote possibility, but...
No, I did not mean that, unless one finds some good evidence supporting this additional assumption. My point was quite the opposite, that your statement “if something is reproducing the effect of cats on the world we have no reason to posit cats as existing” does not need a qualifier.
Not sure why you bring that silly concept up…
Look, if all “cats” are actually magical fairies using their magic to reproduce the effect of cats, yet I find myself as a cat—whose effect on the world consists of a fairy pretending to be me so well even I don’t notice (except just now, obviously)—then, for the one epiphenomenal cat I can know about—myself—I am associated with a “cat” that perfectly duplicates my actions. I can’t check if all “cats” have similar cats attached, since they would be epiphenomenal, but it seems likely, based on myself, that they do.
Because the whole point of this cat metaphor was to make a point about p-zombies. That’s what they are. They’re p-zombies for cats instead of qualia.
Well, the point was to point out that we only think things exist because we experience them, and therefore that anything which duplicates the experience is as real as the original artifact.
Suppose there were to be no cats, but only a magical fairy which knocks things from the mantlepiece and causes us to hallucinate in a consistent manner (among other things). There is no reason to consider that world distinguishable, even in principle, from the standard model.
Now, suppose that you couldn’t see cats, but instead could see the ‘cat fairy’. What is different now, assuming that the cat fairy is working properly and providing identical sensory input as the cats?
There is no (observable) difference. That’s the point. But presumably someone found a way to check for fairies.
If there is no observable (even in principle) difference, what’s the difference? P-zombies are not intended or described as equivocal to humans.
There are two differences: the presence of the fairy (which can be observed … somehow) and the possibility of deviating from the mind. P-zombies are described as acting just like humans, but lack consciousness. “Cats” are generally like the human counterparts to p-zombies (who act just the same—by definition—but have epiphenomenal consciousness.)
TL;DR: it’s observable in principle. But I, as author, have decreed that you arn’t getting to check if your friends are cats as well as “cats”.
Y’know, I’m starting to think this may have been a poor example. It’s a little complicated.
Complicated isn’t a bad thing.
If the fairy is observable despite being in principle not observable… I break.
If it is in principle possible to experience differently from what a quantum scan of the brain and body would indicate, but behave in accordance with physicalism … how would you know if what you experienced was different from what you thought you experienced, or if what you thought was different from what you honestly claimed that you thought?
That would seem to be close to several types of abnormal brain function, where a person describes themselves as not in control of their body. I think those cases are better explained by abnormal internal brain communication, but further direct evidence may show that the ‘reasoning’ and ‘acting’ portions of some person are connected similarly enough to normal brains that they should be working the same way, but aren’t. If there were a demonstrated case of a pattern of neurons firing corresponding to similar behavior in all typical brains and a different behavior in a class of brains of people with such abnormal functioning (or of physically similar neurons firing differently under similar stimuli), then I would accept that as evidence that the fairy perceived by those people existed.
Well, it’s proving hard to explain.
It’s observable. The cats are epiphenomenal, and thus unobservable, except to themselves.
Pardon?
Well, if they can tell you what the problem is then they clearly have some control. More to the point, it is a known feature of the environment that all observed cats are actually illusions produced by fairies. It is a fact, although not generally known, that there are also epiphenomenal (although acted upon by the environment) cats; these exist in exactly the same space as the illusions and act exactly the same way. If you are a human, this is all fine and dandy, if bizarre. But if you are a sentient cat (roll with it) then you have evidence of the epiphenomenal cats, even though this evidence is inherently subjective (since presumably the illusions are also seemingly sentient, in this case.)
How could you tell if you were experiencing something differently from the way a p-zombie would (or, if you are a p-zombie, if you were experiencing something differently from the way a human would)?
In every meaningful way, the cat fairy is a cat. There is no way for an epiphenomenal sentient cat to differentiate itself from a cat fairy, nor any way for a cat fairy to differentiate itself from whatever portions of ‘cats’ it controls (without violating the constraints on cat fairy behavior). Of course, there’s also the conceivability of epiphenomenal sentient ghosts which cannot have any effect on the world but still observe. (That’s one of my death nightmares—remaining fully perceptive and cognitive but unable to act in any way.)
You seem to be somewhat confused about the notion of a p-zombie. A p-zombie is something physically identical to a human, but without consciousness. A p-zombie does not experience anything in any way at all. P-zombies are probably self-contradictory.
I am experiencing something, therefore I am not a p-zombie.
Consider the possibility that you are not experiencing everything that humans do. Can you provide any evidence, even to yourself, that you are? Could a p-zombie provide that same evidence?
How is this relevant? My point is that I’m experiencing what I’m experiencing.
And p-zombies are experiencing what they’re experiencing. You can’t use a similarity to distinguish.
P-zombies aren’t experiencing anything. By definition.
Those two statements are both tautologically true and do not contradict one another.
What would be different, to you, if you weren’t experiencing anything, but were physically identical?
I wouldn’t be experiencing anything.
I thought it had been established that wasn’t a difference.
Are you asking what I would experience? Because I wouldn’t. Not to mention that such a thing can’t happen if, as I expect, subjective experience arises from physics.
Sorry, I thought you were disagreeing with me.
It is relevant because if you cannot find any experimental differences between you and a you NOT experiencing, then maybe there is no such difference.
I cannot present you with evidence that I am experiencing, except maybe by analogy with yourself. I, however, know that I experience because I experience it.
Because p-zombies aren’t conscious. By definition.
Well, the cat does have an associated cat fairy. So, since the only cat fairy whose e-cat it could observe (its own) has one, I think it should rightly conclude that all cat fairies have cats. But yes, epiphenomenal sentient “ghosts” are possible, and indeed the p-zombie hypothesis requires that the regular humans are in fact such ghosts. They just don’t notice. Yes, there are people arguing this is true in the real world, although not all of them have worked out the implications.
What would be the subjective difference to you if you weren’t ‘conscious’?
To have a subjective anything, you have to be conscious. By definition, if you consider whether you’re a P-zombie, you’re conscious and hence not one.
Now conceive of something which is similar to consciousness, but distinct; like consciousness, it has no physical effects on the world, and like consciousness, anyone who has it experiences it in a manner distinct from their physicality. Call this ‘magic’, and people who posses it ‘magi’.
What aspect does magic lack that consciousness has, such that a p-zombie cannot consider if it is conscious, but a human can ask if they are a magi?
Who said consciousness has no effects on the physical world? Apart from those idiots making the p-zombie argument that is. Pretty much everyone here thinks that’s nonsense, including me and, statistically, probably srn347 (although you never know, I guess.)
Regarding your Magi, if it affects their brain, it’s not epiphenomenal. So there’s that.
The point I am trying to make is that p-zombies are nonsensical. I’m demonstrating that they are exactly as sensible as an absurd thing.
And the point I am trying to make is that p-zombies are not only a coherent idea, but compatible with human-standard brains as generally modelled on LW. That they don’t in any way demonstrate the point they were intended to make is quite another thing.
Yes, it merely requires redefining things like ‘conscious’ or ‘experience’ (whatever you decide p-zombies do not have) to be something epiphenomenal and incidentally non-existent.
Um, could you please explain this comment? I think there’s a fair chance you’ve stumbled into the middle of this discussion and don’t know what I’m actually talking about (except that it involves p-zombies, I guess.)
I know only the words spoken, not those intended. (And I concluded early in the conversation that the entire subthread should be truncated and replaced with a link. So much confusion and muddled thinking!)
Seems reasonable. For reference, then, I suggested the analogous thought experiment of fairies using magic to reproduce all the effects of cats on the environment. Also, there are epiphenomenal ghost cats that occupy the same space and are otherwise identical to the fairies’ illusions, down to the subatomic level. An outside observer would, of course, have no reason to postulate these epiphenomenal cats, but if the cats themselves were somehow conscious, they would.
This was intended to help with understanding p-zombies, since it avoids the … confusing … aspects.
Like brains and rotting flesh?
Whoops. Changed it to “confusing”.
How is it that something which is physically identical to a human and has a physical difference from a human is a coherent concept?
It’s not. I meant that we can replace the soul or whatever with a neurotypical human brain and still get a coherent thought experiment.
Were you saying that the results of that experiment were completely uninteresting?
Well, I personally find it an interesting concept. It’s basically a reformulation of standard Sequences stuff, though, so it shouldn’t be surprising, at least ’round here.
How does that not apply to qualia, unless we are qualia?
We experience qualia. Just like the cats experience being cats.
EDIT: are you arguing we have insufficient evidence to posit qualia?
I experience qualia in exactly the same sense that I experience cats.
All of the evidence I have to posit qualia is due to effects that qualia have on me. Likewise for cats.
I’m pretty sure this comment means you don’t understand the concept of “qualia”.
How do you experience cats?
Unless you actually understand what “qualia” means, I’m not going to bother discussing the topic with you. If you have, in fact, done the basic research necessary to discuss p-zombies, then I’m probably misinterpreting you in some way. But I don’t think I am.
Oddly enough, I feel that if you had done the basic research and explored the same lines of thought I did, you would agree with me.
My questions, by the way, aren’t rhetorical. I’m trying to pin down where your understanding differs from mine.
Neither are mine.
I’m saying that there is no difference between a p-zombie and the alternative.
Though on the other hand, we don’t have room to take everything serious dudes say seriously—too many dudes, not enough time.
If a problem happens not to exist, then I suppose one will just have to nerve onesself and not see it. Yes, there are non-hard problems of consciousness, where you explain how a certain process or feeling occurs in the brain, and sure, there are some non-hard problems I’d wave away with “well, that’s solved by psychology somewhere.” But no amount of that has any bearing on the “hard problem,” which will remain in scare quotes as befits its effective nonexistence—finding a solution to a problem that is not a problem would be silly.
(EDIT: To clarify, I am not saying qualia do not exist, I am saying some mysterious barrier of hardness around qualia does not exist.)
OK. Then demonstrate that the HP does not exist, in terms of Chalmers’ specification, by showing that we do have a good explanation.
Well, said Achilles, everybody knows that if you have A and B and “A and B imply Z,” then you have Z.
How an Algorithm Feels From Inside.
The Visual Cortex is Used to Imagine
Stimulating the Visual Cortex Makes the Blind See
This sort of thing is sufficient for me, like Achilles’ explanations were enough for Achilles. But if, say, the perception of the hard problem was causally unrelated to the actual existence of a hard problem (for epiphenomenalism, this is literally what is going on), then gosh, it would seem like no matter what explanations you heard, the hard problem wouldn’t go away—so it must be either a proof of dualism or a mistake.
But not for me. Indeed. I am pretty sure none of those articles is even intended as a solution to the HP. And if they are, why not publish them in a journal and become famous?
Intended as a solution to FW.
So? Every living qualiaphile accepts some sort of relationship between brain states and qualia.
So? I said nothing about epiphenomenalism
The non-parenthetical was a throwback to a whole few posts ago, where I claimed that perception of the hard problem was often from the mind projection fallacy.
Other than that, I don’t have much to respond to here, since you’re just going “So?”
I can’t find the posting, and I don’t see how the MPF would relate to epiphenomenalism anyway.
How did you expect to convince me? I am familiar with all the stuff you are quoting, and I still think there is an HP. So do many people.
For practical reasons, I think that’s fair enough...so long as we’re clear that the above is a fully general counterargument.
Right. I have not said any actual arguments against the hard problem of consciousness.
EDIT: Was true when I said it, then I replied to PeterD, not that it worked (as I noted in that very post, the direct approach has little chance against a confusion)
Argument for the importance of the HP: it is about the only thing that would motivate an educated 21st century person into doubting physicalism.
The rest mostly go, “this could only be explained by a mysterious substance, there are no mysterious substances, therefore this does not exist.”
I don’t know why you guys keep harping about substances. Substance dualism has been out of favour for a good century.
Sorry, I was misusing terminology. Any ignorance-generating / ignorance-embodying explanation (e.g.s quantum mysticism / elan vital) uses what I’m calling “mysterious substance.”
Basically I’m calling “quantum” a mysterious substance (for the quantum mystics), even though it’s not like you can bottle it.
Maybe I should have said “mysterious form?” :D
There is a Hard Problem, because there is basically no (non-eliminative) science or technology of qualia at all. We can get a start on the problem of building cognition, memory and perception into an AI, but we can’t get a start on writing code for Red or Pain or Salty. You can tell there is basically no non-eliminative science or technology of qualia because the best LWers can quote is Dennett’s eliminative theory.
Do you have evidence of this? The PhilPapers survey suggests that only 56.5% of philosophers identify as ‘physicalists,’ and 59% think that zombies are conceivable (though most of these think zombies are nevertheless impossible). It would also help if you explained what you mean by ‘the theory of qualia.’
Sellars’ argument, I think, rests on a few confusions and shaky assumptions. I agree this argument is still extremely widely cited, but I think that serious epistemologists no longer consider it conclusive, and a number reject it outright. Jim Pryor writes:
I mentioned in a subsequent post that there was an ambiguity in my original claim. Qualia have been used by philosophers to do two different jobs: 1) as the basis of the hard problem of consciousness, and 2) as the foundation of foundationalist theories of empiricism. Sellars’ essay, in particular, is aimed at (2), not (1), and the mention of ‘qualia’ to which I was responding was probably a case of (1). The question of physicalism and the conceivability of p-zombies isn’t directly related to the epistemic role of qualia, and one could reject classical empiricism on the basis of Sellars’ argument while still believing that the reality of irreducible qualia speaks against physicalism and for the conceivability of p-zombies.
That may be; it’s a bit outside my ken. Thanks for posting the quote. I won’t try to defend the overall organization of EPM, which is fairly labyrinthine, but I have some confidence in its critiques. I’d need more familiarity with Pryor’s work to level a serious criticism, but on the basis of your quote he seems to me to be missing the point: Sellars is not arguing that something’s appearing to you in a certain way is a state (like a belief) which requires justification. He argues that it is not tenable to think of this state as being independent of (e.g. a foundation for) a whole battery of concepts, including epistemic concepts like ‘being in standard perceptual conditions’. Looking a certain way is posterior to (a sophistication of) being that way: looking red is posterior to simply being red. And this is an attack on the epistemic role of qualia insofar as that theory implies that ‘looking red’ is in some way fundamental and conceptually independent.
Yes, that is the argument. And I think its soundness is far from obvious, and that there’s a lot of plausibility to the alternative view. The main problem is that this notion of ‘conceptual content’ is very hard to explicate; often it seems to be unfortunately conflated with the idea of linguistic content. But do we really think that the only things that should add to or take away from my credence in any belief are the words I think to myself? In any case, Pryor’s paper Is There Non-Inferential Justification? is probably the best starting point for the rival view. And he’s an exceedingly lucid thinker.
I’ll read the Pryor article in more detail, but from your gloss and from a quick scan, I still don’t see where Pryor and Sellars are even supposed to disagree. I think, without being totally sure, that Sellars would answer the title question of Pryor’s article with an emphatic ‘yes!’ Experience of a red car justifies belief that the car is red. While experience of a red car also presupposes a battery of other concepts (including epistemic concepts), these concepts are not related to the knowledge of the redness of the car as premises to a conclusion.
Here’s a quote from EPM p148, which illustrates that the above is Sellars’ view (italics mine). Note that in the following, Sellars is sketching the view he wants to attack:
So Sellars wants to argue that empiricism has no foundation because experience (as an epistemic success term) is not possible without knowledge of a bunch of other facts. But it does not follow from this that a) Sellars thinks knowledge derived from experience is inferential, or b) Sellars thinks non-inferential knowledge as such is a problem.
But that said, I haven’t read enough of Pryor’s paper(s) to understand his critiques. I’ll take a look.
I’m not at all convinced that all LWers have been persuaded that they don’t have qualia.
Amongst some philosophers.
Hmmm. The only enthusiast for Sellars I know finds it necessary to adopt Direct Realism, which is a horribly flawed theory. In fact, most of the problems with it consist in reconciling it with a naturalistic worldview.
Well, it’s probably important to distinguish between two uses to which the theory of qualia is put: first as the foundation of foundationalist empiricism, and second as the basis for the ‘hard problem of consciousness’. Foundationalist theories of empiricism are largely dead, as is the idea that qualia are a source of immediate, non-conceptual knowledge. That’s the work that Sellars (a strident reductivist and naturalist) did.
Now that I read it again, I think my original post was a bit misleading, because I implied that the theory of qualia as establishing the ‘hard problem’ is also a dead theory. This is not the case, and important philosophers still defend the hard problem on these grounds. Mea culpa.
Once direct realism as an epistemic theory is properly distinguished from a psychological theory of perception, I think it becomes an extremely plausible view. I think I’d probably call myself a direct realist.
I’d have said that qualia are not a source of unprocessed knowledge, but the processing isn’t conceptual.
I take ‘conceptual’ to mean thought which is at least somewhat conscious and which probably can be represented verbally. What do you mean by the word?
I mean ‘of such a kind as to be a premise or conclusion in an inference’. I’m not sure whether I agree with your assessment or not: if by ‘non-conceptual processing’ you mean to refer to something like a physiological or neurological process, then I think I disagree (simply because physiological processes can’t be any part of an inference, even granting that oftentimes things that are part of an inference are in some way identical to a neurological process).
I think we’re looking at qualia from different angles. I agree that the process which leads to qualia might well be understood conceptually from the outside (I think that’s what you meant). However, I don’t think there’s an accessible conceptual process by which the creation of qualia can be felt by the person having the qualia.
I don’t know what others accept as a solution to the qualia problem, but I’ve found the explanations in “How an algorithm feels from the inside” quite on point. For me, the old sequences solved the qualia problem, and from what I see the new sequence presupposes the same.
I’m not sure I understand what it means for an algorithm to have an inside, let alone for an algorithm to “feel” something from the inside. “Inside” is a geometrical concept, not an algorithmic one.
Please explain what the inside feeling of e.g. the Fibonacci sequence (or an algorithm calculating such) would be.
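For concreteness, here is a minimal iterative sketch, in Python, of the kind of Fibonacci-computing algorithm the question refers to (the function name and details are purely illustrative):

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers.

    A plain state-update loop with no self-representation:
    there is no obvious candidate here for anything that
    could 'feel' anything from the inside.
    """
    a, b = 0, 1
    sequence = []
    for _ in range(n):
        sequence.append(a)
        a, b = b, a + b
    return sequence

print(fibonacci(8))  # [0, 1, 1, 2, 3, 5, 8, 13]
```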
Well, that’s just the title, you know? The original article was talking about cognitive algorithms (an algorithm, not any algorithm). Unless you assume some kind of un-physical substance having a causal effect on your brain and your continued existence after death, you are what your cognitive algorithm feels like when it’s run on your brain’s wetware.
That’s not true: every formal system that can produce a model of a subset of its axioms might be considered as having an ‘inside’ (as in set theory, where constructible models are called ‘inner models’), and that’s just one possible definition.
So what’s the difference between cognitive algorithms with the ability of “feeling from the inside” and the non-cognitive algorithms which can’t “feel from the inside”?
Please don’t construct strawmen. I never once mentioned unphysical substances having any causal effect, nor do I believe in such. Actually, from my perspective it seems to me that it is you who are referring to unphysical substances called “algorithms”, “models”, the “inside”, etc. All these seem to me to be on the map, not in the territory.
And to say that I am my algorithm running on my brain doesn’t help dissolve for me the question of qualia anymore than if some religious guy had said that I’m the soul controlling my body.
If I knew, I would have already written an AI. This is an NP problem: easy to check, hard to find a solution for. I know that the algorithm running on my brain is of the feeling kind, and the one spouting Fibonacci numbers is not. I can only guess that it involves some kind of self-representation.
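The check/find asymmetry appealed to here can be illustrated with a toy NP problem such as subset sum; a hypothetical Python sketch, with nothing in it specific to cognition:

```python
from itertools import combinations

def verify(numbers, target, candidate):
    """Checking a proposed solution is fast: one sum and one membership test."""
    return sum(candidate) == target and all(x in numbers for x in candidate)

def find(numbers, target):
    """Finding a solution naively takes time exponential in len(numbers):
    we may have to try every subset."""
    for r in range(1, len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None  # no subset sums to the target

solution = find([3, 9, 8, 4, 5, 7], 15)
print(verify([3, 9, 8, 4, 5, 7], 15, solution))  # True
```

The analogy being drawn in the comment above: recognizing that the algorithm running on a brain is of the "feeling" kind is easy, while constructing such an algorithm from scratch is the hard part.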
Sorry if I seemed to do so, I wasn’t attributing those beliefs to you, I was just listing the possible escape routes from the argument.
Well, if you already reject those concepts, you need to tell me what your basic ontology is so we can agree on definitions. I thought we already had “algorithm” covered by “Please explain what the inside feeling of e.g. the Fibonacci sequence (or an algorithm calculating such) would be”.
That’s because it was not the question that my sentence was answering. You have to admit that writing “I’m not sure I understand what it means for an algorithm to have an inside” is a rather strange way to ask “Please justify the way the sequence has in your opinion dissolved the qualia problem”. If you’re asking me that, I might just want to write an entire separate post, in the hope of being clearer and more convincing.
I think this is confusing qualia with intelligence. There’s no big confusion about how an algorithm run on hardware can produce something we identify as intelligence—there’s a big confusion about such an algorithm “feeling things from the inside”.
It seems to me that in a physical universe, the concept of “algorithms” is merely an abstract representation in our minds of groupings of physical happenings, and therefore algorithms are no more ontologically fundamental than the category of “fruits” or “dinosaurs”.
Now starting with a mathematical ontology instead, like Tegmark IV’s Mathematical Universe Hypothesis, it’s physical particles that are concrete representations of algorithms instead (very simple algorithms in the case of particles). In that ontology, where algorithms are ontologically fundamental and physical particles aren’t, you can perhaps clearly define qualia as the inputs of the much-more-complex algorithms which are our minds...
That’s sort-of the way that I would go about dissolving the issue of qualia if I could. But in a universe which is fundamentally physical it doesn’t get dissolved by positing “algorithms” because algorithms aren’t fundamentally physical...
I’m going to write a full-blown post so that I can present my view more clearly. If you want, we can move the discussion there when it’s ready (I think in a couple of days).