If science had them, there would be no mileage in the philosophical project, any more than there is currently mileage in trying to found dualism on the basis that matter can’t think.
I just went to reply to you, but after reading back on what was said I’m seeing a different context.
My stupid comment was about popularity, not about usefulness. I was rambling about general public opinion on belief systems, not what the topic was really about: whether philosophy could move something forward.
We have prima facie reason to accept both of these claims:
A list of all the objective, third-person, physical facts about the world does not miss any facts about the world.
Which specific qualia I’m experiencing is functionally/causally underdetermined; i.e., there doesn’t seem even in principle to be any physically exhaustive reason redness feels exactly as it does, as opposed to feeling like some alien color.
1 is physicalism; 2 is the hard problem. Giving up 1 means endorsing dualism or idealism. Giving up 2 means endorsing reductive or eliminative physicalism. All of these options are unpalatable. Reductionism without eliminating anything seems off the table, since the conceivability of zombies seems likely to be here to stay, to remain as an ‘explanatory gap.’ But eliminativism about qualia means completely overturning our assumption that whatever’s going on when we speak of ‘consciousness’ involves apprehending certain facts about mind. I think this last option is the least terrible out of a set of extremely terrible options; but I don’t think the eliminative answer to this problem is obvious, and I don’t think people who endorse other solutions are automatically crazy or unreasonable.
That said, the problem is in some ways just academic. Very few dualists these days think that mind isn’t perfectly causally correlated with matter. (They might think this correlation is an inexplicable brute fact, but fact it remains.) So none of the important work Eliezer is doing here depends on monism. Monism just simplifies matters a great deal, since it eliminates the worry that the metaphysical gap might re-introduce an epistemic gap into our model.
Which specific qualia I’m experiencing is functionally/causally underdetermined; i.e., there doesn’t seem even in principle to be any physically exhaustive reason redness feels exactly as it does, as opposed to feeling like some alien color.
If I knew how the brain worked in sufficient detail, I think I’d be able to explain why this was wrong; I’d have a theory that would predict what qualia a brain experiences based on its structure (or whatever). No, I don’t know what the theory is, but I’m pretty confident that there is one.
Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are experiences causally determined by non-experiences. How would examining anything about the non-experiences tell us that the experiences exist, or what particular way those experiences feel?
Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are experiences causally determined by non-experiences. How would examining anything about the non-experiences tell us that the experiences exist, or what particular way those experiences feel?
It sounds like you’re asking me to do what I just asked you to do. I don’t know what experiences are, except by listing synonyms or by acts of brute ostension — hey, check out that pain! look at that splotch of redness! — so if I could taboo them away, it would mean I’d already solved the hard problem. This may be an error mode of ‘tabooing’ itself; that decision procedure, applied to our most primitive and generic categories (try tabooing ‘existence’ or ‘feature’), seems to either yield uninformative lists of examples, implausible eliminativisms (what would a world without experience, without existence, or without features, look like?), or circular definitions.
But what happens when we try to taboo a term is just more introspective data; it doesn’t give us any infallible decision procedure, on its own, for what conclusion we should draw from problem cases. To assert ‘if you can’t taboo it, then it’s meaningless!’, for example, is itself to commit yourself to a highly speculative philosophical and semantic hypothesis.
Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are experiences causally determined by non-experiences. How would examining anything about the non-experiences tell us that the experiences exist, or what particular way those experiences feel?
Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are computations causally determined by non-computations. How would examining anything about the non-computations tell us that the computations exist, or what particular functions those computations are computing?
My initial response is that any physical interaction in which the state of one thing differentially tracks the states of another can be modeled as a computation. Is your suggestion that an analogous response would solve the Hard Problem, i.e., are you endorsing panpsychism (‘everything is literally conscious’)?
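The triviality worry in that first sentence can be made concrete with a toy sketch (all names here are illustrative, not from any real library): a physical process that merely copies state can be read as "computing addition" purely through the encoding and decoding we lay over it.

```python
# A toy illustration: whether a physical process "computes" depends on the
# interpretation map we impose on its states. Here the 'physics' does nothing
# but track one state with another, yet under our chosen encoding the system
# "computes" a sum -- the arithmetic lives entirely in the decoding.

def physical_process(state):
    """Toy physical law: one thing differentially tracks another."""
    return state  # the 'output' state perfectly tracks the 'input' state

def encode(a, b):
    # We stipulate that physical state (a, b) "represents" the pair (a, b).
    return (a, b)

def decode(state):
    # We stipulate that the tracked state "represents" the sum.
    a, b = state
    return a + b  # the interpretation, not the physics, does the real work

def computes_sum(a, b):
    return decode(physical_process(encode(a, b)))

print(computes_sum(2, 3))  # 5
```

The point of the sketch: since encode/decode can be chosen freely, almost any state-tracking interaction qualifies as "a computation" of something, which is why the definition threatens to be trivially satisfied.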
Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are living things causally determined by non-living things? How would examining anything about the non-living things tell us that the living things exist, or what particular way those living things are alive?
“Explain how consciousness arises from non-conscious matter” doesn’t seem any more of an impossible problem than “Explain how life arises from non-living matter”.
We can define and analyze ‘life’ without any reference to life: As high-fidelity self-replicating macromolecules that interact with their environments to assemble and direct highly responsive cellular containers around themselves. There doesn’t seem to be anything missing from our ordinary notion of life here; or anything that is missing could be easily added by sketching out more physical details.
What might a purely physical definition of consciousness that made no appeal to mental concepts look like? How could we generate first-person facts from a complex of third-person facts?
What you described as computation could apply to literally any two things in the same causal universe. But you meant two things that track each other much more tightly than usual. It may be that a rock is literally conscious, but if so, then not very much so. So little that it really does not matter at all. Humans are much more conscious because they reflect the world much more, reflect themselves much more, and [insert solution to Hard Problem here].
It may be that a rock is literally conscious, but if so, then not very much so. So little that it really does not matter at all.
I dunno. I think if rocks are even a little bit conscious, that’s pretty freaky, and I’d like to know about it. I’d certainly like to hear more about what they’re conscious of. Are they happy? Can I alter them in some way that will maximize their experiential well-being? Given how many more rocks there are than humans, it could end up being the case that our moral algorithm is dominated by rearranging pebbles on the beach.
Humans are much more conscious because they reflect the world much more, reflect themselves much more, and [insert solution to Hard Problem here].
Hah. Luckily, true panpsychism dissolves the Hard Problem. You don’t need to account for mind in terms of non-mind, because there isn’t any non-mind to be found.
I think if rocks are even a little bit conscious, that’s pretty freaky, and I’d like to know about it.
I meant, I’m pretty sure that rocks are not conscious. It’s just that the best way I’m able to express what I mean by “consciousness” may end up apparently including rocks, without me really claiming that rocks are conscious like humans are—in the same way that your definition of computation literally includes air, but you’re not really talking about air.
Luckily, true panpsychism dissolves the Hard Problem. You don’t need to account for mind in terms of non-mind, because there isn’t any non-mind to be found.
I don’t understand this. How would saying “all is Mind” explain why qualia feel the way they do?
I’m pretty sure that rocks are not conscious. It’s just that the best way I’m able to express what I mean by “consciousness” may end up apparently including rocks, without me really claiming that rocks are conscious like humans are—in the same way that your definition of computation literally includes air, but you’re not really talking about air.
This still doesn’t really specify what your view is. Your view may be that strictly speaking nothing is conscious, but in the looser sense in which we are conscious, anything could be modeled as conscious with equal warrant. This view is a polite version of eliminativism.
Or your view may be that strictly speaking everything is conscious, but in the looser sense in which we prefer to single out human-style consciousness, we can bracket the consciousness of rocks. In that case, I’d want to hear about just what kind of consciousness rocks have. If dust specks are themselves moral patients, this could throw an interesting wrench into the ‘dust specks vs. torture’ debate. This is panpsychism.
Or maybe your view is that rocks are almost conscious, that there’s some sort of Consciousness Gap that the world crosses, Leibniz-style. In that case, I’d want an explanation of what it means for something to almost be conscious, and how you could incrementally build up to Consciousness Proper.
I don’t understand this. How would saying “all is Mind” explain why qualia feel the way they do?
The Hard Problem is not “Give a reductive account of Mind!” It’s “Explain how Mind could arise from a purely non-mental foundation!” Idealism and panpsychism dissolve the problem by denying that the foundation is non-mental; and eliminativism dissolves the problem by denying that there’s such a thing as “Mind” in the first place.
Can you give me an example of how, even in principle, this would work?
In general, I would suggest looking as much as possible at sensory experiences that vary among humans; there’s already enough interesting material there without wondering whether there are even other differences. Can we explain enough interesting things about the difference between normal hearing and perfect pitch without talking about qualia?
Once we’ve done that, are we still interested in discussing qualia in color?
So your argument is “Doing arithmetic requires consciousness; and we can tell that something is doing arithmetic by looking at its hardware; so we can tell with certainty by looking at certain hardware states that the hardware is sentient”?
So your argument is “Doing arithmetic requires consciousness; and we can tell that something is doing arithmetic by looking at its hardware; so we can tell with certainty by looking at certain hardware states that the hardware is sentient”?
Well, it’s certainly possible to do arithmetic without consciousness; I’m pretty sure an abacus isn’t conscious. But there should be a way to look at a clump of matter and tell whether it is conscious or not (at least as well as we can tell the difference between a clump of matter that is alive and a clump of matter that isn’t).
So your argument is “We have explained some things physically before, therefore we can explain consciousness physically”?
It’s a bit stronger than that: we have explained basically everything physically, including every other example of anything that was said to be impossible to explain physically. The only difference between “explaining the difference between conscious matter and non-conscious matter” and “explaining the difference between living and non-living matter” is that we don’t yet know how to do the former.
I think we’re hitting a “one man’s modus ponens is another man’s modus tollens” here. Physicalism implies that the “hard problem of consciousness” is solvable; physicalism is true; therefore the hard problem of consciousness has a solution. That’s the simplest form of my argument.
Basically, I think that the evidence in favor of physicalism is a lot stronger than the evidence that the hard problem of consciousness isn’t solvable, but if you disagree I don’t think I can persuade you otherwise.
No abacus can do arithmetic. An abacus just sits there.
No backhoe can excavate. A backhoe just sits there.
A trained agent can use an abacus to do arithmetic, just as one can use a backhoe to excavate. Can you define “do arithmetic” in such a manner that it is at least as easy to prove that arithmetic has been done as it is to prove that excavation has been done?
I’ve watched mine for several hours, and it hasn’t.
No, you haven’t. (p=0.9)
Have you observed a calculator doing arithmetic? What would it look like?
It could look like an electronic object with a plastic shell that starts with “(23 + 54) / (47 * 12 + 76) + 1093” on the screen and some small amount of time after an apple falls from a tree and hits the “Enter” button some number appears on the screen below the earlier input, beginning with “1093.0”, with some other decimal digits following.
If the above doesn’t qualify as the calculator doing “arithmetic” then you’re just using the word in a way that is not just contrary to common usage but also a terrible way to carve reality.
I didn’t do that immediately prior to posting, but I have watched my calculator for a cumulative period of time exceeding several hours, and it has never done arithmetic. I have done arithmetic using said calculator, but that is precisely the point I was trying to make.
Does every device which looks like that do arithmetic, or only devices which could in principle be used to calculate a large number of outcomes? What about an electronic device that only alternates between displaying “(23 + 54) / (47 * 12 + 76) + 1093” and “1093.1203125” (or “1093.15d285805de42”) and does nothing else?
Does a bucket do arithmetic because the number of pebbles which fall into the bucket, minus the number of pebbles which fall out of the bucket, is equal to the number of pebbles in the bucket? Or does the shepherd do arithmetic using the bucket as a tool?
I didn’t do that immediately prior to posting, but I have watched my calculator for a cumulative period of time exceeding several hours, and it has never done arithmetic. I have done arithmetic using said calculator, but that is precisely the point I was trying to make.
And I would make one of the following claims:
Your calculator has done arithmetic, or
You are using your calculator incorrectly (It’s not a paperweight!) Or
There is a usage of ‘arithmetic’ here that is a highly misleading way to carve reality.
Does every device which looks like that do arithmetic, or only devices which could in principle be used to calculate a large number of outcomes?
In the same way that a cardboard cutout of Decius that has a speech bubble saying “5” over its head would not be said to be doing arithmetic, a device that looks like a calculator but just displays one outcome would not be said to be doing arithmetic.
I’m not sure how ‘large’ the number of outcomes must be, precisely. I can imagine particularly intelligent monkeys or particularly young children being legitimately described as doing rudimentary arithmetic despite being somewhat limited in their capability.
Does a bucket do arithmetic because the number of pebbles which fall into the bucket, minus the number of pebbles which fall out of the bucket, is equal to the number of pebbles in the bucket? Or does the shepherd do arithmetic using the bucket as a tool?
It would seem like in this case we can point to the system and say that system is doing arithmetic. The shepherd (or the shepherd’s boss) has arranged the system so that the arithmetic algorithm is somewhat messily distributed in that way. Perhaps more interesting is the case where the bucket and pebble system has been enhanced with a piece of fabric which is disrupted by passing sheep, knocking in pebbles reliably, one each time. That system can certainly be said to be “counting the damn sheep”, particularly since it so easily generalizes to counting other stuff that walks past.
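The enhanced bucket-and-fabric system above can be sketched as a tiny simulation (class and method names are my own, purely illustrative): each passing sheep disturbs the fabric and knocks exactly one pebble in, so the pebble count tracks the flock without any agent attending to it.

```python
# Minimal sketch of the mechanical sheep counter described above:
# sheep passing the gate knock pebbles into the bucket one at a time;
# sheep returning knock them back out. The bucket's state then tracks
# "sheep currently out of the pen" with no shepherd in the loop.

class SheepCounter:
    def __init__(self):
        self.pebbles = 0

    def sheep_passes(self):
        # fabric disturbed on the way out -> one pebble falls in
        self.pebbles += 1

    def sheep_returns(self):
        # reverse gate on the way back -> one pebble falls out
        self.pebbles -= 1

counter = SheepCounter()
for _ in range(7):
    counter.sheep_passes()    # seven sheep leave the pen
for _ in range(7):
    counter.sheep_returns()   # all seven come back

print(counter.pebbles)  # 0 -> every sheep is accounted for
```

Whether this system "counts" or is merely "used to count" is exactly the dispute in the surrounding comments; the simulation just pins down what the physical behavior is.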
But now allow me to abandon my rather strong notions that “calculators multiply stuff and mechanical sheep counters count sheep”. I’m curious just what the important abstract feature of the universe is that you are trying to highlight as the core feature of ‘arithmetic’. It seems to be something to do with active intent by a generally intelligent agent? So that whenever adding or multiplying is done we need to track down what caused said adding or multiplication to be done, tracing the causal chain back to something that qualifies as having ‘intention’ and say that the ‘arithmetic’ is being done by that agent? (Please correct me if I’m wrong here, this is just my best effort to resolve your usage into something that makes sense to me!)
It’s not a feature of arithmetic, it’s a feature of doing.
I attribute ‘doing’ an action to the user of the tool, not to the tool. It is a rare case in which I attribute an artifact as an agent; if the mechanical sheep counter provided some signal to indicate the number or presence of sheep outside the fence, I would call it a machine that counts sheep. If it was simply a mechanical system that moved pebbles into and out of a bucket, I would say that counting the sheep is done by the person who looks in the bucket.
If a calculator does arithmetic, do the components of the calculator do arithmetic, or only the calculator as a whole? Or is it the larger system of which the calculator is a part that does arithmetic?
I’m still looking for a definition of ‘arithmetic’ which allows me to be as sure about whether arithmetic has been done as I am sure about whether excavation has been done.
Well, you do have to press certain buttons for it to happen. ;) And it looks like voltages changing inside an integrated circuit that lead to changes in a display of some kind. Anyway, if you insist on an example of something that “does arithmetic” without any human intervention whatsoever, I can point to the arithmetic logic unit inside a plugged-in arcade machine in attract mode.
Can you define “do arithmetic” in such a manner that it is at least as easy to prove that arithmetic has been done as it is to prove that excavation has been done?
Is still somewhat important to the discussion. I can’t define arithmetic well enough to determine if it has occurred in all cases, but ‘changes on a display’ is clearly neither necessary nor sufficient.
Well, I’d say that a system is doing arithmetic if it has behavior that looks like it corresponds with the mathematical functions that define arithmetic. In other words, it takes as inputs things that are representations of such things as “2”, “3“, and “+” and returns an output that looks like “6”. In an arithmetic logic unit, the inputs and outputs that represent numbers and operations are voltages. It’s extremely difficult, but it is possible to use a microscopic probe to measure the internal voltages in an integrated circuit as it operates. (Mostly, we know what’s going on inside a chip by far more indirect means, such as the “changes on a screen” you mentioned.)
There is indeed a lot of wiggle room here; a sufficiently complicated scheme can make anything “represent” anything else, but that’s a problem beyond the scope of this comment. ;)
Note that neither an abacus nor a calculator in a vacuum satisfy that definition.
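The behavioral criterion sketched above can be written down directly (a toy sketch with made-up names; the "device" stands in for probed voltages or screen readings): a system does arithmetic if, under a fixed encoding, its observed input/output behavior matches the mathematical functions.

```python
# Hedged sketch of the criterion: check a device's observable I/O behavior
# against real arithmetic under a fixed encoding of numbers and operations.
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def does_arithmetic(device, trials):
    """True iff the device's outputs match arithmetic on every trial."""
    return all(device(a, op, b) == OPS[op](a, b) for a, op, b in trials)

def toy_calculator(a, op, b):
    # stands in for a physical system whose outputs we can observe
    return OPS[op](a, b)

trials = [(2, "+", 3), (7, "*", 6), (10, "-", 4)]
print(does_arithmetic(toy_calculator, trials))  # True

# The wiggle room flagged above is visible here: a lookup table hard-coded
# for exactly these trials would also pass, so "represents" is doing real work.
```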
I’ll allow voltages and mental states to serve as evidence, even if they are not possible to measure directly.
Does a calculator with no labels on the buttons do arithmetic in the same sense that a standard one does?
Does the phrase “2+3=6” do arithmetic? What about the phrase “2*3=6″?
I will accept as obvious that arithmetic occurs in the case of a person using a calculator to perform arithmetic, but not obvious during precisely what periods arithmetic is occurring and not occurring.
Anyway, if you insist on an example of something that “does arithmetic” without any human intervention whatsoever, I can point to the arithmetic logic unit inside a plugged-in arcade machine in attract mode.
… which was plugged in and switched on by, well, a human.
I think the OP is using their own idiosyncratic definition of “doing” to require a conscious agent. This is most common among those confused about free will.
The only difference between “explaining the difference between conscious matter and non-conscious matter” and “explaining the difference between living and non-living matter” is that we don’t yet know how to do the former.
It’s impossible to express a sentence like this after having fully appreciated the nature of the Hard Problem. In fact, whether you’re a dualist or a physicalist, I think a good litmus test for whether you’ve grasped just how hard the Hard Problem is is whether you see how categorically different the vitalism case is from the dualism case. See: Chalmers, Consciousness and its Place in Nature.
Physicalism implies that the “hard problem of consciousness” is solvable; physicalism is true; therefore the hard problem of consciousness has a solution.
Physicalism, plus the unsolvability of the Hard Problem (i.e., the impossibility of successful Type-C Materialism), implies that either Type-B Materialism (‘mysterianism’) or Type-A Materialism (‘eliminativism’) is correct. Type-B Materialism despairs of a solution while for some reason keeping the physicalist faith; Type-A Materialism dissolves the problem rather than solving it on its own terms.
Basically, I think that the evidence in favor of physicalism is a lot stronger than the evidence that the hard problem of consciousness isn’t solvable
The probability of physicalism would need to approach 1 in order for that to be the case.
It’s impossible to express a sentence like this after having fully appreciated the nature of the Hard Problem. In fact, whether you’re a dualist or a physicalist, I think a good litmus test for whether you’ve grasped just how hard the Hard Problem is is whether you see how categorically different the vitalism case is from the dualism case. See: Chalmers, Consciousness and its Place in Nature.
::follows link::
Call me the Type-C Materialist subspecies of eliminativist, then. I think that a sufficient understanding of the brain will make the solution obvious; the reason we don’t have a “functional” explanation of subjective experience is not because the solution doesn’t exist, but that we don’t know how to do it.
Van Gulick (1993) suggests that conceivability arguments are question-begging, since once we have a good explanation of consciousness, zombies and the like will no longer be conceivable.
A list of all the objective, third-person, physical facts about the world does not miss any facts about the world.
What’s your reason for believing this? The standard empiricist argument against zombies is that they don’t constrain anticipated experience.
One problem with this line of thought is that we’ve just thrown out the very concept of “experience” which is the basis of empiricism. The other problem is that the statement is false: the question of whether I will become a zombie tomorrow does constrain my anticipated experiences; specifically, it tells me whether I should anticipate having any.
I’m not a positivist, and I don’t argue like one. I think nearly all the arguments against the possibility of zombies are very silly, and I agree there’s good prima facie evidence for dualism (though I think that in the final analysis the weight of evidence still favors physicalism). Indeed, it’s a good thing I don’t think zombies are impossible, since I think that we are zombies.
What’s your reason for believing this?
My reason is twofold: Copernican, and Occamite.
Copernican reasoning: Most of the universe does not consist of humans, or anything human-like; so it would be very surprising to learn that the most fundamental metaphysical distinction between facts (‘subjective’ v. ‘objective,’ or ‘mental’ v. ‘physical,’ or ‘point-of-view-bearing’ v. ‘point-of-view-lacking,’ or what-have-you) happens to coincide with the parts of the universe that bear human-like things, and the parts that lack human-like things. Are we really that special? Is it really more likely that we would happen to gain perfect, sparkling insight into a secret Hidden Side to reality, than that our brains would misrepresent their own ways of representing themselves to themselves?
Occamite reasoning: One can do away with the Copernican thought by endorsing panpsychism; but this worsens the bite from the principle of parsimony. A universe with two kinds of fundamental fact is less likely, relative to the space of all the models, than one with one kind (or with many, many more than two kinds). It is a striking empirical fact that, consciousness aside, we seem to be able to understand the whole rest of reality with a single grammatical kind of description—the impersonal, ‘objective’ kind, which states a fact without specifying for whom the fact is. The world didn’t need to turn out to be that way, just as it didn’t need to look causally structured. This should give us reason to think that there may not be distinctions between fundamental kinds of facts, rather than that we happen to have lucked out and ended up in one of the universes with very few distinctions of this sort.
Neither of these considerations, of course, is conclusive. But they give us some reason to at least take seriously physicalist hypotheses, and to weigh their theoretical costs and benefits against the dualists’.
One problem with this line of thought is that we’ve just thrown out the very concept of “experience” which is the basis of empiricism.
We’ve thrown out the idea of subjective experience, of pure, ineffable ‘feels,’ of qualia. But we retain any functionally specifiable analog of such experience. In place of qualitative red, we get zombie-red, i.e., causal/functional-red. In place of qualitative knowledge, we get zombie-knowledge.
And since most dualists already accepted the causal/functional/physical process in question (they couldn’t even motivate the zombie argument if they didn’t consider the physical causally adequate), there can be no parsimony argument against the physicalists’ posits; the only argument will have to be a defense of the claim that there is some sort of basic, epistemically infallible acquaintance relation between the contents of experience and (themselves? a Self??...). But making such an argument, without begging the question against eliminativism, is actually quite difficult.
In place of qualitative red, we get zombie-red, i.e., causal/functional-red. In place of qualitative knowledge, we get zombie-knowledge.
At this point, you’re just using the language wrong. “knowledge” refers to what you’re calling “zombie-knowledge”—whenever we point to an instance of knowledge, we mean whatever it is humans are doing. So “humans are zombies” doesn’t work, unless you can point to some sort of non-human non-zombies that somehow gave us zombies the words and concepts of non-zombies.
At this point, you’re just using the language wrong.
That assumes a determinate answer to the question ‘what’s the right way to use language?’ in this case. But the facts on the ground may underdetermine whether it’s ‘right’ to treat definitions more ostensively (i.e., if Berkeley turns out to be right, then when I say ‘tree’ I’m picking out an image in my mind, not a non-existent material plant Out There), or ‘right’ to treat definitions as embedded in a theory, an interpretation of the data (i.e., Berkeley doesn’t really believe in trees as we do, he just believes in ‘tree-images’ and misleadingly calls those ‘trees’). Either of these can be a legitimate way that linguistic communities change over time; sometimes we keep a term’s sense fixed and abandon it if the facts aren’t as we thought, whereas sometimes we’re more intensionally wishy-washy and allow terms to get pragmatically redefined to fit snugly into the shiny new model. Often it depends on how quickly, and how radically, our view of the world changes.
(Though actually, qualia may raise a serious problem for ostension-focused reference-fixing: It’s not clear what we’re actually ostending, if we think we’re picking out phenomenal properties but those properties are not only misconstrued, but strictly non-existent. At least verbal definitions have the advantage that we can relatively straightforwardly translate the terms involved into our new theory.)
Moreover, this assumes that you know how I’m using the language. I haven’t said whether I think ‘knowledge’ in contemporary English denotes q-knowledge (i.e., knowledge including qualia) or z-knowledge (i.e., causal/functional/behavioral knowledge, without any appeal to qualia). I think it’s perfectly plausible that it refers to q-knowledge, hence I hedge my bets when I need to speak more precisely and start introducing ‘zombified’ terms lest semantic disputes interfere in the discussion of substance. But I’m neutral both on the descriptive question of what we mean by mental terms (how ‘theory-neutral’ they really are), and on the normative question of what we ought to mean by mental terms (how ‘theory-neutral’ they should be). I’m an eliminativist on the substantive questions; on the non-substantive question of whether we should be revisionist or traditionalist in our choice of faux-mental terminology, I’m largely indifferent, as long as we’re clear and honest in whatever semantic convention we adopt.
Copernican reasoning: Most of the universe does not consist of humans, or anything human-like; so it would be very surprising to learn that the most fundamental metaphysical distinction between facts (‘subjective’ v. ‘objective,’ or ‘mental’ v. ‘physical,’ or ‘point-of-view-bearing’ v. ‘point-of-view-lacking,’ or what-have-you) happens to coincide with the parts of the universe that bear human-like things, and the parts that lack human-like things. Are we really that special? Is it really more likely that we would happen to gain perfect, sparkling insight into a secret Hidden Side to reality, than that our brains would misrepresent their own ways of representing themselves to themselves?
It’s not surprising that a system should have special insight into itself. If a type of system had special insight into some other, unrelated, type of system, then that would be peculiar. If every system had insights (panpsychism), that would also be peculiar. But it is not unexpected that a system capable of having insights should have special insight into itself.
Occamite reasoning: One can do away with the Copernican thought by endorsing panpsychism; but this worsens the bite from the principle of parsimony. A universe with two kinds of fundamental fact is less likely, relative to the space of all the models, than one with one kind (or with many, many more than two kinds).
That is not obvious. If the two kinds of stuff (or rather property) are fine-grainedly picked from some space of stuffs (or rather properties), then that would be more unlikely than just one being picked. On the other hand, if you have just one coarse-grained kind of stuff, and there is just one other coarse-grained kind of stuff, such that the two together cover the space of stuffs, then it is a mystery why you do not have both, i.e., every possible kind of stuff. A concrete example is the predominance of matter over antimatter in cosmology, which is widely interpreted as needing an explanation.

(It’s all about information and probability. Adding one fine-grained kind of stuff to another means that two low probabilities get multiplied together, leading to a very low one that needs a lot of explaining. Having every logically possible kind of stuff has a high probability, because we don’t need a lot of information to pinpoint the universe.)
So... if you think of Mind as some very specific thing, the Occamite objection goes through. However, modern dualists are happy that most aspects of consciousness have physical explanations. Chalmers-style dualism is about explaining qualia, phenomenal qualities. The quantitative properties (Chalmers calls them structural-functional) of physicalism and intrinsically qualitative properties form a dyad that covers property-space in the same way that the matter-antimatter dyad covers stuff-space. In this way, modern dualism can avoid the Copernican Objection.
It is a striking empirical fact that, consciousness aside, we seem to be able to understand the whole rest of reality with a single grammatical kind of description—the impersonal, ‘objective’ kind, which states a fact without specifying for whom the fact is.
(Here comes the shift from properties to aspects).
Although it does specify that the fact is outside me. If physical and mental properties are both intrinsic to the world, then the physical properties seem to be doing most of the work, and the mental ones seem redundant. However, if objectivity is seen as a perspective, i.e. an external perspective, it is no longer an empirical fact. It is then a tautology that the external world will seem, from the outside, to be objective, because objectivity just is the view from outside. And subjectivity, likewise, is the view from inside, and not any extra stuff, just another way of looking at the same stuff. There are, in any case, a set of relations between a thing-and-itself, and another set between a thing-and-other-things. Nothing novel is being introduced by noting the existence of inner and outer aspects. The novel content of the Dual Aspect solution lies in identifying the Objective Perspective with quantities (broadly including structures and functions) and the Subjective Perspective with qualities, so that Subjective Qualities, qualia, are just how neuronal processing seems from the inside. This point needs justification, which I believe I have, but will not mention here.
As far as physicalism is concerned: physicalism has many meanings. Dual aspect theory is incompatible with the idea that the world is intrinsically objective and physical, since these are not intrinsic characteristics, according to DAT. DAT is often and rightly associated with neutral monism, the idea that the world is in itself neither mental nor physical, neither objective nor subjective. However, this in fact changes little for most physicalists: it does not suggest that there are any ghostly substances or undetectable properties. Nothing changes methodologically; naturalism, interpreted as the investigation of the world from the objective perspective, can continue. The Strong Physicalist claim that a complete physical description of the world is a complete description tout court becomes problematic. Although such a description is a description of everything, it nonetheless leaves out the subjective perspectives embedded in it, which cannot be recovered, just as Mary the superscientist cannot recover the subjective sensation of Red from the information she has. I believe that a correct understanding of the nature of information shows that “complete information” is a logically incoherent notion in any case, so that DAT does not entail the loss of anything that was ever available in that respect. Furthermore, the absence of complete information has little practical upshot, because of the infeasibility of constructing such a complete description in the first place. All in all, DAT means physicalism is technically false in a way that changes little in practice.

The flipside of DAT is Neutral Monism. NM is an inherently attractive metaphysics, because it means that the universe has no overall characteristic left dangling in need of an explanation—no “why physical, rather than mental?”.
As far as causality is concerned, the fact that a system’s physical or objective aspects are enough to predict its behaviour does not mean that its subjective aspects are an unnecessary multiplication of entities, since they are only a different perspective on the same reality. Causal powers are vested in the neutral reality of which the subjective and the objective are just aspects. The mental is neither causal in itself nor causally idle in itself; it is rather a perspective on what is causally empowered. There are no grounds for saying that either set of aspects is exclusively responsible for the causal behaviour of the system, since each is only a perspective on the system.
I have avoided the Copernican problem, special pleading for human consciousness, by pinning mentality, and particularly subjectivity, to a system’s internal and self-reflexive relations. The counterpart to excessive anthropocentrism is insufficient anthropocentrism, i.e. free-wheeling panpsychism, or the Thinking Rock problem. I believe I have a way of showing that it is logically inevitable that simple entities cannot have subjective states that are significantly different from their objective descriptions.
Nothing novel is being introduced by noting the existence of inner and outer aspects.
I’m not sure I understand what an ‘aspect’ is, in your model. I can understand a single thing having two ‘aspects’ in the sense of having two different sets of properties accessible in different viewing conditions; but you seem to object to the idea of construing mentality and physicality as distinct property classes.
I could also understand a single property or property-class having two ‘aspects’ if the property/class itself were being associated with two distinct sets of second-order properties. Perhaps “being the color of chlorophyll” and “being the color of emeralds” are two different aspects of the single property green. Similarly, then, perhaps phenomenal properties and physical properties are just two different second-order construals of the same ultimately physical, or ultimately ideal, or perhaps ultimately neutral (i.e., neither-phenomenal-nor-physical), properties.
I call the option I present in my first paragraph Property Dualism, and the option I present in my second paragraph Multi-Label Monism. (Note that these may be very different from what you mean by ‘property dualism’ and ‘neutral monism;’ some people who call themselves ‘neutral monists’ sound more to me like ‘neutral trialists,’ in that they allow mental and physical properties into their ontology in addition to some neutral substrate. True monism, whether neutral or idealistic or physicalistic, should be eliminative or reductive, not ampliative.) Is Dual Aspect Theory an intelligible third option, distinct from Property Dualism and Multi-Label Monism as I’ve distinguished them? And if so, how can I make sense of it? Can you coax me out of my parochial object/property-centric view, without just confusing me?
I’m also not sure I understand how reflexive epistemic relations work. Epistemic relations are ordinarily causal. How does reflexive causality work? And how do these ‘intrinsic’ properties causally interact with the extrinsic ones? How, for instance, does positing that Mary’s brain has an intrinsic ‘inner dimension’ of phenomenal redness Behind The Scenes somewhere help us deterministically explain why Mary’s extrinsic brain evolves into a functional state of surprise when she sees a red rose for the first time? What would the dynamics of a particle or node with interactively evolving intrinsic and extrinsic properties look like?
A third problem: You distinguish ‘aspects’ by saying that the ‘subjective perspective’ differs from the ‘objective perspective.’ But this also doesn’t help, because it sounds anthropocentric. Worse, it sounds mentalistic; I understand the mental-physical distinction precisely inasmuch as I understand the mental as perspectival, and the physical as nonperspectival. If the physical is itself ‘just a matter of perspective,’ then do we end up with a dualistic or monistic theory, or do we instead end up with a Berkeleian idealism? I assume not, and that you were speaking loosely when you mentioned ‘perspectives;’ but this is important, because what individuates ‘perspectives’ is precisely what lends content to this ‘Dual-Aspect’ view.
All in all, DAT means physicalism is technically false in a way that changes little in practice.
Yes, I didn’t consider the ‘it’s not physicalism!!’ objection very powerful to begin with. Parsimony is important, but ‘physicalism’ is not a core methodological principle, and it’s not even altogether clear what constraints physicalism entails.
It’s not surprising that a system should have special insight into itself.
It’s not surprising that an information-processing system able to create representations of its own states would be able to represent a lot of useful facts about its internal states. It is surprising if such a system is able to infallibly represent its own states to itself; and it is astounding if such a system is able to self-represent states that a third-person observer, dissecting the objective physical dynamics of the system, could never in principle fully discover from an independent vantage point. So it’s really a question of how ‘special’ we’re talking.
If a type of system had special insight into some other, unrelated, type of system, then that would be peculiar.
I’m not clear on what you mean. ‘Insight’ is, presumably, a causal relation between some representational state and the thing represented. I think I can more easily understand a system’s having ‘insight’ into something else, since it’s easier for me to model veridical other-representation than veridical self-representation. (The former, for instance, leads to no immediate problems with recursion.) But perhaps you mean something special by ‘insight.’ Perhaps by your lights, I’m just talking about outsight?
If every system had insights (panpsychism), that would also be peculiar.
If some systems have an automatic ability to non-causally ‘self-grasp’ themselves, by what physical mechanism would only some systems have this capacity, and not all?
if you have just one coarse-grained kind of stuff, and there is just one other coarse-grained kind of stuff, such that the two together cover the space of stuffs, then it is a mystery why you do not have both, i.e. every possible kind of stuff. A concrete example is the predominance of matter over antimatter in cosmology, which is widely interpreted as needing an explanation.
If you could define a thingspace that meaningfully distinguishes between and admits of both ‘subjective’ and ‘objective’ facts (or properties, or events, or states, or thingies...), and that non-question-beggingly establishes the impossibility or incoherence of any other fact-classifications of any analogous sorts, then that would be very interesting. But I think most people would resist the claim that this is the one unique parameter of this kind (whatever kind that is, exactly...) that one could imagine varying over models; and if this parameter is set to value ‘2,’ then it remains an open question why the many other strangely metaphysical or strangely anthropocentric parameters seem set to ‘1’ (or to ‘0,’ as the case may be).
But this is all very abstract. It strains comprehension just to entertain a subjective/objective distinction. To try to rigorously prove that we can open the door to this variable without allowing any other Aberrant Fundamental Categorical Variables into the clubhouse seems a little quixotic to me. But I’d be interested to see an attempt at this.
A concrete example is the predominance of matter over antimatter in cosmology, which is widely interpreted as needing an explanation.
Sure, though there’s a very important disparity between observed asymmetries between actual categories of things, and imagined asymmetries between an actual category and a purely hypothetical one (or, in this case, a category with a disputed existence). In principle the reasoning should work the same, but in practice our confidence in reasoning coherently (much less accurately!) about highly abstract and possibly-not-instantiated concepts should be extremely low, given our track record.
The quantitative properties (Chalmers calls them structural-functional) of physicalism and intrinsically qualitative properties form a dyad that covers property-space
How do we know that? If we were zombies, prima facie it seems as though we’d have no way of knowing about, or even positing in a coherent formal framework, phenomenal properties. But in that case, any analogous possible-but-not-instantiated-property-kinds that would expand the dyad into a polyad would plausibly be unknowable to us. (We’re assuming for the moment that we do have epistemic access to phenomenal and physical properties.) Perhaps all carbon atoms, for instance, have unobservable ‘carbonomenal properties,’ (Cs) which are related to phenomenal and physical properties (P1s and P2s) in the same basic way that P1s are related to P2s and Cs, and that P2s are related to P1s and Cs. Does this make sense? Does it make sense to deny this possibility (which requires both that it be intelligible and that we be able to evaluate its probability with any confidence), and thereby preserve the dyad? I am bemused.
1) If you embrace SSA, then you being you should be more likely on humans being important than on panpsychism, yes? (You may of course have good reasons for preferring SIA.)
2) Suppose again redundantly dual panpsychism. Is there any a priori reason (at this level of metaphysical fancy) to rule out that experiences could causally interact with one another in a way that is isomorphic to mechanical interactions? Then we have a sort of idealist field describable by physics, perfectly monist. Or is this an illegitimate trick?
(Full disclosure: I’d consider myself a cautious physicalist as well, although I’d say psi research constitutes a bigger portion of my doubt than the hard problem.)
The theory you propose in (2) seems close to Neutral Monism. It has fallen into disrepute (and near oblivion) but was the preferred solution to the mind-body problem of many significant philosophers of the late 19th-early 20th, in particular of Bertrand Russell (for a long period). A quote from Russell:
We shall seek to construct a metaphysics of matter which shall make the gulf between physics and perception as small, and the inferences involved in the causal theory of perception as little dubious, as possible. We do not want the percept to appear mysteriously at the end of a causal chain composed of events of a totally different nature; if we can construct a theory of the physical world which makes its events continuous with perception, we have improved the metaphysical status of physics, even if we cannot prove more than that our theory is possible.
Ooo! Seldom do I get to hear someone else voice my version of idealism. I still have a lot of thinking to do on this, but so far it seems to me perfectly legitimate. An idealism isomorphic to mechanical interactions dissolves the Hard Problem of consciousness by denying a premise. It also does so with more elegance than reductionism since it doesn’t force us through that series of flaming hoops that orbits and (maybe) eventually collapses into dualism.
This seems more likely to me so far than all the alternatives, so I guess that means I believe it, but not with a great deal of certainty. So far every objection I’ve heard or been able to imagine has amounted to something like, “But but but the world’s just got to be made out of STUFF!!!” But I’m certainly not operating under the assumption that these are the best possible objections. I’d love to see what happens with whatever you’ve got to throw at my position.
Occamite reasoning: One can do away with the Copernican thought by endorsing panpsychism; but this worsens the bite from the principle of parsimony. A universe with two kinds of fundamental fact is less likely, relative to the space of all the models, than one with one kind (or with many, many more than two kinds). It is a striking empirical fact that, consciousness aside, we seem to be able to understand the whole rest of reality with a single grammatical kind of description—the impersonal, ‘objective’ kind, which states a fact without specifying for whom the fact is. The world didn’t need to turn out to be that way, just as it didn’t need to look causally structured. This should give us reason to think that there may not be distinctions between fundamental kinds of facts, rather than that we happen to have lucked out and ended up in one of the universes with very few distinctions of this sort.
The problem is that we already have two kinds of fundamental facts (and I would argue we need more). Consider Eliezer’s use of “magical reality fluid” in this post. If you look at the context, it’s clear that he’s trying to ask whether the inhabitants of the non-causally simulated universes possess qualia without having to admit he cares about qualia.
Eliezer thinks we’ll someday be able to reduce or eliminate Magical Reality Fluid from our model, and I know of no argument (analogous to the Hard Problem for phenomenal properties) that would preclude this possibility without invoking qualia themselves. Personally, I’m an agnostic about Many Worlds, so I’m even less inclined than EY to think that we need Magical Reality Fluid to recover the Born probabilities.
I also don’t reify logical constructs, so I don’t believe in a bonus category of Abstract Thingies. I’m about as monistic as physicalists come. Mathematical platonists and otherwise non-monistic Serious Scientifically Minded People, I think, do have much better reason to adopt dualism than I do, since the inductive argument against Bonus Fundamental Categories is weak for them.
Eliezer thinks we’ll someday be able to reduce or eliminate Magical Reality Fluid from our model, and I know of no argument (analogous to the Hard Problem for phenomenal properties) that would preclude this possibility without invoking qualia themselves.
I could define the Hard Problem of Reality, which really is just an indirect way of talking about the Hard Problem of Consciousness.
Personally, I’m an agnostic about Many Worlds, so I’m even less inclined than EY to think that we need Magical Reality Fluid to recover the Born probabilities.
As Eliezer discusses in the post, Reality Fluid isn’t just for Many Worlds, it also relates to questions about stimulation.
As Eliezer discusses in the post, Reality Fluid isn’t just for Many Worlds, it also relates to questions about [simulation].
Only as a side-effect. In all cases, I suspect it’s an idle distraction; simulation, qualia, and born-probability models do have implications for each other, but it’s unlikely that combining three tough problems into a single complicated-and-tough problem will help gin up any solutions here.
Here’s my argument for why you should.
Give me an example of some logical constructs you think I should believe in. Understand that by ‘logical construct’ I mean ‘causally inert, nonspatiotemporal object.’ I’m happy to sort-of-reify spatiotemporally instantiated properties, including relational properties. For instance, a simple reason why I consistently infer that 2 + 2 = 4 is that I live in a universe with multiple contiguous spacetime regions; spacetime regions are similar to each other, hence they instantiate the same relational properties, and this makes it possible to juxtapose objects and reason with these recurrent relations (like ‘being two arbitrary temporal intervals before’ or ‘being two arbitrary spatial intervals to the left of’).
“Qualia” is something our brains do. We don’t know how our brains do it, but it’s pretty clear by now that our brains are indeed what does it.
That’s about 10% of a solution. The “how” is enough to keep most contemporary dualism afloat.
Aren’t the details of the “how” more a question of science than philosophy?
If science had them, there would be no mileage in the philosophical project, any more than there is currently mileage in trying to found dualism on the basis that matter can’t think.
There is mileage in philosophy? Says you. Are you talking in the context of the general population of a country? Of “intellectuals”? Your mates?
If philosophy has mileage (compared to science) then so does any other religion. I guess that’s all dualism is though.
Eh?
I just went to reply you but after reading back on what was said I’m seeing a different context. My stupid comment was about popularity not about usefulness. I was rambling about general public opinion on belief systems not what the topic was really about- if philosophy could move something forward.
We have prima facie reason to accept both of these claims:
A list of all the objective, third-person, physical facts about the world does not miss any facts about the world.
Which specific qualia I’m experiencing is functionally/causally underdetermined; i.e., there doesn’t seem even in principle to be any physically exhaustive reason redness feels exactly as it does, as opposed to feeling like some alien color.
1 is physicalism; 2 is the hard problem. Giving up 1 means endorsing dualism or idealism. Giving up 2 means endorsing reductive or eliminative physicalism. All of these options are unpalatable. Reductionism without eliminating anything seems off the table, since the conceivability of zombies seems likely to be here to stay, to remain as an ‘explanatory gap.’ But eliminativism about qualia means completely overturning our assumption that whatever’s going on when we speak of ‘consciousness’ involves apprehending certain facts about mind. I think this last option is the least terrible out of a set of extremely terrible options; but I don’t think the eliminative answer to this problem is obvious, and I don’t think people who endorse other solutions are automatically crazy or unreasonable.
That said, the problem is in some ways just academic. Very few dualists these days think that mind isn’t perfectly causally correlated with matter. (They might think this correlation is an inexplicable brute fact, but fact it remains.) So none of the important work Eliezer is doing here depends on monism. Monism just simplifies matters a great deal, since it eliminates the worry that the metaphysical gap might re-introduce an epistemic gap into our model.
If I knew how the brain worked in sufficient detail, I think I’d be able to explain why this was wrong; I’d have a theory that would predict what qualia a brain experiences based on its structure (or whatever). No, I don’t know what the theory is, but I’m pretty confident that there is one.
Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are experiences causally determined by non-experiences. How would examining anything about the non-experiences tell us that the experiences exist, or what particular way those experiences feel?
Taboo experiences.
It sounds like you’re asking me to do what I just asked you to do. I don’t know what experiences are, except by listing synonyms or by acts of brute ostension — hey, check out that pain! look at that splotch of redness! — so if I could taboo them away, it would mean I’d already solved the hard problem. This may be an error mode of ‘tabooing’ itself; that decision procedure, applied to our most primitive and generic categories (try tabooing ‘existence’ or ‘feature’), seems to either yield uninformative lists of examples, implausible eliminativisms (what would a world without experience, without existence, or without features, look like?), or circular definitions.
But what happens when we try to taboo a term is just more introspective data; it doesn’t give us any infallible decision procedure, on its own, for what conclusion we should draw from problem cases. To assert ‘if you can’t taboo it, then it’s meaningless!’, for example, is itself to commit yourself to a highly speculative philosophical and semantic hypothesis.
Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are computations causally determined by non-computations. How would examining anything about the non-computations tell us that the computations exist, or what particular functions those computations are computing?
My initial response is that any physical interaction in which the state of one thing differentially tracks the states of another can be modeled as a computation. Is your suggestion that an analogous response would solve the Hard Problem, i.e., are you endorsing panpsychism (‘everything is literally conscious’)?
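To make the “differential tracking” reading concrete, here is a minimal toy sketch (all names and numbers are hypothetical, chosen only for illustration): one system’s state tracks another’s through a fixed response curve, and whether that counts as “computing” something depends on the interpretation we impose on the states.

```python
# Toy model: a thermometer's reading differentially tracks the ambient
# temperature through an arbitrary physical response curve.

def thermometer(ambient_temp):
    """Mercury column height as a fixed linear function of temperature."""
    return 0.5 * ambient_temp + 10.0

# Under one interpretation the device "computes" f(x) = 0.5x + 10; under
# another (decoding heights back into temperatures) it "computes" identity.
def decode(height):
    return (height - 10.0) / 0.5

for t in [0.0, 20.0, 37.0]:
    # The tracking is faithful, so the original state is recoverable:
    assert abs(decode(thermometer(t)) - t) < 1e-9
```

The point of the sketch is only that the computation is relative to a chosen interpretation scheme; the physical tracking relation by itself is the same in both readings.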
Sorry, bad example… Let’s try again.
Can you give me an example of how, even in principle, this would work? Construct a toy universe in which there are living things causally determined by non-living things? How would examining anything about the non-living things tell us that the living things exist, or what particular way those living things are alive?
“Explain how consciousness arises from non-conscious matter” doesn’t seem any more of an impossible problem than “Explain how life arises from non-living matter”.
We can define and analyze ‘life’ without any reference to life: As high-fidelity self-replicating macromolecules that interact with their environments to assemble and direct highly responsive cellular containers around themselves. There doesn’t seem to be anything missing from our ordinary notion of life here; or anything that is missing could be easily added by sketching out more physical details.
What might a purely physical definition of consciousness that made no appeal to mental concepts look like? How could we generate a first-person facts from a complex of third-person facts?
What you described as computation could apply to literally any two things in the same causal universe. But you meant two things that track each other much more tightly than usual. It may be that a rock is literally conscious, but if so, then not very much so. So little that it really does not matter at all. Humans are much more conscious because they reflect the world much more, reflect themselves much more, and [insert solution to Hard Problem here].
I dunno. I think if rocks are even a little bit conscious, that’s pretty freaky, and I’d like to know about it. I’d certainly like to hear more about what they’re conscious of. Are they happy? Can I alter them in some way that will maximize their experiential well-being? Given how many more rocks there are than humans, it could end up being the case that our moral algorithm is dominated by rearranging pebbles on the beach.
Hah. Luckily, true panpsychism dissolves the Hard Problem. You don’t need to account for mind in terms of non-mind, because there isn’t any non-mind to be found.
I meant, I’m pretty sure that rocks are not conscious. It’s just that the best way I’m able to express what I mean by “consciousness” may end up apparently including rocks, without me really claiming that rocks are conscious like humans are—in the same way that your definition of computation literally includes air, but you’re not really talking about air.
I don’t understand this. How would saying “all is Mind” explain why qualia feel the way they do?
This still doesn’t really specify what your view is. Your view may be that strictly speaking nothing is conscious, but in the looser sense in which we are conscious, anything could be modeled as conscious with equal warrant. This view is a polite version of eliminativism.
Or your view may be that strictly speaking everything is conscious, but in the looser sense in which we prefer to single out human-style consciousness, we can bracket the consciousness of rocks. In that case, I’d want to hear about just what kind of consciousness rocks have. If dust specks are themselves moral patients, this could throw an interesting wrench into the ‘dust specks vs. torture’ debate. This is panpsychism.
Or maybe your view is that rocks are almost conscious, that there’s some sort of Consciousness Gap that the world crosses, Leibniz-style. In that case, I’d want an explanation of what it means for something to almost be conscious, and how you could incrementally build up to Consciousness Proper.
The Hard Problem is not “Give a reductive account of Mind!” It’s “Explain how Mind could arise from a purely non-mental foundation!” Idealism and panpsychism dissolve the problem by denying that the foundation is non-mental; and eliminativism dissolves the problem by denying that there’s such a thing as “Mind” in the first place.
In general, I would suggest looking at sensory experiences that vary among humans; there’s already enough interesting material there without wondering if there are even other differences. Can we explain enough interesting things about the difference between normal hearing and pitch perfect hearing without talking about qualia?
Once we’ve done that, are we still interested in discussing qualia in color?
http://lesswrong.com/lw/p5/brain_breakthrough_its_made_of_neurons/
http://lesswrong.com/lw/p3/angry_atoms/
So your argument is “Doing arithmetic requires consciousness; and we can tell that something is doing arithmetic by looking at its hardware; so we can tell with certainty by looking at certain hardware states that the hardware is sentient”?
So your argument is “We have explained some things physically before, therefore we can explain consciousness physically”?
So your argument is “Mental states have physical causes, so they must be identical with certain brain-states”?
Set aside whether any of these would satisfy a dualist or agnostic; should they satisfy one?
Well, it’s certainly possible to do arithmetic without consciousness; I’m pretty sure an abacus isn’t conscious. But there should be a way to look at a clump of matter and tell whether it is conscious or not (at least as well as we can tell the difference between a clump of matter that is alive and a clump of matter that isn’t).
It’s a bit stronger than that: we have explained basically everything physically, including every other example of anything that was said to be impossible to explain physically. The only difference between “explaining the difference between conscious matter and non-conscious matter” and “explaining the difference between living and non-living matter” is that we don’t yet know how to do the former.
I think we’re hitting a “one man’s modus ponens is another man’s modus tollens” here. Physicalism implies that the “hard problem of consciousness” is solvable; physicalism is true; therefore the hard problem of consciousness has a solution. That’s the simplest form of my argument.
Basically, I think that the evidence in favor of physicalism is a lot stronger than the evidence that the hard problem of consciousness isn’t solvable, but if you disagree I don’t think I can persuade you otherwise.
No abacus can do arithmetic. An abacus just sits there.
No backhoe can excavate. A backhoe just sits there.
A trained agent can use an abacus to do arithmetic, just as one can use a backhoe to excavate. Can you define “do arithmetic” in such a manner that it is at least as easy to prove that arithmetic has been done as it is to prove that excavation has been done?
Does a calculator do arithmetic?
I’ve watched mine for several hours, and it hasn’t. Have you observed a calculator doing arithmetic? What would it look like?
No, you haven’t. (p=0.9)
It could look like an electronic object with a plastic shell that starts with “(23 + 54) / (47 * 12 + 76) + 1093” on the screen and some small amount of time after an apple falls from a tree and hits the “Enter” button some number appears on the screen below the earlier input, beginning with “1093.0”, with some other decimal digits following.
If the above doesn’t qualify as the calculator doing “arithmetic” then you’re just using the word in a way that is not just contrary to common usage but also a terrible way to carve reality.
Upvoted for this alone.
I didn’t do that immediately prior to posting, but I have watched my calculator for a cumulative period of time exceeding several hours, and it has never done arithmetic. I have done arithmetic using said calculator, but that is precisely the point I was trying to make.
Does every device which looks like that do arithmetic, or only devices which could in principle be used to calculate a large number of outcomes? What about an electronic device that only alternates between displaying “(23 + 54) / (47 * 12 + 76) + 1093” and “1093.1203125” (or “1093.15d285805de42”) and does nothing else?
Does a bucket do arithmetic because the number of pebbles which fall into the bucket, minus the number of pebbles which fall out of the bucket, is equal to the number of pebbles in the bucket? Or does the shepherd do arithmetic using the bucket as a tool?
And I would make one of the following claims:
Your calculator has done arithmetic, or
You are using your calculator incorrectly (It’s not a paperweight!) Or
There is a usage of ‘arithmetic’ here that is a highly misleading way to carve reality.
In the same way that a cardboard cutout of Decius that has a speech bubble saying “5” over its head would not be said to be doing arithmetic, a device that looks like a calculator but just displays one outcome would not be said to be doing arithmetic.
I’m not sure how ‘large’ the number of outcomes must be, precisely. I can imagine particularly intelligent monkeys or particularly young children being legitimately described as doing rudimentary arithmetic despite being somewhat limited in their capability.
It would seem like in this case we can point to the system and say that system is doing arithmetic. The shepherd (or the shepherd’s boss) has arranged the system so that the arithmetic algorithm is somewhat messily distributed in that way. Perhaps more interesting is the case where the bucket and pebble system has been enhanced with a piece of fabric which is disrupted by passing sheep, knocking in pebbles reliably, one each time. That system can certainly be said to be “counting the damn sheep”, particularly since it so easily generalizes to counting other stuff that walks past.
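The pebble-and-bucket system described above can be put as a trivial program. A minimal sketch, purely illustrative (the `Bucket` class and method names are my own, not anything from the discussion):

```python
class Bucket:
    """Tracks a count purely via pebbles added and removed."""

    def __init__(self):
        self.pebbles = 0

    def pebble_in(self):
        # e.g., a passing sheep disturbs the fabric, knocking a pebble in
        self.pebbles += 1

    def pebble_out(self):
        # e.g., a returning sheep prompts a pebble to be removed
        self.pebbles -= 1


bucket = Bucket()
for _ in range(5):      # five sheep leave the fold
    bucket.pebble_in()
for _ in range(2):      # two sheep come back
    bucket.pebble_out()

# Pebbles in, minus pebbles out, equals pebbles in the bucket.
assert bucket.pebbles == 3
```

The interesting question the thread is circling is exactly where in this setup the “counting” happens: in the fabric-and-bucket mechanism, in the shepherd who reads the bucket, or in the system as a whole.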
But now allow me to abandon my rather strong notions that “calculators multiply stuff and mechanical sheep counters count sheep”. I’m curious just what the important abstract feature of the universe is that you are trying to highlight as the core feature of ‘arithmetic’. It seems to be something to do with active intent by a generally intelligent agent? So that whenever adding or multiplying is done we need to track down what caused said adding or multiplication to be done, tracing the causal chain back to something that qualifies as having ‘intention’ and say that the ‘arithmetic’ is being done by that agent? (Please correct me if I’m wrong here, this is just my best effort to resolve your usage into something that makes sense to me!)
It’s not a feature of arithmetic, it’s a feature of doing.
I attribute ‘doing’ an action to the user of the tool, not to the tool. It is a rare case in which I attribute an artifact as an agent; if the mechanical sheep counter provided some signal to indicate the number or presence of sheep outside the fence, I would call it a machine that counts sheep. If it was simply a mechanical system that moved pebbles into and out of a bucket, I would say that counting the sheep is done by the person who looks in the bucket.
If a calculator does arithmetic, do the components of the calculator do arithmetic, or only the calculator as a whole? Or is it the larger system of which the calculator is a part that does arithmetic?
I’m still looking for a definition of ‘arithmetic’ which allows me to be as sure about whether arithmetic has been done as I am sure about whether excavation has been done.
Well, you do have to press certain buttons for it to happen. ;) And it looks like voltages changing inside an integrated circuit that lead to changes in a display of some kind. Anyway, if you insist on an example of something that “does arithmetic” without any human intervention whatsoever, I can point to the arithmetic logic unit inside a plugged-in arcade machine in attract mode.
And if you don’t want to call what an arithmetic logic unit does when it takes a set of inputs and returns a set of outputs “doing arithmetic”, I’d have to respond that we’re now arguing about whether trees that fall in a forest with no people make a sound and aren’t going to get anywhere. :P
Well, yeah. My question:
Is still somewhat important to the discussion. I can’t define arithmetic well enough to determine if it has occurred in all cases, but ‘changes on a display’ is clearly neither necessary nor sufficient.
Well, I’d say that a system is doing arithmetic if it has behavior that looks like it corresponds with the mathematical functions that define arithmetic. In other words, it takes as inputs things that are representations of such things as “2”, “3“, and “+” and returns an output that looks like “6”. In an arithmetic logic unit, the inputs and outputs that represent numbers and operations are voltages. It’s extremely difficult, but it is possible to use a microscopic probe to measure the internal voltages in an integrated circuit as it operates. (Mostly, we know what’s going on inside a chip by far more indirect means, such as the “changes on a screen” you mentioned.)
There is indeed a lot of wiggle room here; a sufficiently complicated scheme can make anything “represent” anything else, but that’s a problem beyond the scope of this comment. ;)
edit: I’m an idiot, 2 + 3 = 5. :(
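That functional definition (a system does arithmetic if its input-output behavior corresponds to the arithmetic functions, whatever the representations happen to be) can be sketched concretely. A toy illustration under my own assumptions: the “voltages” are just tuples of bits, and `alu`, `encode`, and `decode` are hypothetical names:

```python
# Operations whose behavior the system must mirror.
OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def encode(n, width=8):
    """Represent a number as a tuple of 'voltage levels' (its bits)."""
    return tuple((n >> i) & 1 for i in reversed(range(width)))

def decode(bits):
    """Read a number back off the representation."""
    out = 0
    for b in bits:
        out = (out << 1) | b
    return out

def alu(bits_a, op, bits_b):
    """The 'system': maps representations of inputs to a representation
    of the arithmetic result, whether or not anyone is watching."""
    return encode(OPS[op](decode(bits_a), decode(bits_b)))

# The system's behavior corresponds to the function '+':
assert decode(alu(encode(2), "+", encode(3))) == 5
```

The “wiggle room” remark above is the real catch: with a sufficiently contrived `encode`/`decode` pair, almost any physical process can be made to “represent” arithmetic, which is why the definition needs the representations to be reasonably natural.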
Note that neither an abacus nor a calculator in a vacuum satisfies that definition.
I’ll allow voltages and mental states to serve as evidence, even if they are not possible to measure directly.
Does a calculator with no labels on the buttons do arithmetic in the same sense that a standard one does?
Does the phrase “2+3=6” do arithmetic? What about the phrase “2*3=6”?
I will accept as obvious that arithmetic occurs in the case of a person using a calculator to perform arithmetic, but not obvious during precisely what periods arithmetic is occurring and not occurring.
… which was plugged in and switched on by, well, a human.
I think the OP is using their own idiosyncratic definition of “doing” to require a conscious agent. This is more usual among those confused by free will.
It’s impossible to express a sentence like this after having fully appreciated the nature of the Hard Problem. In fact, whether you’re a dualist or a physicalist, I think a good litmus test for whether you’ve grasped just how hard the Hard Problem is is whether you see how categorically different the vitalism case is from the dualism case. See: Chalmers, Consciousness and its Place in Nature.
Physicalism, plus the unsolvability of the Hard Problem (i.e., the impossibility of successful Type-C Materialism), implies that either Type-B Materialism (‘mysterianism’) or Type-A Materialism (‘eliminativism’) is correct. Type-B Materialism despairs of a solution while for some reason keeping the physicalist faith; Type-A Materialism dissolves the problem rather than solving it on its own terms.
The probability of physicalism would need to approach 1 in order for that to be the case.
::follows link::
Call me the Type-C Materialist subspecies of eliminativist, then. I think that a sufficient understanding of the brain will make the solution obvious; the reason we don’t have a “functional” explanation of subjective experience is not because the solution doesn’t exist, but that we don’t know how to do it.
This is where I think we’ll end up.
It’s a lot closer to 1 than a clever-sounding impossibility argument. See: http://lesswrong.com/lw/ph/can_you_prove_two_particles_are_identical/
What’s your reason for believing this? The standard empiricist argument against zombies is that they don’t constrain anticipated experience.
One problem with this line of thought is that we’ve just thrown out the very concept of “experience” which is the basis of empiricism. The other problem is that the statement is false: the question of whether I will become a zombie tomorrow does constrain my anticipated experiences; specifically, it tells me whether I should anticipate having any.
I’m not a positivist, and I don’t argue like one. I think nearly all the arguments against the possibility of zombies are very silly, and I agree there’s good prima facie evidence for dualism (though I think that in the final analysis the weight of evidence still favors physicalism). Indeed, it’s a good thing I don’t think zombies are impossible, since I think that we are zombies.
My reason is twofold: Copernican, and Occamite.
Copernican reasoning: Most of the universe does not consist of humans, or anything human-like; so it would be very surprising to learn that the most fundamental metaphysical distinction between facts (‘subjective’ v. ‘objective,’ or ‘mental’ v. ‘physical,’ or ‘point-of-view-bearing’ v. ‘point-of-view-lacking,’ or what-have-you) happens to coincide with the parts of the universe that bear human-like things, and the parts that lack human-like things. Are we really that special? Is it really more likely that we would happen to gain perfect, sparkling insight into a secret Hidden Side to reality, than that our brains would misrepresent their own ways of representing themselves to themselves?
Occamite reasoning: One can do away with the Copernican thought by endorsing panpsychism; but this worsens the bite from the principle of parsimony. A universe with two kinds of fundamental fact is less likely, relative to the space of all the models, than one with one kind (or with many, many more than two kinds). It is a striking empirical fact that, consciousness aside, we seem to be able to understand the whole rest of reality with a single grammatical kind of description—the impersonal, ‘objective’ kind, which states a fact without specifying for whom the fact is. The world didn’t need to turn out that way, just as it didn’t need to look causally structured. This should give us reason to think that there may not be distinctions between fundamental kinds of facts, rather than that we happen to have lucked out and ended up in one of the universes with very few distinctions of this sort.
Neither of these considerations, of course, is conclusive. But they give us some reason to at least take seriously physicalist hypotheses, and to weigh their theoretical costs and benefits against the dualists’.
We’ve thrown out the idea of subjective experience, of pure, ineffable ‘feels,’ of qualia. But we retain any functionally specifiable analog of such experience. In place of qualitative red, we get zombie-red, i.e., causal/functional-red. In place of qualitative knowledge, we get zombie-knowledge.
And since most dualists already accepted the causal/functional/physical process in question (they couldn’t even motivate the zombie argument if they didn’t consider the physical causally adequate), there can be no parsimony argument against the physicalists’ posits; the only argument will have to be a defense of the claim that there is some sort of basic, epistemically infallible acquaintance relation between the contents of experience and (themselves? a Self??...). But making such an argument, without begging the question against eliminativism, is actually quite difficult.
At this point, you’re just using the language wrong. “knowledge” refers to what you’re calling “zombie-knowledge”—whenever we point to an instance of knowledge, we mean whatever it is humans are doing. So “humans are zombies” doesn’t work, unless you can point to some sort of non-human non-zombies that somehow gave us zombies the words and concepts of non-zombies.
That assumes a determinate answer to the question ‘what’s the right way to use language?’ in this case. But the facts on the ground may underdetermine whether it’s ‘right’ to treat definitions more ostensively (i.e., if Berkeley turns out to be right, then when I say ‘tree’ I’m picking out an image in my mind, not a non-existent material plant Out There), or ‘right’ to treat definitions as embedded in a theory, an interpretation of the data (i.e., Berkeley doesn’t really believe in trees as we do, he just believes in ‘tree-images’ and misleadingly calls those ‘trees’). Either of these can be a legitimate way that linguistic communities change over time; sometimes we keep a term’s sense fixed and abandon it if the facts aren’t as we thought, whereas sometimes we’re more intensionally wishy-washy and allow terms to get pragmatically redefined to fit snugly into the shiny new model. Often it depends on how quickly, and how radically, our view of the world changes.
(Though actually, qualia may raise a serious problem for ostension-focused reference-fixing: It’s not clear what we’re actually ostending, if we think we’re picking out phenomenal properties but those properties are not only misconstrued, but strictly non-existent. At least verbal definitions have the advantage that we can relatively straightforwardly translate the terms involved into our new theory.)
Moreover, this assumes that you know how I’m using the language. I haven’t said whether I think ‘knowledge’ in contemporary English denotes q-knowledge (i.e., knowledge including qualia) or z-knowledge (i.e., causal/functional/behavioral knowledge, without any appeal to qualia). I think it’s perfectly plausible that it refers to q-knowledge, hence I hedge my bets when I need to speak more precisely and start introducing ‘zombified’ terms lest semantic disputes interfere in the discussion of substance. But I’m neutral both on the descriptive question of what we mean by mental terms (how ‘theory-neutral’ they really are), and on the normative question of what we ought to mean by mental terms (how ‘theory-neutral’ they should be). I’m an eliminativist on the substantive questions; on the non-substantive question of whether we should be revisionist or traditionalist in our choice of faux-mental terminology, I’m largely indifferent, as long as we’re clear and honest in whatever semantic convention we adopt.
It’s not surprising that a system should have special insight into itself. If a type of system had special insight into some other, unrelated, type of system, that would be peculiar. If every system had insights (panpsychism), that would also be peculiar. But a system, one capable of having insights, having special insight into itself is not unexpected.
That is not obvious. If the two kinds of stuff (or rather property) are fine-grainedly picked from some space of stuffs (or rather properties), then that would be more unlikely than just one being picked.
OTOH, if you have just one coarse-grained kind of stuff, and there is just one other coarse-grained kind of stuff, such that the two together cover the space of stuffs, then it is a mystery why you do not have both, i.e., every possible kind of stuff. A concrete example is the predominance of matter over antimatter in cosmology, which is widely interpreted as needing an explanation.
(It’s all about information and probability. Adding one fine-grained kind of stuff to another means that two low probabilities get multiplied together, leading to a very low one that needs a lot of explaining. Having every logically possible kind of stuff has a high probability, because we don’t need a lot of information to pinpoint the universe.)
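The information-and-probability point can be made with a toy calculation. The numbers here are entirely illustrative (my own `N` and a simple description-length prior, not anything from the discussion): pinpointing one specific fine-grained kind of stuff out of N possibilities costs log2(N) bits; pinpointing two specific kinds costs twice that, so the two low probabilities multiply, while “all kinds exist” is a single short hypothesis.

```python
import math

N = 1024  # hypothetical number of logically possible "kinds of stuff"

# Under a description-length prior, p = 2 ** -bits.
bits_one_kind = math.log2(N)        # cost of specifying one fine-grained kind
bits_two_kinds = 2 * math.log2(N)   # two specific kinds: costs add, so...

# ...probabilities multiply: two fine-grained kinds are 1/N times
# as likely as one.
assert 2 ** -bits_two_kinds == (2 ** -bits_one_kind) ** 2
```

This is just the standard Occamite bookkeeping; whether mental and physical properties really count as a coarse-grained dyad covering the whole space (as the comment goes on to argue) is the substantive question.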
So, if you think of Mind as some very specific thing, the Occamite objection goes through. However, modern dualists are happy that most aspects of consciousness have physical explanations. Chalmers-style dualism is about explaining qualia, phenomenal qualities. The quantitative properties (Chalmers calls them structural-functional) of physicalism and intrinsically qualitative properties form a dyad that covers property-space in the same way that the matter-antimatter dyad covers stuff-space. In this way, modern dualism can avoid the Copernican Objection.
(Here comes the shift from properties to aspects).
Although it does specify that the fact is outside me. If physical and mental properties are both intrinsic to the world, then the physical properties seem to be doing most of the work, and the mental ones seem redundant. However, if objectivity is seen as a perspective, i.e., an external perspective, it is no longer an empirical fact. It is then a tautology that the external world will seem, from the outside, to be objective, because objectivity just is the view from outside. And subjectivity, likewise, is the view from inside, and not any extra stuff, just another way of looking at the same stuff. There are, in any case, a set of relations between a thing-and-itself, and another set between a thing-and-other-things. Nothing novel is being introduced by noting the existence of inner and outer aspects. The novel content of the Dual Aspect solution lies in identifying the Objective Perspective with quantities (broadly including structures and functions) and the Subjective Perspective with qualities, so that Subjective Qualities, qualia, are just how neuronal processing seems from the inside. This point needs justification, which I believe I have, but will not mention here.
As far as physicalism is concerned: physicalism has many meanings. Dual aspect theory is incompatible with the idea that the world is intrinsically objective and physical, since these are not intrinsic characteristics, according to DAT. DAT is often and rightly associated with neutral monism, the idea that the world is in itself neither mental nor physical, neither objective nor subjective. However, this in fact changes little for most physicalists: it does not suggest that there are any ghostly substances or undetectable properties. Nothing changes methodologically; naturalism, interpreted as the investigation of the world from the objective perspective, can continue. The Strong Physicalist claim that a complete physical description of the world is a complete description tout court becomes problematic. Although such a description is a description of everything, it nonetheless leaves out the subjective perspectives embedded in it, which cannot be recovered, just as Mary the superscientist cannot recover the subjective sensation of Red from the information she has. I believe that a correct understanding of the nature of information shows that “complete information” is a logically incoherent notion in any case, so that DAT does not entail the loss of anything that was ever available in that respect. Furthermore, the absence of complete information has little practical upshot, because of the unfeasibility of constructing such a complete description in the first place. All in all, DAT means physicalism is technically false in a way that changes little in practice. The flipside of DAT is Neutral Monism. NM is an inherently attractive metaphysics, because it means that the universe has no overall characteristic left dangling in need of an explanation—no “why physical, rather than mental?”
As far as causality is concerned, the fact that a system’s physical or objective aspects are enough to predict its behaviour does not mean that its subjective aspects are an unnecessary multiplication of entities, since they are only a different perspective on the same reality. Causal powers are vested in the neutral reality of which the subjective and the objective are just aspects. The mental is neither causal in itself, nor causally idle in itself; it is rather a perspective on what is causally empowered. There are no grounds for saying that either set of aspects is exclusively responsible for the causal behaviour of the system, since each is only a perspective on the system.
I have avoided the Copernican problem, special pleading for human consciousness, by pinning mentality, and particularly subjectivity, to a system’s internal and self-reflexive relations. The counterpart to excessive anthropocentrism is insufficient anthropocentrism, i.e., free-wheeling panpsychism, or the Thinking Rock problem. I believe I have a way of showing that it is logically inevitable that simple entities cannot have subjective states that are significantly different from their objective descriptions.
I’m not sure I understand what an ‘aspect’ is, in your model. I can understand a single thing having two ‘aspects’ in the sense of having two different sets of properties accessible in different viewing conditions; but you seem to object to the idea of construing mentality and physicality as distinct property classes.
I could also understand a single property or property-class having two ‘aspects’ if the property/class itself were being associated with two distinct sets of second-order properties. Perhaps “being the color of chlorophyll” and “being the color of emeralds” are two different aspects of the single property green. Similarly, then, perhaps phenomenal properties and physical properties are just two different second-order construals of the same ultimately physical, or ultimately ideal, or perhaps ultimately neutral (i.e., neither-phenomenal-nor-physical), properties.
I call the option I present in my first paragraph Property Dualism, and the option I present in my second paragraph Multi-Label Monism. (Note that these may be very different from what you mean by ‘property dualism’ and ‘neutral monism;’ some people who call themselves ‘neutral monists’ sound more to me like ‘neutral trialists,’ in that they allow mental and physical properties into their ontology in addition to some neutral substrate. True monism, whether neutral or idealistic or physicalistic, should be eliminative or reductive, not ampliative.) Is Dual Aspect Theory an intelligible third option, distinct from Property Dualism and Multi-Label Monism as I’ve distinguished them? And if so, how can I make sense of it? Can you coax me out of my parochial object/property-centric view, without just confusing me?
I’m also not sure I understand how reflexive epistemic relations work. Epistemic relations are ordinarily causal. How does reflexive causality work? And how do these ‘intrinsic’ properties causally interact with the extrinsic ones? How, for instance, does positing that Mary’s brain has an intrinsic ‘inner dimension’ of phenomenal redness Behind The Scenes somewhere help us deterministically explain why Mary’s extrinsic brain evolves into a functional state of surprise when she sees a red rose for the first time? What would the dynamics of a particle or node with interactively evolving intrinsic and extrinsic properties look like?
A third problem: You distinguish ‘aspects’ by saying that the ‘subjective perspective’ differs from the ‘objective perspective.’ But this also doesn’t help, because it sounds anthropocentric. Worse, it sounds mentalistic; I understand the mental-physical distinction precisely inasmuch as I understand the mental as perspectival, and the physical as nonperspectival. If the physical is itself ‘just a matter of perspective,’ then do we end up with a dualistic or monistic theory, or do we instead end up with a Berkeleian idealism? I assume not, and that you were speaking loosely when you mentioned ‘perspectives;’ but this is important, because what individuates ‘perspectives’ is precisely what lends content to this ‘Dual-Aspect’ view.
Yes, I didn’t consider the ‘it’s not physicalism!!’ objection very powerful to begin with. Parsimony is important, but ‘physicalism’ is not a core methodological principle, and it’s not even altogether clear what constraints physicalism entails.
It’s not surprising that an information-processing system able to create representations of its own states would be able to represent a lot of useful facts about its internal states. It is surprising if such a system is able to infallibly represent its own states to itself; and it is astounding if such a system is able to self-represent states that a third-person observer, dissecting the objective physical dynamics of the system, could never in principle fully discover from an independent vantage point. So it’s really a question of how ‘special’ we’re talking.
I’m not clear on what you mean. ‘Insight’ is, presumably, a causal relation between some representational state and the thing represented. I think I can more easily understand a system’s having ‘insight’ into something else, since it’s easier for me to model veridical other-representation than veridical self-representation. (The former, for instance, leads to no immediate problems with recursion.) But perhaps you mean something special by ‘insight.’ Perhaps by your lights, I’m just talking about outsight?
If some systems have an automatic ability to non-causally ‘self-grasp’ themselves, by what physical mechanism would only some systems have this capacity, and not all?
If you could define a thingspace that meaningfully distinguishes between and admits of both ‘subjective’ and ‘objective’ facts (or properties, or events, or states, or thingies...), and that non-question-beggingly establishes the impossibility or incoherence of any other fact-classifications of any analogous sorts, then that would be very interesting. But I think most people would resist the claim that this is the one unique parameter of this kind (whatever kind that is, exactly...) that one could imagine varying over models; and if this parameter is set to value ‘2,’ then it remains an open question why the many other strangely metaphysical or strangely anthropocentric parameters seem set to ‘1’ (or to ‘0,’ as the case may be).
But this is all very abstract. It strains comprehension just to entertain a subjective/objective distinction. To try to rigorously prove that we can open the door to this variable without allowing any other Aberrant Fundamental Categorical Variables into the clubhouse seems a little quixotic to me. But I’d be interested to see an attempt at this.
Sure, though there’s a very important disparity between observed asymmetries between actual categories of things, and imagined asymmetries between an actual category and a purely hypothetical one (or, in this case, a category with a disputed existence). In principle the reasoning should work the same, but in practice our confidence in reasoning coherently (much less accurately!) about highly abstract and possibly-not-instantiated concepts should be extremely low, given our track record.
How do we know that? If we were zombies, prima facie it seems as though we’d have no way of knowing about, or even positing in a coherent formal framework, phenomenal properties. But in that case, any analogous possible-but-not-instantiated-property-kinds that would expand the dyad into a polyad would plausibly be unknowable to us. (We’re assuming for the moment that we do have epistemic access to phenomenal and physical properties.) Perhaps all carbon atoms, for instance, have unobservable ‘carbonomenal properties,’ (Cs) which are related to phenomenal and physical properties (P1s and P2s) in the same basic way that P1s are related to P2s and Cs, and that P2s are related to P1s and Cs. Does this make sense? Does it make sense to deny this possibility (which requires both that it be intelligible and that we be able to evaluate its probability with any confidence), and thereby preserve the dyad? I am bemused.
1) If you embrace SSA, then you being you should be more likely on humans being important than on panpsychism, yes? (You may of course have good reasons for preferring SIA.)
2) Suppose again redundantly dual panpsychism. Is there any a priori reason (at this level of metaphysical fancy) to rule out that experiences could causally interact with one another in a way that is isomorphic to mechanical interactions? Then we have a sort of idealist field describable by physics, perfectly monist. Or is this an illegitimate trick?
(Full disclosure: I’d consider myself a cautious physicalist as well, although I’d say psi research constitutes a bigger portion of my doubt than the hard problem.)
The theory you propose in (2) seems close to Neutral Monism. It has fallen into disrepute (and near oblivion) but was the preferred solution to the mind-body problem of many significant philosophers of the late 19th-early 20th, in particular of Bertrand Russell (for a long period). A quote from Russell:
Ooo! Seldom do I get to hear someone else voice my version of idealism. I still have a lot of thinking to do on this, but so far it seems to me perfectly legitimate. An idealism isomorphic to mechanical interactions dissolves the Hard Problem of consciousness by denying a premise. It also does so with more elegance than reductionism since it doesn’t force us through that series of flaming hoops that orbits and (maybe) eventually collapses into dualism.
This seems more likely to me so far than all the alternatives, so I guess that means I believe it, but not with a great deal of certainty. So far every objection I’ve heard or been able to imagine has amounted to something like, “But but but the world’s just got to be made out of STUFF!!!” But I’m certainly not operating under the assumption that these are the best possible objections. I’d love to see what happens with whatever you’ve got to throw at my position.
The problem is that we already have two kinds of fundamental facts (and I would argue we need more). Consider Eliezer’s use of “magical reality fluid” in this post. If you look at the context, it’s clear that he’s trying to ask whether the inhabitants of the non-causally simulated universes possess qualia without having to admit he cares about qualia.
Eliezer thinks we’ll someday be able to reduce or eliminate Magical Reality Fluid from our model, and I know of no argument (analogous to the Hard Problem for phenomenal properties) that would preclude this possibility without invoking qualia themselves. Personally, I’m an agnostic about Many Worlds, so I’m even less inclined than EY to think that we need Magical Reality Fluid to recover the Born probabilities.
I also don’t reify logical constructs, so I don’t believe in a bonus category of Abstract Thingies. I’m about as monistic as physicalists come. Mathematical platonists and otherwise non-monistic Serious Scientifically Minded People, I think, do have much better reason to adopt dualism than I do, since the inductive argument against Bonus Fundamental Categories is weak for them.
I could define the Hard Problem of Reality, which really is just an indirect way of talking about the Hard Problem of Consciousness.
As Eliezer discusses in the post, Reality Fluid isn’t just for Many Worlds; it also relates to questions about simulation.
Here’s my argument for why you should.
Only as a side-effect. In all cases, I suspect it’s an idle distraction; simulation, qualia, and born-probability models do have implications for each other, but it’s unlikely that combining three tough problems into a single complicated-and-tough problem will help gin up any solutions here.
Give me an example of some logical constructs you think I should believe in. Understand that by ‘logical construct’ I mean ‘causally inert, nonspatiotemporal object.’ I’m happy to sort-of-reify spatiotemporally instantiated properties, including relational properties. For instance, a simple reason why I consistently infer that 2 + 2 = 4 is that I live in a universe with multiple contiguous spacetime regions; spacetime regions are similar to each other, hence they instantiate the same relational properties, and this makes it possible to juxtapose objects and reason with these recurrent relations (like ‘being two arbitrary temporal intervals before’ or ‘being two arbitrary spatial intervals to the left of’).