I’m pretty sure you don’t think that qualia are reified in the brain—that a surgeon could go in with tongs and pull out a little lump of qualia
I do think that qualia are reified in the brain. I do not think that a surgeon could go in with tongs and remove them any more than he could go in with tongs and remove your recognition of your grandmother.
If qualia and other mental phenomena are not computational, then what are they?
They’re a physical effect caused by the operation of a brain, just as gravity is a physical effect of mass and temperature is a physical effect of molecular motion. See here and here for one reason why I think the computational view falls somewhere between problematic and not-even-wrong, inclusive.
ETA: The “grandmother cell” might have been a poorly chosen counterexample, since apparently there’s some research that actually lends the notion some support with respect to face recognition. I learned the phrase as the name of a fallacy. Feel free to mentally substitute some other complex idea that is clearly not embodied in any discrete piece of the brain.
See for instance this report (http://www.scientificamerican.com/article.cfm?id=one-face-one-neuron) on this paper (http://www.nature.com/nature/journal/v435/n7045/full/nature03687.html), where they find apparent “Jennifer Aniston” and “Halle Berry” cells. The former result is a little muddled, since the cell doesn’t fire when a picture contains both her and Brad Pitt. The latter fires both for pictures of her and for the text of her name.
Do we know enough to tell for sure?
Do you mean, “know enough to tell for sure whether a given complex idea is embodied in any discrete piece of the brain?” No, but we know for sure that some must exist which are not, because conceptspace is bigger than thingspace.
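The counting argument behind that last claim can be made concrete. Here is a back-of-the-envelope sketch of my own (the feature count is invented for illustration; only the neuron count is a standard rough figure):

```python
# Pigeonhole sketch: concepts definable over even a modest number of binary
# features vastly outnumber any plausible count of dedicated brain structures,
# so most concepts cannot each have their own discrete piece of anatomy.
n_neurons = 86_000_000_000      # rough human neuron count
n_features = 1_000              # invented number of binary features
n_concepts = 2 ** n_features    # distinct subsets of feature space
print(n_concepts > n_neurons)   # True, by an astronomical margin
```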
“know enough to tell for sure whether a given complex idea is embodied in any discrete piece of the brain?”
Depending on various details, this might well be impossible. Rice’s theorem comes to mind: if no non-trivial semantic property of arbitrary Turing machines is decidable, that doesn’t bode well for similar questions about Turing-equivalent substrates.
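To unpack why Rice’s theorem bites, here is a minimal Python sketch of the standard reduction (my illustration, not anything from the comments; the decider it refutes is hypothetical, which is the point):

```python
# Sketch of the reduction behind Rice's theorem. The wrapper construction
# below is real, runnable code; the decider discussed afterward cannot exist.

def make_wrapper(program, inp):
    """Build a function whose input/output behavior encodes whether
    `program` halts on `inp`."""
    def wrapped(x):
        program(inp)   # loops forever exactly when `program` doesn't halt on `inp`
        return 42      # otherwise `wrapped` computes the constant 42
    return wrapped

# Suppose always_returns_42(f) decided the non-trivial semantic property
# "f returns 42 on every input". Then for any (program, inp),
#
#     always_returns_42(make_wrapper(program, inp))
#
# would be True exactly when `program` halts on `inp`, i.e. it would be a
# halting decider, contradicting the halting theorem. So no such decider exists.
```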
Brains, like PCs, aren’t actually Turing-equivalent: they only have finite storage. To actually be equivalent to a Turing machine, they’d need something equivalent to a Turing machine’s infinite tape. There’s nothing analogous to Rice’s theorem or the halting theorem which holds for finite state machines. All those problems are decidable. Of course, decidable doesn’t mean tractable.
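By contrast with the Turing-machine case, the analogous questions for finite state machines really are decidable. A minimal sketch (my own illustration, with made-up toy machines) that decides DFA equivalence by exploring the finite product of the two state spaces:

```python
from collections import deque

def dfas_equivalent(start1, start2, accept1, accept2, step1, step2, alphabet):
    """Decide whether two DFAs accept the same language, by breadth-first
    search over the product automaton. This always terminates because the
    product state space is finite."""
    seen = {(start1, start2)}
    queue = deque([(start1, start2)])
    while queue:
        s1, s2 = queue.popleft()
        if (s1 in accept1) != (s2 in accept2):
            return False  # some string reaches here, and only one DFA accepts it
        for a in alphabet:
            nxt = (step1[s1, a], step2[s2, a])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# Two differently written 2-state machines for "even number of 1s" over {0, 1}:
step_a = {(0, '0'): 0, (0, '1'): 1, (1, '0'): 1, (1, '1'): 0}
step_b = {('e', '0'): 'e', ('e', '1'): 'o', ('o', '0'): 'o', ('o', '1'): 'e'}
print(dfas_equivalent(0, 'e', {0}, {'e'}, step_a, step_b, '01'))  # True
```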
There’s nothing analogous to Rice’s theorem or the halting theorem which holds for finite state machines.
It is true that you can run a finite state machine until it either terminates or starts looping or runs past the Busy Beaver number for that length of tape; but while you may avoid Rice’s theorem by pointing out that ‘actually, brains are just FSMs’, you replace it with another question: ‘are these FSMs decidable within the length of tape available to us?’
Given how fast the Busy Beaver function grows, the answer is almost surely no: there is no runnable algorithm. This leads to a dilemma: either there are insufficient resources (per above), or the problem is impossible in principle (if there are unbounded resources, there are likely unbounded brains, and Rice’s theorem applies again).
(I know you understand this because you pointed out ‘Of course, decidable doesn’t mean tractable.’, but it’s not obvious to a lot of people and is worth noting; the toy simulation below makes the state-counting point concrete.)
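A toy sketch of the ‘run it until it halts or loops’ procedure (my own illustration, with an invented machine). By pigeonhole it terminates within |states| + 1 steps, which is exactly where tractability dies: for anything brain-sized, |states| is astronomical.

```python
def run_until_halt_or_loop(step, state):
    """Run a deterministic finite-state system until it halts (step returns
    None) or revisits a state. The `seen` set can hold at most |states|
    entries, so this always terminates; that is the sense in which every
    question about an FSM is decidable."""
    seen = set()
    while state is not None:
        if state in seen:
            return ('loops', state)
        seen.add(state)
        state = step(state)
    return ('halts', None)

# Invented toy machine: a counter over 16 states that falls into a cycle.
print(run_until_halt_or_loop(lambda s: (s * 3 + 1) % 16, 5))  # ('loops', 5)
```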
This is just a pedantic technical correction, since we agree on all the practical implications, but nothing involving FSMs grows nearly as fast as Busy Beaver. The relevant complexity class for the hardest problems concerning FSMs, such as determining whether two regular expressions with squaring represent the same language, is the class of EXPSPACE-complete problems. This is as opposed to R for decidable problems, and RE and co-RE for semidecidable problems like the halting problem. Those classes are way, WAY bigger than EXPSPACE.
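For a feel of what equivalence-testing means in practice, here is a small illustration of my own (not a decision procedure). Brute-forcing short strings can only falsify equivalence; actually deciding it means converting both expressions to DFAs and comparing them, as in the product-automaton sketch above, and that conversion is where the exponential blowup lives.

```python
import re
from itertools import product

def differ_on_short_strings(pattern1, pattern2, alphabet='ab', max_len=8):
    """Search for a short string that one regex matches and the other rejects.
    Returns a counterexample string, or None if the patterns agree on all
    strings up to max_len. Agreement up to a finite bound does not prove
    equivalence in general; this is only a quick falsifier."""
    for n in range(max_len + 1):
        for chars in product(alphabet, repeat=n):
            s = ''.join(chars)
            if bool(re.fullmatch(pattern1, s)) != bool(re.fullmatch(pattern2, s)):
                return s
    return None

# Two ways of writing "strings of a's and b's with no two consecutive b's":
print(differ_on_short_strings(r'a*(ba+)*b?', r'(a|ba)*b?'))  # None: no difference found
```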
Do you mean, “know enough to tell for sure whether a given complex idea is embodied in any discrete piece of the brain?”
Yes.
No, but we know for sure that some must exist which are not, because conceptspace is bigger than thingspace.
Potential, easily accessible concept space, not necessarily actually used concept space. Even granting that the brain uses some concepts without corresponding discrete anatomy, I don’t see how they can serve as a replacement in your argument when we can’t identify them.
The only role that this example-of-an-idea plays in my argument is as an analogy, to illustrate what I mean when I assert that qualia physically exist in the brain without there being any such thing as a “qualia cell”. You clearly already understand this concept, so is my particular choice of analogy so terribly important that it’s necessary to nitpick over it?
The very same uncertainty would also apply to qualia (assuming that’s even a meaningful concept), only worse, because we understand them even less. If we can’t answer the question of whether a particular concept is embedded in discrete anatomy, how could we possibly answer that question for qualia, when we can’t even verify their existence in the first place?
They’re a physical effect caused by the operation of a brain
You haven’t excluded a computational explanation of qualia by saying this. You haven’t even argued against it! Computations are physical phenomena that have meaningful consequences.
“Mental phenomena are a physical effect caused by the operation of a brain.”
“The image on my computer monitor is a physical effect caused by the operation of the computer.”
I’m starting to think you’re confused as a result of using language in a way that allows you to claim computations “don’t exist,” while qualia do.
As to your linked comment: ISTM that qualia are what an experience feels like from the inside. Maybe it’s just me, but qualia don’t seem especially difficult to explain or understand. I don’t think qualia would even be regarded as worth talking about, except that confused dualists try to use them against materialism.