Do we know enough to tell for sure?
Do you mean, “know enough to tell for sure whether a given complex idea is embodied in any discrete piece of the brain?” No, but we know for sure that some must exist which are not, because conceptspace is bigger than thingspace.
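(As a quick sense of why conceptspace outruns thingspace, here is a toy counting argument; the feature count below is an arbitrary illustrative choice of mine, not anything from this thread.)

```python
# Toy counting argument: with n binary features there are 2**n describable
# "things", but 2**(2**n) possible concepts (sets of things), so most
# concepts cannot each get a dedicated piece of anatomy.
n = 5                       # arbitrary illustrative feature count
things = 2 ** n             # 32
concepts = 2 ** things      # 4294967296, and it only gets worse as n grows
print(things, concepts)
```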
“know enough to tell for sure whether a given complex idea is embodied in any discrete piece of the brain?”
Depending on various details, this might well be impossible. Rice’s theorem comes to mind: if no nontrivial behavioral property of an arbitrary Turing machine can be decided, that doesn’t bode well for similar questions about Turing-equivalent substrates.
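(Aside, for anyone who hasn’t seen Rice’s theorem in action: below is a minimal sketch of the standard reduction, with programs modeled as Python callables. The function names are my own illustrative choices, and nothing here can actually run against a real decider, since no such decider exists; that impossibility is the whole point.)

```python
def halting_decider_from(property_decider, p_positive_program):
    """Sketch of the reduction behind Rice's theorem.

    Assume (WLOG) that the never-halting program does NOT have the
    non-trivial behavioral property P, and that p_positive_program is
    some program that DOES have P. Then a decider for P would give us
    a halting decider, which cannot exist.
    """
    def halts(program, x):
        def stitched(y):
            program(x)                    # diverges iff program never halts on x
            return p_positive_program(y)  # otherwise behaves like a P-program
        # stitched has property P  <=>  program halts on x
        return property_decider(stitched)
    return halts
```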
Brains, like PCs, aren’t actually Turing-equivalent: they only have finite storage. To actually be equivalent to a Turing machine, they’d need something equivalent to a Turing machine’s infinite tape. There’s nothing analogous to Rice’s theorem or the halting theorem which holds for finite state machines. All those problems are decidable. Of course, decidable doesn’t mean tractable.
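(To make “all those problems are decidable” concrete, here is a toy sketch under my own simplifying assumption that the machine is handed to us as an explicit transition function over a finite state set: run it until it either reaches a halting state or revisits a state, and finiteness guarantees one of the two happens.)

```python
def fsm_halts(step, state, halt_states):
    """Decide halting for a deterministic finite-state machine.

    Because the state set is finite, the run must either reach a halting
    state or revisit an earlier state (and hence loop forever), so this
    loop always terminates: decidable, even though 'seen' could be
    astronomically large for a machine the size of a brain.
    """
    seen = set()
    while state not in halt_states:
        if state in seen:
            return False        # revisited a state: loops forever
        seen.add(state)
        state = step(state)
    return True                 # reached a halting state

# Tiny made-up example: 0 -> 1 -> 2 -> 0 is a loop, 3 -> 4 halts.
step = {0: 1, 1: 2, 2: 0, 3: 4}.get
print(fsm_halts(step, 3, halt_states={4}))  # True
print(fsm_halts(step, 0, halt_states={4}))  # False
```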
There’s nothing analogous to Rice’s theorem or the halting theorem which holds for finite state machines.
It is true that you can run a finite state machine until it either terminates or revisits a state and starts looping, which must happen within a Busy-Beaver-like bound for that amount of memory; but while you may avoid Rice’s theorem by pointing out that ‘actually brains are just FSMs’, you replace it with another question: ‘are these FSMs decidable with the memory and time actually available to us?’
Given how fast the Busy Beaver function grows, the answer is almost surely no: there is no runnable algorithm. That leaves a dilemma: either the resources are insufficient (per the above), or the question is impossible in principle (if resources are unbounded, there are likely unbounded brains too, and Rice’s theorem applies again).
(I know you understand this, since you pointed out ‘Of course, decidable doesn’t mean tractable’, but it’s not obvious to a lot of people and is worth noting.)
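(A rough sense of scale for ‘the answer is almost surely no’: a deterministic system with n bits of state has up to 2^n distinct configurations, and a run-until-it-loops check may need to visit a large fraction of them. The bit counts below are illustrative stand-ins I picked, not estimates of a brain’s actual state.)

```python
import math

# Illustrative arithmetic only: the bit counts are made-up stand-ins.
for bits in (100, 1_000, 1_000_000):
    digits = int(bits * math.log10(2)) + 1   # decimal digits in 2**bits
    print(f"{bits:>9} bits of state -> roughly 10^{digits - 1} configurations")
```

For comparison, the observable universe is usually estimated to contain around 10^80 atoms, so even the thousand-bit toy case is hopeless to enumerate.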
This is just a pedantic technical correction since we agree on all the practical implications, but nothing involving FSMs grows nearly as fast as the Busy Beaver function. The relevant complexity class for the hardest natural problems about FSMs, such as deciding whether two regular expressions with a squaring operator denote the same language, is the class of EXPSPACE-complete problems (equivalence of plain regular expressions is “only” PSPACE-complete). This is as opposed to R for decidable problems, and RE and co-RE for semidecidable problems like the halting problem. Those classes are way, WAY bigger than EXPSPACE.
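(Tangentially, one classic source of the exponentials in those FSM results is determinization blow-up: an NFA with n + 1 states for ‘the n-th symbol from the end is an a’ turns into a DFA with 2^n states. The encoding below is my own toy construction, just to make the count visible.)

```python
from collections import deque

def reachable_dfa_states(n):
    """Count DFA states produced by the subset construction from the
    (n + 1)-state NFA for "the n-th symbol from the end is 'a'" over {a, b}.
    The count comes out as 2**n: the DFA must remember the last n symbols.
    """
    def nfa_step(states, symbol):
        nxt = set()
        for s in states:
            if s == 0:
                nxt.add(0)              # keep waiting
                if symbol == 'a':
                    nxt.add(1)          # guess: this 'a' is n-th from the end
            elif s < n:
                nxt.add(s + 1)          # count down the remaining positions
            # s == n is accepting and has no outgoing moves
        return frozenset(nxt)

    start = frozenset({0})
    seen, frontier = {start}, deque([start])
    while frontier:
        current = frontier.popleft()
        for symbol in 'ab':
            nxt = nfa_step(current, symbol)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return len(seen)

for n in range(1, 9):
    print(n, reachable_dfa_states(n))   # 2, 4, 8, ..., 256 = 2**n
```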
Do you mean, “know enough to tell for sure whether a given complex idea is embodied in any discrete piece of the brain?”
Yes
No, but we know for sure that some must exist which are not, because conceptspace is bigger than thingspace.
That is potential, easily accessible conceptspace, not necessarily the conceptspace the brain actually uses. Even granting that the brain uses some concepts without corresponding discrete anatomy, I don’t see how they can serve as a replacement in your argument when we can’t identify them.
The only role that this example-of-an-idea is playing in my argument is as an analogy to illustrate what I mean when I assert that qualia physically exist in the brain without there being any such thing as a “qualia cell”. You clearly already understand this concept, so is my particular choice of analogy so terribly important that it’s necessary to nitpick over this?
The very same uncertainty would also apply to qualia (assuming that is even a meaningful concept), only worse, because we understand them even less. If we can’t answer the question of whether a particular concept is embedded in discrete anatomy, how could we possibly answer that question for qualia, when we can’t even verify their existence in the first place?