The idea is not that state machines can’t have qualia. Something with qualia will still be a state machine. But you couldn’t know that something had qualia, if you just had the state machine description and no preexisting concept of qualia.
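To see concretely what a state machine description does and does not contain, here is a minimal sketch in Python (the state names and input symbol are hypothetical, invented purely for illustration). The description is exhausted by a set of states and a transition function; nothing in its vocabulary marks any state as being experienced, so a reader with no preexisting concept of qualia would find nothing here to suggest one.

```python
# Hypothetical illustration: a complete formal description of a
# finite state machine. The formalism's entire vocabulary is states,
# input symbols, and transitions; "qualia" does not appear in it.

states = {"S0", "S1", "S2"}
transitions = {
    ("S0", "ping"): "S1",
    ("S1", "ping"): "S2",
    ("S2", "ping"): "S0",
}

def step(state: str, symbol: str) -> str:
    """Advance the machine one step. The description is exhausted by
    the states and the transition table; there is no further predicate
    (experienced/not experienced) left to discover in it."""
    return transitions[(state, symbol)]

print(step("S0", "ping"))  # -> "S1"
```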
If a certain bunch of electrons are what’s conscious in the brain, my point is that the “electrons” are actually qualia and that this isn’t part of our physics concept of what an electron is; and that you—or a Friendly AI—couldn’t arrive at this “discovery” by reasoning just within physical and computational ontologies.
Could an AI just look at the physical causes of humans saying “I think I have qualia”? Why wouldn’t these electrons be a central cause, if they’re the key to qualia?
Please expand on what you mean by “qualia”, and explain how the presence or absence of these phenomena would make an observable difference to the problem you are addressing.
See this discussion. Physical theories of human identity must equate the world of appearances, which is the only world that we actually know about, with some part of a posited world of “physical entities”. Everything from the world of appearances is a quale, but an AI with a computational-materialist philosophy only “knows” various hypotheses about what the physical entities are. The most it could do is develop a concept like “the type of physical entity which causes a human to talk about appearances”, but it still won’t spontaneously attach the right significance to such concepts (e.g. to a concept of pain).
I have agreed elsewhere that it is (remotely!) possible that an appropriately guided AI could solve the hard problems of consciousness and ethics before humans did, e.g. by establishing a fantastically detailed causal model of human thought, and contemplating the deliberations of a philosophical sim-human. But when even the humans guiding the AI abandon their privileged epistemic access to phenomenological facts, and personally imitate the AI’s limitations by restricting themselves to computational epistemology, then the project is doomed.