I’m hearing an invocation of the Anti-Zombie Principle here, i.e.: “If simulations of human philosophers of mind will talk about consciousness, they will do so for the same reasons that human philosophers do, namely, that they actually have consciousness to talk about” …
“If simulations of human philosophers of mind will talk about consciousness, they will do so for the same reasons that human philosophers do,”
Yes.
“namely, that they actually have consciousness to talk about” …
Not necessarily, in the mystical sense.
Okay, to clarify: If ‘consciousness’ refers to anything, it refers to something possessed both by human philosophers and accurate simulations of human philosophers. So one of the following must be true: ① human philosophers can’t be accurately simulated, ② simulated human philosophers have consciousness, or ③ ‘consciousness’ doesn’t refer to anything.
Dualists needn’t grant your first sentence; they can claim consciousness is epiphenomenal. I am talking about whether mystical mind features would screw up an AI’s ability to carry out our aims, not arguing for physicalism (here).