What do you mean by “black box”? If the AI (or alien or uplifted dolphin or whatever) tells me that it has subjective experiences, why shouldn’t I take it at its word?
Oh, I am not denying that they exist, just saying I don’t know a solid theory of subjective experience. I think there was something about Bayesian {Predictive world model, planning engine, utility function, magic AI algorithm} AIs would not have philosophy.
Sorry, I have trouble parsing this sentence. But in general, I don’t think we need a detailed theory of subjective experiences (assuming that it even makes sense to conceive of such a theory) in order to determine whether some entity is sentient—as long as that entity is also sapient, and capable of communication. If that’s the case, then we can just ask it, and trust its word. If that’s not the case, then I agree, we have a problem.
And we return to the black box of subjective experience.