I think the claim was something like: a Bayesian {predictive world model, planning engine, utility function, magic AI algorithm} AI would not have philosophy.
Sorry, I have trouble parsing this sentence. But in general, I don’t think we need a detailed theory of subjective experiences (assuming that it even makes sense to conceive of such a theory) in order to determine whether some entity is sentient—as long as that entity is also sapient, and capable of communication. If that’s the case, then we can just ask it, and trust its word. If that’s not the case, then I agree, we have a problem.