I feel as if I can agree with this statement in isolation, but can’t think of a context where I would consider this point relevant.
I’m not even talking about the question of whether or not the AI is sentient, which you asked us to ignore. I’m talking about how we would know that an AI is “suffering,” even if we do assume it’s sentient. What exactly is “suffering” in something that is completely cognitively distinct from a human? Is it just negative reward signals? I don’t think so, or at least if it were, that would likely imply that training a sentient AI is unethical in all cases, since training requires negative signals.
That’s not to say that all negative signals are the same, or that they couldn’t be painful in some contexts; just that I think determining that is an even harder problem than determining whether the AI is sentient.
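For concreteness, here’s a minimal toy sketch (my own made-up example, not anything from a real training setup) of what a “negative reward signal” amounts to mechanically: a REINFORCE-style update that just pushes a policy away from a penalized action.

```python
import math

# Toy two-action softmax policy; action names and reward values are
# invented purely for illustration.
logits = [0.0, 0.0]          # preferences for actions "A" and "B"
learning_rate = 0.5

def softmax(xs):
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def update(action, reward):
    """Nudge the policy toward (reward > 0) or away from (reward < 0)
    the chosen action -- the "negative signal" is just this push away."""
    probs = softmax(logits)
    for i in range(len(logits)):
        grad = (1.0 if i == action else 0.0) - probs[i]
        logits[i] += learning_rate * reward * grad

print("before:", softmax(logits))
update(action=0, reward=-1.0)      # action A is penalized
print("after :", softmax(logits))  # probability of choosing A goes down
```

Nothing in that update obviously corresponds to “pain” any more than a thermostat’s error term does, which is exactly why I think equating suffering with negative signals proves too much.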