A digital intelligence need not be sentient, though, so long as it has a human-level capacity to achieve goals in a wide variety of environments.
This seems to presuppose that the idea of “being sentient” makes sense in the context of this discussion, which is a can of worms I think shouldn’t be opened. Better to set the distinction aside as irrelevant (which it is, for this purpose) if it’s mentioned at all.