Likewise, an AI surrounded by threat-fulfillers would rationally self-modify to become a threat-ignorer. (The debate is not about whether these are desirable dispositions to acquire—that’s common ground.) Do you think it follows from this that the act of ignoring a doomsday threat is also rational?