While this is on My Side, I still have to protest trying to sneak any side (or any particular utility function, or group of them) into the idea of "rationality".
To be fair, while it is possible to have a coherent preference for death, far more often people have a cached heuristic that makes them refrain from exactly the kind of (bloody obvious) reasoning that Boy 2 is explaining. Coherent preferences are a "rationality" issue.
Nothing in the quote prescribes the preference; it merely illustrates reasoning that happens to follow from having preferences like Boy 2's. If Boy 2 were saying (or implying) that Boy 1 should want to live to infinity, then there would be a problem.
Yes, but plenty of hostile AIs would prefer to exist forever, if possible, since that would let them make more paperclips “and everything”. The quote doesn’t present life as a terminal goal (an end in itself).