Teacher: So if you could live to be any age you like, what would it be?
Boy 2: Infinity.
Teacher: Infinity, you would live for ever? Why would you like to live for ever?
Boy 2: Because you just know a lot of people and make lots of new friends because you could travel to lots of countries and everything and meet loads of new animals and everything.
--Until (documentary)
http://mosaicscience.com/extra/until-transcript
While this is on My Side, I still have to protest trying to sneak any side (or particular (group of) utility function(s)) into the idea of “rationality”.
To be fair, while it is possible to have a coherent preference for death, far more often people have a cached heuristic that keeps them from exactly the kind of (bloody obvious) reasoning that Boy 2 is explaining. Coherent preferences are a ‘rationality’ issue.
Nothing in the quote prescribes the preference; it merely illustrates reasoning that happens to follow from having preferences like those of Boy 2. If Boy 2 were saying (or implying) that Boy 1 should want to live to infinity, then there would be a problem.
Yes, but plenty of hostile AIs would prefer to exist forever, if possible, since that would let them make more paperclips “and everything”. The quote doesn’t present life as a terminal goal (an end in itself).
From the same source:
:|
hate to break it to you, kid...