Suppose we build a robot that takes a census of currently existing people, draws up a list of possible actions, and then takes the action that causes the biggest increase in the utility of currently existing people (a code sketch of this rule follows the dialogue).
You come to this robot before your example starts, and ask “Do you want to precommit to action vx, since that results in higher total utility?”
And the robot replies, “Does taking this action of precommitment cause the biggest increase in utility of currently existing people?”
“No, but you see, in one time step there’s this Bob guy who’ll pop into being, and if you add in his utilities from the beginning, by the end you’ll wish you’d precommitted.”
“Will wishing that I’d precommitted be the action that causes the biggest increase in utility of currently existing people?”
You shake your head. “No...”
“Then I can’t really see why I’d do such a thing.”
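A minimal sketch of the robot's decision rule, in Python; the names (`choose_action`, `utility_increase`, the example people and numbers) are illustrative assumptions, not anything specified in the thread:

```python
def choose_action(possible_actions, currently_existing_people, utility_increase):
    """Return the action with the biggest total increase in utility,
    summed only over the people who exist right now (the census).

    utility_increase(person, action) is assumed to give the change in
    that person's utility if `action` is taken.
    """
    def total_increase(action):
        return sum(utility_increase(person, action)
                   for person in currently_existing_people)
    return max(possible_actions, key=total_increase)


# Tiny usage example with made-up numbers: the census has two people, and
# precommitting helps them by 1 each *only if* precommitment itself counts
# as a gain to currently existing people (the point under dispute below).
gains = {("A", "do_nothing"): 0, ("A", "precommit_to_vx"): 1,
         ("B", "do_nothing"): 0, ("B", "precommit_to_vx"): 1}
best = choose_action(["do_nothing", "precommit_to_vx"], ["A", "B"],
                     lambda person, action: gains[(person, action)])
print(best)  # "precommit_to_vx" under these made-up numbers
```

On this rule, "precommit to vx" is just another entry in `possible_actions`: it gets chosen only if it raises the summed utility of the people in the census, which is exactly what the exchange below is disputing.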
And the robot replies, “Does taking this action of precommitment cause the biggest increase in utility of currently existing people?”
I’d say yes. It gives an additional 1 utility to currently existing people, since it ensures that the robot will later make a choice that those people like.
Are you only counting how much they value the world as it currently is? For example, if someone wants to be buried when they die, the robot wouldn’t arrange it, because by the time it happens they won’t be in a state to appreciate it?
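A toy illustration of the two readings at issue here, using the burial example; the dictionary keys and the 1.0/0.0 values are made-up assumptions, not anything from the thread:

```python
# One currently existing person who wants to be buried after they die.
# All numbers are made up for illustration.
person = {
    "lifetime_gain_if_burial_arranged": 1.0,  # they care about it now, even though it happens later
    "present_gain_if_burial_arranged": 0.0,   # the world as it currently is doesn't change for them
}

# Reading A (the "I'd say yes" reply): count everything a currently existing
# person cares about, whenever it happens -- arranging the burial scores 1.0.
def increase_reading_a(p):
    return p["lifetime_gain_if_burial_arranged"]

# Reading B (the question above): count only how much they value the world as
# it currently is -- the burial scores 0.0, since by the time it happens they
# won't be in a state to appreciate it.
def increase_reading_b(p):
    return p["present_gain_if_burial_arranged"]

print(increase_reading_a(person), increase_reading_b(person))  # 1.0 0.0
```

The disagreement over whether precommitting to vx gives an extra 1 utility to the census seems to come down to the same split.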
Ooooh. Okay, I see what you mean now—for some reason I’d interpreted you as saying almost the opposite.
Yup, I was wrong.