Alice has utility function A. Bob will have utility function B, but he hasn’t been born yet.
You can make choices u or v, then once Bob is born, you get another choice between x and y.
A(u) = 1, A(v) = 0, A(x) = 1, A(y) = 0
B(u) = 0, B(v) = 2, B(x) = 0, B(y) = 2
If you can’t precommit, you’ll do u the first time, for 1 util under A, and y the second, for 2 util under A+B (compared to 1 util for x).
If you can precommit, then you know that if you don't, you'll end up picking uy. Precommitting to ux instead gives you +1 util under A (2 rather than the 1 you'd get from uy), and since you're still operating under A when you commit, that's what you'll do.
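To make the arithmetic concrete, here is a minimal sketch (variable names are mine, not part of the example) that tallies the tables above: without precommitment the two choices are made under A and then A+B, which yields uy, while a precommitment made while only Alice exists is evaluated under A alone, which yields ux.

    # Utility tables from the first example; names here are illustrative.
    A = {"u": 1, "v": 0, "x": 1, "y": 0}   # Alice's utilities
    B = {"u": 0, "v": 2, "x": 0, "y": 2}   # Bob's utilities (he exists only for the second choice)

    # Without precommitment: the first choice maximizes A alone, the second maximizes A+B.
    first = max("uv", key=lambda c: A[c])               # "u" (1 vs 0 under A)
    second = max("xy", key=lambda c: A[c] + B[c])       # "y" (2 vs 1 under A+B)

    # With precommitment: the whole plan is fixed up front, while only A counts.
    plans = [f + s for f in "uv" for s in "xy"]         # ux, uy, vx, vy
    committed = max(plans, key=lambda p: A[p[0]] + A[p[1]])   # "ux" (2 under A)

    print(first + second, committed)                    # uy ux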
While I'm at it, you can also get into a prisoner's dilemma with your future self, as follows:
A(u) = 1, A(v) = 0, A(x) = 2, A(y) = 0
B(u) = −1, B(v) = 2, B(x) = −2, B(y) = 1
Note that this gives:
A+B(u) = 0, A+B(v) = 2, A+B(x) = 0, A+B(y) = 1
Now, under A, you’d want u for 1 util, and once Bob is born, under A+B you’d want y for 1 util.
But if you instead took vx, that would be worth 2 util for A and 2 util for A+B. So vx is better than uy both from Alice's perspective and from Alice+Bob's perspective; it would certainly be the better option.
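Checking those totals the same way (again a small sketch, with names of my own choosing):

    # Utility tables from the prisoner's-dilemma version.
    A = {"u": 1, "v": 0, "x": 2, "y": 0}
    B = {"u": -1, "v": 2, "x": -2, "y": 1}

    def total(plan, fns):
        # Sum every step of a plan under the given utility functions.
        return sum(f[c] for c in plan for f in fns)

    # uy is the path taken without precommitment; vx is the cooperative path.
    print(total("uy", [A]),    total("vx", [A]))      # 1 2  (Alice's view)
    print(total("uy", [A, B]), total("vx", [A, B]))   # 1 2  (Alice+Bob's view)
    # vx beats uy on both counts, even though each step of uy looked best at the time it was made.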
Suppose we build a robot that takes a census of currently existing people, and a list of possible actions, and then takes the action that causes the biggest increase in utility of currently existing people.
You come to this robot before your example starts, and ask “Do you want to precommit to action vx, since that results in higher total utility?”
And the robot replies, “Does taking this action of precommitment cause the biggest increase in utility of currently existing people?”
“No, but you see, in one time step there’s this Bob guy who’ll pop into being, and if you add in his utilities from the beginning, by the end you’ll wish you’d precommitted.”
“Will wishing that I’d precommitted be the action that causes the biggest increase in utility of currently existing people?”
You shake your head. “No...”
“Then I can’t really see why I’d do such a thing.”
To the robot's question, "Does taking this action of precommitment cause the biggest increase in utility of currently existing people?", I'd say yes. Precommitting to vx gives Alice, the only currently existing person, 2 util instead of the 1 she'd get from uy, so it adds 1 utility for currently existing people: it ensures that the robot will later make the choice that the people who exist now already prefer.
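A minimal sketch of that check, with the robot's rule from above written as an argmax over the utilities of currently existing people (the function and option names are mine):

    # Hypothetical sketch: the robot maximizes the summed utility of the people who
    # exist *now*. At the moment of the precommitment offer, that is only Alice.
    A = {"u": 1, "v": 0, "x": 2, "y": 0}

    def robot_choice(options, census):
        # options: name -> plan; census: utility functions of currently existing people
        return max(options, key=lambda name: sum(f[c] for c in options[name] for f in census))

    options = {"precommit to vx": "vx", "stay flexible (ends in uy)": "uy"}
    print(robot_choice(options, census=[A]))   # "precommit to vx": 2 util for Alice vs 1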
Are you only counting how much they value the world as it currently is? For example, if someone wants to be buried when they die, the robot wouldn't arrange it, because by the time it happens they won't be in a state to appreciate it?
Nope.
Why not?
Let me try making this more explicit.
Ooooh. Okay, I see what you mean now—for some reason I’d interpreted you as saying almost the opposite.
Yup, I was wrong.