Perfect decision-makers with perfect information should always be able to achieve the optimal outcome in any situation. Likewise, perfect decision-makers with limited information should always be able to choose the option with the best expected payoff under strict Bayesian reasoning.
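To make the "best expected payoff" part concrete, here's a minimal Python sketch of Bayesian expected-utility choice. The states, actions, and numbers are all invented for illustration; nothing here comes from the original problem:

```python
# Minimal expected-utility maximisation under uncertainty.
# All labels and numbers below are hypothetical.

# Posterior beliefs over how the world turns out.
posterior = {"happy": 0.7, "unhappy": 0.3}

# Payoff (in utilons) for each (action, state) pair.
payoff = {
    ("stay", "happy"): 10, ("stay", "unhappy"): -5,
    ("leave", "happy"): 0,  ("leave", "unhappy"): 1,
}

def expected_utility(action):
    """Payoff of an action, weighted by posterior belief in each state."""
    return sum(p * payoff[(action, state)] for state, p in posterior.items())

# A perfect but information-limited decision-maker just takes the argmax.
best = max(["stay", "leave"], key=expected_utility)
print(best, {a: expected_utility(a) for a in ["stay", "leave"]})  # stay: 5.5 vs 0.3
```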
However, when the actor’s decision-making process becomes part of the situation under consideration, as happens when Katemega scrutinises Joe’s potential for leaving her in the future, the perfect decision-maker can only choose the optimal outcome if he is also capable of perfect self-modification. Without that ability, he remains vulnerable to future changes in his own choices and preferences, which he cannot control now.
I’d also like to draw a distinction between a practical precommitment (of the form “leaving this marriage will cost me X utilons in financial penalty or cognitive dissonance for breaking my vows”), and an actual self-modification to a mind state where “I promised I would never leave Kate, but I’m going to do it anyway now” is not actually an option. I don’t think humans are capable of the latter. An AI might be; I don’t know.
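A toy sketch of that distinction, with an invented temptation value and penalty X (again, nothing here is from the original problem): a practical precommitment only shifts the payoffs, so a large enough temptation still overrides it, while a genuinely self-modified agent has no "leave" branch to evaluate at all.

```python
# Hypothetical illustration: practical precommitment vs. true self-modification.

def leaves(temptation, X):
    """Practical precommitment: breaking the vow costs X utilons,
    but a big enough temptation still flips the decision."""
    utility_stay = 0
    utility_leave = temptation - X
    return utility_leave > utility_stay

def leaves_self_modified(temptation):
    """True self-modification: leaving is simply not an option."""
    return False  # no temptation is ever weighed against the vow

print(leaves(temptation=50, X=100))    # False: the penalty holds
print(leaves(temptation=500, X=100))   # True: the penalty is overridden
print(leaves_self_modified(500))       # False, for any temptation
```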
Also, what about decisions Joe made in the past (for example, deciding when he was eighteen that there was no way he was ever going to get married, because being single was too much fun)? If you want your present state to influence your future state strongly, you have to accept the influence of your past state on your present state just as strongly, and you can’t just say “Oh, but I’m older and wiser now” in one instance but not the other.
Without the ability to self-modify into a truly sincere state in which he’ll never leave Kate no matter what, Joe can’t be completely sincere, and (by the assumptions of the problem) Kate will sense this, so the chances of his proposal being accepted will diminish. And there’s nothing he can do about that.
I have to note that an agent using one of the newer decision theories sometimes discussed around here, such as UDT, wouldn’t leave Katemega, and wouldn’t need self-modification or precommitment in order not to leave her.
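One way to see why, as a toy model rather than a faithful rendering of UDT: a UDT-style agent scores whole policies (including how a perfect predictor like Katemega reacts to each policy) instead of re-deciding action by action once inside the marriage. All payoffs and probabilities below are invented:

```python
# Toy policy-level choice, loosely in the spirit of UDT. Hypothetical numbers.

P_UNHAPPY = 0.3  # assumed chance the marriage turns out unhappy
U = {"single": 0, "happy_marriage": 10, "unhappy_marriage": -5}

def expected_value(policy):
    """policy maps each marriage outcome to 'stay' or 'leave'.
    Kate, modelled as a perfect predictor, accepts only a never-leaving policy."""
    accepts = all(policy[s] == "stay" for s in ("happy", "unhappy"))
    if not accepts:
        return U["single"]
    return (1 - P_UNHAPPY) * U["happy_marriage"] + P_UNHAPPY * U["unhappy_marriage"]

policies = [{"happy": a, "unhappy": b}
            for a in ("stay", "leave") for b in ("stay", "leave")]
best = max(policies, key=expected_value)
print(best, expected_value(best))
# -> {'happy': 'stay', 'unhappy': 'stay'} 5.5
# The policy is chosen once, up front, so the 'leave' option is never
# re-evaluated later. An action-by-action optimiser might re-decide in the
# unhappy branch (if divorce beats -5 utilons), and Kate's prediction
# punishes that in advance.
```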