I don’t understand why “Yes” is the right answer. It seems to me that an agent that self-modified to answer “Yes” to this sort of question in the future but said “No” this time would generate more utility than an agent that already implemented the policy of saying yes.
If that first agent (that answers no, then self-modifies to answer yes) had been in the situation where the coin had fallen heads, then it would not have got the million dollars; whereas an agent that can “retroactively precommit” to answer yes would have got the million dollars. So having a “retroactively precommit” algorithm seems like a better choice than having an “answer what gets the biggest reward, and then self-modify for future cases” algorithm.
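To make the comparison concrete, here is a rough expected-utility sketch evaluated before the coin flip. The $100 cost for answering yes is a placeholder I'm assuming; the original problem only specifies the million-dollar prize and a fair coin.

```python
# Rough ex-ante expected-utility comparison of the two decision algorithms.
# REWARD and COST are placeholder payoffs: REWARD is paid on heads, but only
# to agents whose algorithm would answer "Yes"; COST is what answering "Yes"
# costs on tails. The $100 figure is an assumption, not from the problem.

REWARD = 1_000_000  # paid on heads to would-be "Yes" answerers
COST = 100          # paid out of pocket when answering "Yes" on tails
P_HEADS = 0.5       # fair coin

def expected_utility(answers_yes_on_tails: bool) -> float:
    """Ex-ante expected utility, given whether the agent's algorithm
    outputs "Yes" when it is actually asked (i.e. on tails)."""
    heads_payoff = REWARD if answers_yes_on_tails else 0
    tails_payoff = -COST if answers_yes_on_tails else 0
    return P_HEADS * heads_payoff + (1 - P_HEADS) * tails_payoff

# "Retroactively precommit" algorithm: answers "Yes" even after seeing tails.
print(expected_utility(True))   # 0.5 * 1_000_000 - 0.5 * 100 = 499950.0

# "Answer what gets the biggest reward now, then self-modify for later":
# answers "No" this time, so on heads it would not have been paid.
print(expected_utility(False))  # 0.0
```

On this ex-ante view the “retroactively precommit” algorithm comes out ahead, even though after seeing tails the “No” answer looks locally better.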
If that first agent (that answers no, then self-modifies to answer yes) had been in the situation where the coin had fallen heads, then it would not have got the million dollars; whereas an agent that can “retroactively precommit” to answer yes would have got the million dollars.
But we know that didn’t happen. Why do we care about utility we know we can’t obtain?
So having a “retroactively precommit” algorithm seems like a better choice than having an “answer what gets the biggest reward, and then self-modify for future cases” algorithm.
For what goal is this a better choice? Utility generation?