If that first agent (the one that answers no, then self-modifies to answer yes) had been in the situation where the coin had fallen heads, it would not have got the million dollars; whereas an agent that can “retroactively precommit” to answering yes would have got the million dollars.
But we know that didn’t happen. Why do we care about utility we know we can’t obtain?
So having a “retroactively precommit” algorithm seems like a better choice than having an “answer whatever gets the biggest reward, then self-modify for future cases” algorithm.
For what goal is this a better choice? Utility generation?
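To make the utility comparison concrete, here is a minimal expected-value sketch. The specific numbers are assumptions, not from the thread above: a fair coin, the million-dollar prize on heads (paid only if Omega predicts the agent would answer yes), and a hypothetical $100 cost for answering yes on tails; only the million dollars is mentioned in the exchange itself.

```python
# Minimal sketch of the expected-utility comparison.
# Assumptions (not stated in the thread): fair coin, $1,000,000 prize,
# and a hypothetical $100 cost for answering yes when the coin falls tails.

PRIZE = 1_000_000   # paid on heads, but only to agents predicted to answer yes
COST = 100          # assumed cost of answering yes on tails
P_HEADS = 0.5       # assume a fair coin

def expected_utility(answers_yes: bool) -> float:
    heads_payoff = PRIZE if answers_yes else 0   # heads: prize depends on the predicted answer
    tails_payoff = -COST if answers_yes else 0   # tails: the answer is actually given, and costs something
    return P_HEADS * heads_payoff + (1 - P_HEADS) * tails_payoff

print(expected_utility(answers_yes=True))   # 499950.0 -- "retroactively precommit"
print(expected_utility(answers_yes=False))  # 0.0      -- "answer no, self-modify for next time"
```

Under these assumed payoffs, the “retroactively precommit” policy has the higher expected utility before the coin is flipped, which is the sense in which it is claimed to be the better choice; whether utility that is now known to be unobtainable should still count is exactly the question being pressed above.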