Well, as I said before, my view is that the scenario as given (single-shot, with no precommitment) is not the most helpful hypothetical for designing a decision theory. An iterated version would be more relevant, since we want to design an AI that makes more than one decision. And in the iterated version the tension is largely resolved, because there is a clear motivation to stick with the decision: we still hope the next coin comes down heads.
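To make that concrete, here is a minimal Python sketch of the iterated version. The payoffs ($100 on tails, $10,000 on heads) and the perfect-predictor assumption are the standard counterfactual-mugging setup rather than anything spelled out above:

```python
# Minimal sketch of the iterated version, under assumed payoffs:
# pay $100 on tails; receive $10,000 on heads iff Omega (assumed to be a
# perfect predictor of the agent's fixed policy) predicts the agent pays.
import random

def iterated_mugging(policy_pays: bool, rounds: int, seed: int = 0) -> float:
    """Total payoff of a fixed policy over `rounds` independent coin flips.

    policy_pays=True  -> the agent pays $100 whenever the coin lands tails,
                         so Omega rewards it with $10,000 on heads.
    policy_pays=False -> the agent never pays, so heads is worth nothing.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(rounds):
        if rng.random() < 0.5:                      # heads
            total += 10_000 if policy_pays else 0
        else:                                       # tails
            total -= 100 if policy_pays else 0
    return total

if __name__ == "__main__":
    # Paying nets roughly 0.5 * 10_000 - 0.5 * 100 per round; never paying
    # nets zero. "Stick with the decision" is just the hope that the next
    # flip comes down heads.
    print(iterated_mugging(policy_pays=True, rounds=1_000))
    print(iterated_mugging(policy_pays=False, rounds=1_000))
```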
Are you actually trying to understand? At some point you’ll predictably approach death, and predictably assign a vanishing probability to another offer or coin-flip coming after a certain point. Your present self should know this. Omega knows it by assumption.
I’m pretty sure that decision theories are not designed on that basis. We don’t want an AI to start making different decisions based on the probability of an upcoming decommission. We don’t want it to become nihilistic and stop making decisions because it predicted the heat death of the universe and decided that all paths have zero value. If death is actually tied to the decision in some way, then sure, take that into account, but otherwise, I don’t think a decision theory should have “death is inevitably coming for us all” as a factor.
I’m pretty sure that decision theories are not designed on that basis.
You are wrong. In fact, this is a totally standard thing to consider: "avoid back-chaining defection in games of fixed length" is a recognized problem with various known strategies.
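A toy sketch of the unraveling, with modelling choices that are mine rather than anything stated above: the agent's only reason to pay $100 on tails is its effect on later rounds, Omega perfectly predicts each round's choice, and the horizon is known in advance (the "predictable death" above).

```python
# Backward induction over a known, finite horizon: the last-round refusal
# propagates back to round one. Payoffs ($100 / $10,000) are assumed, as is
# the premise that paying now only matters via later rounds.
from typing import Tuple

def value_to_go(rounds_left: int) -> Tuple[float, bool]:
    """Return (expected value from here onward, whether the agent pays now)."""
    if rounds_left == 0:
        return 0.0, False

    future, _ = value_to_go(rounds_left - 1)

    # Tails branch: paying costs $100 now, and by induction the continuation
    # value `future` is the same either way, so the agent refuses.
    pays = (-100 + future) > (0 + future)   # always False

    # Heads branch: Omega pays $10,000 only if it predicts the agent would
    # pay on tails this round -- which, by the line above, it would not.
    heads_value = (10_000 if pays else 0) + future
    tails_value = (-100 if pays else 0) + future

    return 0.5 * heads_value + 0.5 * tails_value, pays

if __name__ == "__main__":
    # Prints (0.0, False): defection back-chains from the final round.
    print(value_to_go(10))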