> It seems to me that an agent that self-modified to answer “Yes” to this sort of question in the future but said “No” this time
This strategy is not reflectively consistent. From the new TDT PDF:
> A decision algorithm is reflectively inconsistent whenever an agent using that algorithm wishes she possessed a different decision algorithm.
If an agent implemented your strategy, they would change decision algorithms every time they came across a predictor that flipped heads.
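To make the inconsistency concrete, here is a toy expected-value calculation (my own sketch, not from the TDT PDF): it assumes a counterfactual-mugging-style setup with a fair coin, a $100 cost in the asking branch, and a $10,000 prize in the other branch paid only to agents predicted to pay. The specific numbers and function names are illustrative.

```python
# Toy model of reflective (in)consistency in a counterfactual-mugging-style game.
# Assumptions (mine, not from the thread): fair coin, the agent pays COST when
# asked, and receives PRIZE in the other branch iff the predictor expects it to pay.

COST, PRIZE = 100.0, 10_000.0

def expected_value(pays_when_asked: bool) -> float:
    """Per-encounter expected value, evaluated before the coin flip."""
    reward_branch = PRIZE if pays_when_asked else 0.0  # predictor rewards predicted payers
    asking_branch = -COST if pays_when_asked else 0.0  # payers lose COST when asked
    return 0.5 * reward_branch + 0.5 * asking_branch

ev_payer = expected_value(True)     # 0.5 * 10000 + 0.5 * (-100) = 4950
ev_refuser = expected_value(False)  # 0

# The strategy under discussion refuses now and self-modifies to pay later, so in
# the current encounter it scores like the refuser and only later like the payer.
assert ev_payer > ev_refuser
print(f"payer EV per encounter:   {ev_payer}")
print(f"refuser EV per encounter: {ev_refuser}")
# An agent running refuse-now-pay-later can perform this same calculation, so it
# wishes it had been the paying algorithm all along -- which is exactly the
# definition of reflective inconsistency quoted above.
```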