The model is fully specified (again, sorry if this isn’t clear from the post). And in the model we can make perfectly precise the idea of an agent re-assessing their commitments from the perspective of a more-aware prior. Such an agent would disagree that they have lost value by revising their policy. Again, I’m not sure exactly where you are disagreeing with this. (You say something about giving too much weight to a crazy opponent — I’m not sure what “too much” means here.)
Re: conservation of expected evidence, the EA-OMU agent doesn’t expect to increase their chances of facing a crazy opponent. Indeed, they aren’t even aware of the possibility of crazy opponents at the beginning of the game, so I’m not sure what that would mean. (They may be aware that their awareness might grow in the future, but this doesn’t mean they expect their assessments of the expected value of different policies to change.) Maybe you misunderstand what we mean by “unawareness”?
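To make the unawareness point concrete, here is a minimal sketch (all names, numbers, and the re-normalization convention are my own illustration, not the post's actual model): conservation of expected evidence constrains *conditioning* within the agent's current hypothesis space, but a hypothesis the agent is unaware of is simply absent from that space, so awareness growth can change expected values without any Bayesian update being anticipated.

```python
# Hypothetical sketch of awareness growth vs. ordinary updating.
# Hypothesis names and probability masses are illustrative only.

# Before awareness growth: the agent's prior covers only hypotheses
# it can conceive of (variants of "opponent is normal").
prior = {"normal_soft": 0.6, "normal_tough": 0.4}

# Conservation of expected evidence applies to updating *within* this
# space: the expected posterior equals the prior. "Crazy opponent" is
# not in the space at all -- the agent assigns it no probability,
# which is different from assigning it probability zero.

def grow_awareness(prior, new_hyp, mass):
    """Extend the prior to a newly conceived hypothesis, scaling the
    old hypotheses down proportionally. This is one simple convention;
    the EA-OMU construction in the post may differ."""
    grown = {h: p * (1 - mass) for h, p in prior.items()}
    grown[new_hyp] = mass
    return grown

posterior = grow_awareness(prior, "crazy", 0.1)
print(posterior)
# The old hypotheses shrink (0.6 -> 0.54, 0.4 -> 0.36) and the new
# one appears with mass 0.1; nothing here was "expected evidence".
```

The point of the sketch is only that the shift from `prior` to `posterior` is not a Bayesian update the agent could have anticipated; it is a change in what the agent can represent.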
The missing part is the ACTUAL distribution of normal vs. crazy opponents (note that “crazy” is perfectly interchangeable with “normal, but able to commit first”), and the loss that comes from failing to commit against a normal opponent. Or the reasoning by which a normal opponent will read it as a commitment, even though it isn’t truly a commitment if the opponent turns out to be crazy.
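A toy calculation shows why the actual distribution carries the weight here (all payoffs are made-up ultimatum-style numbers I chose for illustration, not anything from the post): whether committing beats yielding depends entirely on the frequency of crazy opponents.

```python
# Illustrative expected-value comparison: commit vs. yield, given an
# actual frequency p_crazy of crazy opponents. Payoff numbers are
# hypothetical placeholders.

def expected_value(commit, p_crazy,
                   commit_vs_normal=7,  # a normal opponent concedes to your commitment
                   commit_vs_crazy=0,   # clashing commitments -> conflict, nothing
                   yield_payoff=3):     # you concede whenever you haven't committed
    if commit:
        return (1 - p_crazy) * commit_vs_normal + p_crazy * commit_vs_crazy
    return yield_payoff

for p in (0.0, 0.3, 0.6):
    print(p, expected_value(True, p), expected_value(False, p))
# With these numbers, committing beats yielding only while
# p_crazy < 4/7 (about 0.57) -- the actual distribution, not the
# agent's awareness of the possibility, decides which policy wins.
```

None of this settles the disagreement, but it is the quantity the "too much weight" complaint seems to be about: the crossover point moves with the real frequency of crazy (or first-committing) opponents.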
Anyway, interesting discussion. I’m not certain I understand where we differ on its applicability, but I think we’ve hashed it out as much as possible. I’ll continue reading and thinking—feel free to respond or rebut, but I’m unlikely to comment further. Thanks!