Well, de facto they always converge, mugging or not, and I’m not going to take as normative a formalism in which they diverge. edit: e.g. I could instead adopt the speed prior; it’s far less insane than its incompetent critics make it out to be, since the code-size penalty for optimizing out the unseen is very significant. Or, if I don’t like the speed prior (and other such “solutions”), I can simply be sane and conclude that we don’t have a working formalism. Prescriptivism is silly when it is unclear how to decide efficiently under bounded computing power.
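To make the point about the speed prior concrete, here is a toy sketch (my own illustration, not from the thread). It assumes a Schmidhuber-style speed prior where a program's weight is roughly its length prior 2^-len discounted by its runtime t, i.e. 2^-(len + log2(t)); the function names and the example numbers are made up for illustration.

```python
import math

def length_weight(program_length_bits):
    """Plain length-based prior: weight 2^-len, ignoring runtime."""
    return 2.0 ** -program_length_bits

def speed_prior_weight(program_length_bits, runtime_steps):
    """Speed-prior-style weight: the length prior 2^-len discounted
    by runtime, i.e. 2^-(len + log2(t)), which equals 2^-len / t."""
    return 2.0 ** -(program_length_bits + math.log2(runtime_steps))

# A "mugger" hypothesis promising an astronomical payoff typically needs
# an astronomically long computation to realize it, so the runtime
# discount crushes its weight even if its code is slightly shorter.
honest = speed_prior_weight(100, 10**6)   # modest runtime
mugger = speed_prior_weight(90, 10**30)   # shorter code, vastly slower
```

Here `honest` keeps a much larger weight than `mugger` despite the longer code, which is the sense in which the runtime penalty blunts mugging-style divergence.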
I can simply be sane and conclude that we don’t have a working formalism.
That’s generally what you do when you find a paradox that you can’t solve. I’m not suggesting that you actually conclude that you can’t make a decision.
Of course. And on the practical level, if I want other agents to provide me with more accurate information (something that has high utility scaled across all potential unlikely scenarios), I must try to make the production of falsehoods unprofitable.