A better decision procedure is possible, or better diagnostics of issues that arise in particular procedures.
Can we move on to more interesting questions?
More specifically, breaking normal operational conditions in obvious ways is useful for highlighting what’s normally broken in much more subtle ways. Mind-reading Omegas that perfectly predict your future decisions are shadowed by human minds that weakly and noisily guess the future decisions of others. If these aren’t accounted for in decision theory, there is a systematic leak in the abstraction, one that won’t be visible directly without either much better tools or thought experiments like these. This effect is ubiquitous and can’t be addressed merely by giving up in the instances where it’s recognizably there; the normal procedure should get good enough to cope.
Is a better decision procedure possible? I don’t see it in this thought experiment. A pointer to such a thing would help a lot.
I agree in many cases, but breaking fundamental decision causality in adversarial ways is not among those cases.
My point is that it’s already broken in essentially this exact way all the time, just much less acutely.
Ahh, that’s an interesting take. I don’t agree AT ALL, which I generally expect means I’m misunderstanding something. In what way are there active agents with more knowledge of my relevant future decisions than I have, and with sufficient predictive ability that even randomization is not effective?
The given thought experiment is effectively either “Omega reads my mind, and finds I don’t have a probability, because I recognize the paradox”, or “Omega takes my statement, then sets the probability to half of what I say”. I’m getting a coinflip, yay!
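To make the second branch concrete, here is a minimal sketch of one possible reading (an assumption on my part, not spelled out in the thread): you announce a probability q, and Omega then makes the event occur with probability q/2. Under that reading the announcement is calibrated only at the fixed point q = 0, and any other announcement is off by exactly q/2.

```python
# Toy reading (an assumption) of "Omega takes my statement, then sets the
# probability to half of what I say": announcing q makes the event occur with
# probability q / 2, so the only self-consistent announcement is q = 0.

def actual_probability(stated: float) -> float:
    """Omega's rule under this toy reading: halve whatever probability is stated."""
    return stated / 2.0

for q in (0.0, 0.25, 0.5, 0.9):
    p = actual_probability(q)
    print(f"stated {q:.2f} -> actual {p:.3f}, calibration error {abs(q - p):.3f}")
```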
When other people have even the slightest sense of your future decisions or beliefs, and you don’t account for that effect (when deciding or reasoning formally, rather than intuitively), your decision procedure is systematically wrong in that respect, built on invalid premises. It’s only slightly wrong, to the extent that others’ edge in guessing your future decisions is weak, but wrong nonetheless, so it’s worth building theory that accounts for the effect. You might take away that predictive ability by “giving up” in various ways, but not when you’re reasoning and deciding routinely, which is exactly when others still have some ability to predict your decisions and beliefs.
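As a rough illustration of “only slightly wrong to the extent that the edge is weak”, here is a toy model of my own construction (not from the thread): matching pennies against an opponent who guesses your actual move correctly with probability 0.5 + eps. A procedure built on the premise that your move can’t be anticipated scores every strategy as exactly break-even, while the true expected payoff is off by an amount proportional to eps.

```python
# Toy model (my own construction): matching pennies against an opponent who
# matches your actual move with probability 0.5 + eps (their realized edge
# against your not-quite-random play). You lose 1 when matched, win 1 otherwise.

def expected_payoff(eps: float) -> float:
    """True expected payoff per round given the opponent's edge eps."""
    p_match = 0.5 + eps
    return (1 - p_match) * 1 + p_match * (-1)   # simplifies to -2 * eps

for eps in (0.0, 0.01, 0.05, 0.2):
    naive = 0.0  # what a theory that assumes "my move can't be guessed" predicts
    print(f"edge {eps:4.2f}: naive prediction {naive:+.3f}, actual {expected_payoff(eps):+.3f}")
```

The error is small when eps is small, which matches the claim: the premise is only slightly off, but it is off systematically rather than by chance.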
I suspect our disagreement may be about whether this post (or most of the Omega-style examples) is a useful extension of that. “Very good prediction that you know about but can’t react to, because you’re a much weaker predictor than Omega” and “weak prediction that you don’t know about, but could counter-model if you bothered” seem like largely different scenarios to me, to the point that I don’t think you can learn much about one from the other.
“weak prediction that you don’t know about, but could counter-model if you bothered”
That’s why I draw the distinction between reasoning intuitively and reasoning formally. If the theory that built the procedure takes it as an explicit premise that this doesn’t happen, the formal procedure won’t allow you to “counter-model if you bothered”. An intuitive workaround doesn’t fix the issue in the theory.