I suspect our disagreement may be over whether this post (or most of the Omega-style examples) is a useful extension of that. "A very good prediction that you know about but can't react to, because you're a much weaker predictor than Omega" and "a weak prediction that you don't know about, but could counter-model if you bothered" seem like largely different scenarios to me — to the point that I don't think you can learn much about one from the other.
“weak prediction that you don’t know about, but could counter-model if you bothered”
That's why the distinction between reasoning intuitively and reasoning formally matters. If it's an explicit premise of the theory behind the procedure that this doesn't happen, the formal procedure won't allow you to "counter-model if you bothered". An intuitive workaround doesn't fix the issue in the theory.