Taboo “rationally”.
I think the question you want is more like: “how can one have well-calibrated strong probabilities?” Or maybe “correct” ones. I don’t think you need the word “rationally” here, and it’s almost never helpful at the object level. It’s a tool for meta-level discussions: training habits, discussing patterns, and so on.
To answer the object-level question… well, do you have well-calibrated beliefs in other domains? Did you test that? What do you think you know about your belief calibration, and how do you think you know it?
Personally, I think you mostly get there by looking at the argument structure. You can start with “well, I don’t know anything about proposition P, so it gets a 50%”, but as soon as you start looking at the details that probability shifts. What paths lead there, and what paths don’t? If you keep coming up with complex conjunctive arguments against, and multiple-path disjunctive arguments for, the probability rapidly goes up, and can go up quite high. And that’s true even if you don’t know much about the details of those arguments, as long as you have some confidence that the process producing them is only somewhat biased. When you do have the ability to evaluate those arguments in detail, you can get fairly high confidence.
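The conjunctive-versus-disjunctive point can be made concrete with a toy calculation. This is only a sketch: the independence assumption and every number below are invented for illustration, not taken from any actual argument on the topic.

```python
# Toy model: how argument structure moves a probability estimate.
# Assumes the paths/steps are independent, which real arguments rarely are.

def disjunction(ps):
    """P(at least one of several independent paths works)."""
    none_work = 1.0
    for p in ps:
        none_work *= (1.0 - p)
    return 1.0 - none_work

def conjunction(ps):
    """P(every step of a multi-step argument holds)."""
    all_hold = 1.0
    for p in ps:
        all_hold *= p
    return all_hold

# Three individually shaky independent paths to the outcome:
p_for = disjunction([0.4, 0.3, 0.25])                  # ~0.69

# One counterargument that needs five steps to all go through:
p_against = conjunction([0.8, 0.7, 0.75, 0.6, 0.65])   # ~0.16

print(p_for, p_against)
```

The asymmetry is the point: disjunctive support compounds toward 1, while conjunctive objections decay toward 0, even when each individual step looks reasonably likely.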
That said, my current way of expressing my confidence on this topic is more like “on my main line scenarios...” or “conditional on no near-term giant surprises...” or “if we keep on with business as usual...”. I like the conditional predictions a lot more, partly because I feel more confident in them and partly because conditional predictions are the correct way to provide inputs to policy decisions. Different policies have different results, even if I’m not confident in our ability to enact the good ones.
I used that description very much intentionally. As in, use your best Bayesian estimate, formal or intuitive.
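For the “formal” end of that spectrum, the update itself is just Bayes’ rule. The numbers here are placeholders, not estimates from this discussion.

```python
# Minimal Bayes update: the "formal" version of a best Bayesian estimate.
# All probabilities below are illustrative placeholders.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(H | E) given a prior and the two likelihoods of E."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

# Start at 50%, observe evidence four times likelier if H is true:
posterior = bayes_update(0.5, 0.8, 0.2)
print(posterior)  # 0.8
```

The intuitive version is the same operation done roughly in your head: ask how much likelier the observation is under one hypothesis than the other, and shift accordingly.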
As to the object level, “pre-paradigmatic” is essential. The field is full of unknown unknowns. Or, as you say, “conditional on no near-term giant surprises...” There have been “giant surprises” in both directions recently, and there will likely be more soon. It seems folly to be very confident in any specific outcome at this point.