I think any time you say “what is the probability that …”, as if it were an objective fact or measure rather than an agent’s tool for prediction (that is, not framed as “what is this agent’s probability assignment over …”), you’re somewhat outside of a Bayesian framework.
In my view, those are incomplete propositions: your probability assignment may be a convenience in making predictions, but it’s not directly updatable. Bayesian calculations are about how to predict evidence, and how to update on that evidence. “What is the chance that this decider can solve the halting problem for this program in that timeframe?” is something that can be updated with evidence. Likewise “what is the chance that I will measure this constant next week and find it off by more than 10% from last week’s value?”
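To make that concrete, here is a minimal sketch of the kind of update I mean, for the decider example. All of the numbers are made up for illustration; the point is only the mechanics of conditioning a prediction on observed evidence:

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
# H = "the decider solves the halting problem for this program in the timeframe"
# E = some observable evidence, e.g. the decider making early partial progress

prior = 0.5            # agent's prior assignment to H (illustrative)
p_e_given_h = 0.9      # chance of seeing the evidence if H is true (illustrative)
p_e_given_not_h = 0.2  # chance of seeing it anyway if H is false (illustrative)

# Total probability of the evidence under the agent's model
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Posterior assignment after observing the evidence
posterior = p_e_given_h * prior / p_e
print(round(posterior, 3))  # 0.818
```

The proposition is updatable precisely because it cashes out in an observation the agent can condition on; an unobservable claim gives you no P(E | H) to work with.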
“What is true of the universe, in an unobservable way” is not really a question for Bayes-style probability calculations. That doesn’t keep agents from having beliefs about it; it just means there’s no general mechanism for correctly making those beliefs better.