The obviousness of Bayesian conditionalization seems beside the point, which is that it constrains beliefs and need not be derived from the set of decisions that seem reasonable.
Your link only seems to suggest that using Bayesian conditionalization in the context of a poor decision theory doesn’t give you the results you want, which doesn’t say much about Bayesian conditionalization itself. Am I missing something?
“So I’m not quite sure what you’re asking here...”
It is possible for things to be more important than an unquantified increase in productivity on anthropics. I’m also curious whether you think it has other implications.
I think the important point is that Bayesian conditionalization is a consequence of a decision theory that, when stated naturally, does not invoke Bayesian conditionalization.
That being:
Consider the set of all strategies mapping situations to actions. Play the one which maximizes your expected utility from a state of no information.
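To make that concrete, here is a minimal sketch (all the probabilities, observations, and payoffs below are made up for illustration): enumerate every strategy mapping observations to actions, score each by its expected utility under the prior alone, and check that the winning strategy acts exactly as an agent who Bayes-conditions on the observation would.

```python
from itertools import product

worlds = {"heads": 0.3, "tails": 0.7}             # prior over states
observations = ["sees_red", "sees_blue"]           # possible situations
# P(observation | world), hypothetical numbers
likelihood = {
    ("heads", "sees_red"): 0.9, ("heads", "sees_blue"): 0.1,
    ("tails", "sees_red"): 0.2, ("tails", "sees_blue"): 0.8,
}
actions = ["bet_heads", "bet_tails"]
# utility(world, action): payoff for betting on the true state
utility = {
    ("heads", "bet_heads"): 1, ("heads", "bet_tails"): 0,
    ("tails", "bet_heads"): 0, ("tails", "bet_tails"): 1,
}

def prior_expected_utility(strategy):
    """Expected utility of a strategy (observation -> action) from a state of no information."""
    return sum(
        worlds[w] * likelihood[(w, o)] * utility[(w, strategy[o])]
        for w in worlds for o in observations
    )

# The set of all strategies: one action chosen for each possible observation.
strategies = [dict(zip(observations, acts))
              for acts in product(actions, repeat=len(observations))]
best = max(strategies, key=prior_expected_utility)

def bayes_action(obs):
    """What an agent who Bayes-conditions on the observation would do."""
    posterior = {w: worlds[w] * likelihood[(w, obs)] for w in worlds}
    return max(actions, key=lambda a: sum(posterior[w] * utility[(w, a)] for w in worlds))

for obs in observations:
    assert best[obs] == bayes_action(obs)
print(best)   # {'sees_red': 'bet_heads', 'sees_blue': 'bet_tails'}
```

The ex-ante-optimal strategy and the conditionalizing agent agree on every observation, which is the sense in which conditionalization falls out of the decision theory without being assumed.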
Bayesian conditionalization can be derived from Dutch book arguments, which are (hypothetical) decisions...
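For what it's worth, here is a toy numerical sketch of the standard diachronic Dutch book (all probabilities are hypothetical): an agent who plans to move to a credence other than the Bayesian posterior accepts a package of bets, each fair by their own lights at the time it is offered, and ends up with a guaranteed loss.

```python
P_E = 0.5          # prior probability of the evidence E
p = 0.8            # Bayesian posterior P(H | E)
q = 0.6            # the agent's planned non-Bayesian credence in H after seeing E (q < p)
P_HE = p * P_E     # prior probability of H-and-E

# Bets bought at t0, each priced as fair by the agent's prior:
#   bet 1: pays $1 if H-and-E,  price P(H&E)
#   bet 2: pays $q if not-E,    price q * P(not-E)
cost_t0 = P_HE + q * (1 - P_E)

def net(world):
    """Agent's total net payoff in a given world: 'HE', 'notH_E', or 'notE'."""
    payoff = 0.0
    if world == "HE":
        payoff += 1.0                    # bet 1 pays out
    if world == "notE":
        payoff += q                      # bet 2 pays out
    if world in ("HE", "notH_E"):
        # E occurred: at t1 the bookie buys a $1-on-H ticket from the agent
        # for $q, which the agent now regards as fair; the agent must pay $1
        # if H turns out true.
        payoff += q
        if world == "HE":
            payoff -= 1.0
    return payoff - cost_t0

for world in ("HE", "notH_E", "notE"):
    print(world, round(net(world), 4))   # each prints -(p - q) * P(E) = -0.1
```

The agent loses (p − q)·P(E) in every possible world, and the construction runs in the other direction if q > p; only q = p, i.e. conditionalization, escapes the book.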