I agree it is troubling if your beliefs don’t have any consequences for conceivable decisions, though as far as I know, not fatal.
This alone doesn’t seem like a reason to study anthropics alongside decision theory any more than it is a reason to study biology alongside decision theory. Most of how to make decisions is agreed upon, so it is rare for a belief to have consequences only under a certain decision theory. There may be other reasons to consider them together, however.
As far as the second type of constraint—choosing theories based on their consequences—I don’t know why I would expect my intuitions about which decisions I should make to be that reliable, relative to my knowledge and intuitions about information theory, probability theory, logic, etc. I seem much more motivated to take certain actions than to hold correct abstract beliefs about probability (I’d be more wary of abstract beliefs about traditionally emotive topics such as love or gods). If I had a candidate theory which suggested ‘unreasonable decisions’, I would probably keep looking, but mostly because I am (embarrassingly) motivated to justify certain decisions, not because of the small amount of evidence my intuitions can give me on a topic they are probably not honed for.
I’m not sure why you think there are no constraints on beliefs unless they are paired with a decision theory. Could you elaborate? For example, why is Bayesian conditionalization not a constraint on the set of beliefs you hold? And could you give me an example of a legitimate constraint?
I haven’t participated in UDT-anthropics discussions because working on my current projects seems more productive than looking into all of the research others are doing on topics which may prove useful. If you think this warrants more attention though, I’m listening—what are the most important implications of getting the right decision theory, other than building super-AIs?
I don’t want to spend too much time trying to convince you, since I think people should mostly follow their own instincts (if they have strong instincts) when choosing what research directions to pursue. I was mainly curious if you had already looked into UDT and found it wanting for some reason. But I’ll try to answer your questions.
“why is Bayesian conditionalization not a constraint on the set of beliefs you hold?”
What justifies Bayesian conditionalization? Is Bayesian conditionalization so obviously correct that it should be considered an axiom?
It turns out that Bayesian updating is appropriate only under certain conditions (which in particular are not satisfied in situations with indexical uncertainty), but this is not easy to see except in the context of decision theory. See Why (and why not) Bayesian Updating?
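To illustrate the sort of failure involved, here is a rough sketch using the standard absent-minded driver example of Piccione and Rubinstein (the example and numbers are mine, not necessarily the ones in that post): under indexical uncertainty, “conditionalize, then maximize expected utility” can come apart from the ex-ante optimal plan.

```python
# The absent-minded driver: two indistinguishable exits; exiting at the
# first pays 0, exiting at the second pays 4, continuing past both pays 1.
# A (mixed) strategy is a single probability p of continuing at any exit.

def planning_value(p):
    """Ex-ante expected payoff of 'continue with probability p at each exit'."""
    return 4 * p * (1 - p) + 1 * p * p

grid = [i / 1000 for i in range(1001)]
p_plan = max(grid, key=planning_value)          # ~2/3

# At an exit, one standard self-locating credence is alpha = 1/(1 + p):
# the first exit is reached for sure, the second with probability p.
# Suppose the driver updates this way and then re-optimizes the
# continuation probability q (to be used at whichever exit he is at).
def updated_value(q, p):
    alpha = 1 / (1 + p)                         # credence: "this is the first exit"
    at_first = 4 * q * (1 - q) + 1 * q * q      # continue now, then face the second exit
    at_second = 4 * (1 - q) + 1 * q             # exiting now pays 4, continuing pays 1
    return alpha * at_first + (1 - alpha) * at_second

p_replan = max(grid, key=lambda q: updated_value(q, p_plan))

print(p_plan, p_replan)   # ~0.667 vs ~0.333: updating and re-deciding is
                          # inconsistent with the plan the driver started from
```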
“what are the most important implications of getting the right decision theory, other than building super-AIs?”
I’ve already mentioned that I found it productive to consider anthropic reasoning from a decision theoretic perspective (btw, that, not super-AIs, was in fact my original motivation for studying decision theory). So I’m not quite sure what you’re asking here...
The obviousness of Bayesian conditionalization seems beside the point, which is that it constrains beliefs and need not be derived from the set of decisions that seem reasonable.
Your link seems only to suggest that using Bayesian conditionalization in the context of a poor decision theory doesn’t give you the results you want, which doesn’t say much about Bayesian conditionalization itself. Am I missing something?
“So I’m not quite sure what you’re asking here...”
It is possible for something to be more important than an unquantified increase in productivity on anthropics. I’m also curious whether you think getting the right decision theory has other implications.
I think the important point is that Bayesian conditionalization is a consequence of a decision theory that, naturally stated, does not invoke Bayesian conditionalization.
That being:
Consider the set of all strategies mapping situations to actions. Play the one which maximizes your expected utility from a state of no information.
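As a toy illustration of that claim (the names and numbers below are mine, purely for concreteness): enumerating every strategy from observations to actions and picking the one with the highest prior expected utility prescribes, after each observation, exactly the action that an agent who conditionalizes on that observation and then maximizes posterior expected utility would take (at least in ordinary situations with no indexical uncertainty).

```python
from itertools import product

# Illustrative toy setup: two hypotheses, two observations, two actions.
prior = {"h1": 0.3, "h2": 0.7}                                   # P(hypothesis)
likelihood = {("obs_a", "h1"): 0.8, ("obs_b", "h1"): 0.2,
              ("obs_a", "h2"): 0.1, ("obs_b", "h2"): 0.9}        # P(observation | hypothesis)
utility = {("act_x", "h1"): 10, ("act_y", "h1"): 0,
           ("act_x", "h2"): 0,  ("act_y", "h2"): 4}              # U(action, hypothesis)
observations = ["obs_a", "obs_b"]
actions = ["act_x", "act_y"]

# 1. From a state of no information: pick the best strategy (observation -> action)
#    by prior expected utility, without ever conditionalizing.
def expected_utility(strategy):
    return sum(prior[h] * likelihood[(o, h)] * utility[(strategy[o], h)]
               for h in prior for o in observations)

strategies = [dict(zip(observations, acts))
              for acts in product(actions, repeat=len(observations))]
best_strategy = max(strategies, key=expected_utility)

# 2. Conditionalize on each observation, then pick the best action.
def best_action_after_updating(o):
    posterior = {h: prior[h] * likelihood[(o, h)] for h in prior}   # unnormalized is fine
    return max(actions, key=lambda a: sum(posterior[h] * utility[(a, h)] for h in prior))

# The two procedures prescribe the same behaviour.
assert all(best_strategy[o] == best_action_after_updating(o) for o in observations)
print(best_strategy)    # {'obs_a': 'act_x', 'obs_b': 'act_y'}
```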
“I’m not sure why you think there are no constraints on beliefs unless they are paired with a decision theory.”
Because any change to your probability theory can be undone by a change in decision theory, resulting in the same behaviour in the end. The behaviour is where everything pays rent, so it’s the combination that matters :-)
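To see the trade-off concretely (notation mine): an expected-utility maximizer’s choices are unchanged if the probabilities are swapped out and the utilities rescaled to compensate,

$$\operatorname*{arg\,max}_a \sum_w P(w)\,U(a,w) \;=\; \operatorname*{arg\,max}_a \sum_w Q(w)\,\frac{P(w)}{Q(w)}\,U(a,w),$$

so any $Q$ with the same support as $P$, paired with the rescaled utilities $U'(a,w) = \frac{P(w)}{Q(w)}\,U(a,w)$, produces exactly the same behaviour.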
Bayesian conditionalization can be derived from Dutch book arguments, which are (hypothetical) decisions...
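For concreteness, a sketch of a standard diachronic Dutch book construction (notation mine; it covers the case where the planned post-update credence is too low, the opposite case being symmetric). Write $p = P(H \mid E)$ and $e = P(E)$, and suppose that on learning $E$ the agent will adopt credence $q < p$ in $H$. Each of the following transactions is fair by the agent’s credences at the time it is offered:

$$\begin{aligned}
&\text{now: buy a conditional bet on } H \text{ given } E \text{ with stake } x\text{: pay } p\,x,\ \text{win } x \text{ if } H \wedge E,\ \text{price refunded if } \neg E;\\
&\text{now: sell a bet on } \neg E\text{: receive } (1-e)\,z \text{ now},\ \text{pay } z \text{ if } \neg E;\\
&\text{after learning } E\text{: sell a bet on } H\text{: receive } q\,x,\ \text{pay } x \text{ if } H.
\end{aligned}$$

The agent’s net payoff is $-e\,z$ if $\neg E$, and $(q-p)\,x + (1-e)\,z$ if $E$ (whichever way $H$ turns out). Choosing $z$ small enough that $(1-e)\,z < (p-q)\,x$ makes every outcome a loss, so an agent whose planned update deviates from conditionalization is vulnerable to a sure loss.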