I wrote in 2001:

Anthropic reasoning can’t exist apart from a decision theory, otherwise there is no constraint on what reasoning process you can use. You might as well believe anything if it has no effect on your actions.
Think of it as an extension of Eliezer’s “make beliefs pay rent in anticipated experiences”. I think beliefs should pay rent in decision making.
Katja, I’m not sure if this is something that has persuasive power for you, but it’s an idea that has brought a lot of clarity to me regarding anthropic reasoning and has led to the UDT approach to anthropics, which several other LWers also seem to find promising. I believe anthropic reasoning is a specialization of yours, but you have mostly stayed out of the UDT-anthropics discussions. May I ask why?
Are you saying that beliefs should be constrained by having to give rise to the decisions that seem reasonable, or should be constrained by having to give rise to some decisions at all?
You could start with the latter. But surely if you have a candidate anthropic reasoning theory, and the only way to fit it into a decision theory produces unreasonable decisions, you’d want to keep looking?
I agree it is troubling if your beliefs don’t have any consequences for conceivable decisions, though as far as I know, not fatal.
This alone doesn’t seem any more of a reason to study anthropics alongside decision theory than to study biology alongside decision theory. Most of how to make decisions is agreed upon, so it is rare that a belief would only have consequences under a certain decision theory. There may be other reasons to consider them together, however.
As far as the second type of constraint—choosing theories based on their consequences—I don’t know why I would expect my intuitions about which decisions I should make to be that reliable relative to my knowledge and intuitions about information theory, probability theory, logic, etc. It seems I’m much more motivated to take certain actions than to have correct abstract beliefs about probability (I’d be more wary of abstract beliefs about traditionally emotive topics such as love or gods). If I had a candidate theory which suggested ‘unreasonable decisions’, I would probably keep looking, but this is mostly because I am (embarrassingly) motivated to justify certain decisions, not because of the small amount of evidence that my intuitions could give me on a topic they are probably not honed for.
I’m not sure why you think there are no constraints on beliefs unless they are paired with a decision theory. Could you elaborate? e.g. why is Bayesian conditionalization not a constraint on the set of beliefs you hold? Could you give me an example of a legitimate constraint?
I haven’t participated in UDT-anthropics discussions because working on my current projects seems more productive than looking into all of the research others are doing on topics which may prove useful. If you think this warrants more attention though, I’m listening—what are the most important implications of getting the right decision theory, other than building super-AIs?
I don’t want to spend too much time trying to convince you, since I think people should mostly follow their own instincts (if they have strong instincts) when choosing what research directions to pursue. I was mainly curious if you had already looked into UDT and found it wanting for some reason. But I’ll try to answer your questions.
“why is Bayesian conditionalization not a constraint on the set of beliefs you hold?”
What justifies Bayesian conditionalization? Is Bayesian conditionalization so obviously correct that it should be considered an axiom?
It turns out that Bayesian updating is appropriate only under certain conditions (which in particular are not satisfied in situations with indexical uncertainty), but this is not easy to see except in the context of decision theory. See “Why (and why not) Bayesian Updating?”
“what are the most important implications of getting the right decision theory, other than building super-AIs?”
I’ve already mentioned that I found it productive to consider anthropic reasoning from a decision theoretic perspective (btw, that, not super-AIs, was in fact my original motivation for studying decision theory). So I’m not quite sure what you’re asking here...
The obviousness of Bayesian conditionalization seems beside the point, which is that it constrains beliefs and need not be derived from the set of decisions that seem reasonable.
Your link seems to suggest only that using Bayesian conditionalization in the context of a poor decision theory doesn’t give you the results you want, which doesn’t say much about Bayesian conditionalization itself. Am I missing something?
“So I’m not quite sure what you’re asking here...”
It is possible for things to be more important than an unquantified increase in productivity on anthropics. I’m also curious whether you think it has other implications.
I think the important point is that Bayesian conditionalization is a consequence of a decision theory that, naturally stated, does not invoke Bayesian conditionalization.
That being:
Consider the set of all strategies mapping situations to actions. Play the one which maximizes your expected utility from a state of no information.
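To make that concrete, here is a minimal toy sketch (my own illustration, with made-up worlds, signals, and payoffs, not something from the thread) of that rule in an ordinary non-indexical setting. Enumerating every strategy and scoring it by expected utility from a state of no information picks out exactly the same actions as updating on the observation and then maximizing posterior expected utility:

```python
from itertools import product

# Hypothetical toy problem: two equally likely worlds, a noisy signal about
# which world you are in, and two bets whose payoff depends on the true world.
worlds = ["A", "B"]
prior = {"A": 0.5, "B": 0.5}
observations = ["a", "b"]
likelihood = {("a", "A"): 0.8, ("b", "A"): 0.2,   # P(obs | world): the signal
              ("a", "B"): 0.2, ("b", "B"): 0.8}   # is right 80% of the time
actions = ["bet_A", "bet_B"]

def utility(world, action):
    return 1.0 if action == "bet_" + world else 0.0

# 1. The rule above: enumerate every strategy (map observation -> action) and
#    score it by expected utility from a state of no information.
def strategy_value(strategy):
    return sum(prior[w] * likelihood[(o, w)] * utility(w, strategy[o])
               for w in worlds for o in observations)

strategies = [dict(zip(observations, choice))
              for choice in product(actions, repeat=len(observations))]
best_strategy = max(strategies, key=strategy_value)

# 2. Bayesian conditionalization: update on the observation, then pick the
#    action with the highest posterior expected utility.
def bayes_action(obs):
    unnorm = {w: prior[w] * likelihood[(obs, w)] for w in worlds}
    z = sum(unnorm.values())
    posterior = {w: p / z for w, p in unnorm.items()}
    return max(actions, key=lambda a: sum(posterior[w] * utility(w, a)
                                          for w in worlds))

# In this ordinary setting the two procedures agree on every observation.
for obs in observations:
    assert best_strategy[obs] == bayes_action(obs)
    print(obs, "->", best_strategy[obs])
```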
“I’m not sure why you think there are no constraints on beliefs unless they are paired with a decision theory.”
Because any change to your probability theory can be undone by a change in decision theory, resulting in the same behaviour in the end. The behaviour is where everything pays rent, so it’s the combination that matters :-)
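A toy illustration of that trade-off (my own made-up numbers, not from the comment): distort the probabilities by any positive factor, make the matching change to the decision rule, and the resulting behaviour is identical, so only the belief/decision-theory pair is pinned down by what the agent does:

```python
P = {"w1": 0.7, "w2": 0.3}              # original beliefs
g = {"w1": 2.0, "w2": 0.5}              # arbitrary positive distortion
Z = sum(P[w] * g[w] for w in P)
Q = {w: P[w] * g[w] / Z for w in P}     # distorted beliefs (still sum to 1)

def choose_standard(beliefs, utility, actions):
    # ordinary expected-utility maximization
    return max(actions, key=lambda a: sum(beliefs[w] * utility(w, a)
                                          for w in beliefs))

def choose_reweighted(beliefs, utility, actions):
    # modified decision rule that undoes the distortion g; since Q(w)/g(w) is
    # proportional to P(w), the ranking of actions is identical
    return max(actions, key=lambda a: sum(beliefs[w] * utility(w, a) / g[w]
                                          for w in beliefs))

actions = ["left", "right"]
utility = lambda w, a: {("w1", "left"): 1, ("w1", "right"): 0,
                        ("w2", "left"): 0, ("w2", "right"): 5}[(w, a)]

# (P, standard rule) and (Q, reweighted rule) produce the same behaviour.
print(choose_standard(P, utility, actions))     # -> right
print(choose_reweighted(Q, utility, actions))   # -> right
```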
I used to have the same viewpoint as your 2001 quote, but I think I’m giving it up. CDT, EDT, and TDT theorists agree that a coin flip is 50-50, so probability in general doesn’t seem to be too dependent on decision theory.
I still agree that when you’re confused, retreating to decisions helps. It can help you decide that it’s okay to walk in the garage with the invisible dragon, and that it’s okay for your friends to head out on a space expedition beyond the cosmological horizon. Once you’ve decided this, however, ideas like “there is no dragon” and “my friends still exist” kinda drop out of the analysis, and you can have your (non)existing invisibles back.
In the case of indexical probabilities, it’s less obvious what it even means to say “I am this copy”, but I don’t think it’s nonsense. I changed my mind when JGWeissman mentioned that all of the situations where you decide to say “1/2” in the Sleeping Beauty problem are ones where you have precisely enough evidence to shift your prior from 2/3 to 1/2.
Bayesian conditionalization can be derived from Dutch book arguments, which are (hypothetical) decisions...
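For anyone who wants that spelled out, here is a minimal numeric sketch of the diachronic Dutch book (my own illustration, with made-up credences): an agent who plans to update to anything other than the conditional probability will accept a package of bets, each fair by the agent’s own stated credences at the time it is offered, that loses money in every possible outcome:

```python
# Hypothetical credences: P(E) = 0.5, P(H and E) = 0.3, so P(H | E) = 0.6,
# but the agent plans to adopt credence q = 0.4 in H after learning E.
p_E = 0.5
p_HE = 0.3
p_H_given_E = p_HE / p_E          # 0.6
q = 0.4                           # planned non-Bayesian posterior (q < P(H|E))

def agent_payoff(E, H):
    total = 0.0
    # Bet 1 (made now): conditional bet on H given E, bought at price P(H|E);
    # called off (price refunded) if E does not happen.
    if E:
        total += (1.0 if H else 0.0) - p_H_given_E
    # Bet 2 (made now): a bet paying (P(H|E) - q) if E, bought at the price
    # the agent considers fair, P(E) * (P(H|E) - q).
    stake = p_H_given_E - q
    total += (stake if E else 0.0) - p_E * stake
    # Bet 3 (made after learning E, at the agent's new price q): the agent
    # sells a bet on H for q, i.e. receives q and pays 1 if H.
    if E:
        total += q - (1.0 if H else 0.0)
    return total

# The agent comes out behind in every possible state of the world.
for E, H in [(True, True), (True, False), (False, False)]:
    print(E, H, round(agent_payoff(E, H), 3))  # always -P(E)*(P(H|E)-q) = -0.1
```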
Just because it all adds up to normalcy doesn’t mean that it is irrelevant.
It is more elegant to just have a decision theory than to have a decision theory and a rule for updating, and it deals better with corner cases.