It is an anthropic problem. Agents who don’t get to make decisions, by definition, don’t really exist in the ontology of decision theory. As a decision-theoretic agent, being told you are not the decider is equivalent to dying.
Or, more precisely, it is equivalent to falling into a crack in spacetime without Amelia Pond having a crush on you. ;)
Depends on what you mean by “anthropic problem”. The first Google result for that term right now is this post, so the term doesn’t seem to have a widely agreed-upon meaning, though there is some interesting discussion on Wikipedia.
Maybe we could distinguish:
“Anthropic reasoning”, where your reasoning needs to take into account not only the facts you observed (i.e. standard Bayesian reasoning) but also the fact that you are there to make the decision, period.
“Anthropic scenarios” (ugh), where the existence of agents comes into account (like the Sleeping Beauty problem, our universe, etc.)
Anthropic scenarios feature outlandish situations (teleporters, Sleeping Beauty) or are somewhat hard to reproduce (the existence of our universe). So constructing scenarios that aren’t outlandish but still require anthropic reasoning is nice for intuition (especially in an area like this, where everybody’s intuition starts breaking down), even if it doesn’t change anything from a pure decision-theory point of view.
I’m not very happy with this decomposition; it seems to me “is this an anthropic problem?” can be answered with “Well, it does require anthropic reasoning, but it doesn’t require outlandish scenarios like most similar problems do”, but there may be a better way of putting it.
It is a nice feature of Psy-kosh’s problem that it pumps the confusing intuitions we see in scenarios like the Sleeping Beauty problem without recourse to memory-erasing drugs or teleporters; I think it tells us something important about this class of problem. But mathematically the problem is equivalent to one where the coin flip doesn’t make nine people deciders but instead copies you nine times, so I don’t think there is a good justification for labeling these problems differently.
The interesting question is what this example tells us about the nature of this class of problem, and I’m having trouble putting my finger on just what that is.
Right. That’s the question I wanted people to answer, not just to solve the object-level problem (UDT solves it just fine).
So I have an idea, which is either going to make perfect sense to people right away or is going to have to wait for a post from me (on stuff I’ve said I’d write a post on forever). The short and sweet of it is: there is only one decision-theoretic agent in this problem (never nine), and that agent gets no new information to update on. I need to sleep, but I’ll start writing in the morning.
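For readers who want the clash of intuitions spelled out numerically, here is a minimal sketch in Python. It assumes the payoff structure usually quoted for Psy-kosh’s problem (deciders unanimously saying “yea” pays $1000 if the coin was tails, $100 if heads; saying “nay” pays $700 either way); those numbers are not stated in this thread and are only an illustration.

```python
# Sketch of the two expected-value calculations behind the thread.
# Payoffs are an assumption (the usual $1000 / $100 / $700 setup),
# not something stated in the comments above.

P_HEADS = 0.5  # fair coin: heads -> one decider, tails -> nine deciders

YEA_TAILS, YEA_HEADS, NAY = 1000, 100, 700  # assumed payoffs to the charity

# Perspective 1: the single ex-ante agent, who never updates on being a decider.
ev_yea_ex_ante = (1 - P_HEADS) * YEA_TAILS + P_HEADS * YEA_HEADS  # 550
ev_nay_ex_ante = NAY                                              # 700

# Perspective 2: naively updating on "I am a decider".
# If tails, 9 of 10 people are deciders; if heads, only 1 of 10.
p_tails_given_decider = (0.5 * 0.9) / (0.5 * 0.9 + 0.5 * 0.1)     # 0.9
ev_yea_updated = (p_tails_given_decider * YEA_TAILS
                  + (1 - p_tails_given_decider) * YEA_HEADS)      # 910
ev_nay_updated = NAY                                              # 700

print(f"ex-ante: yea={ev_yea_ex_ante:.0f}, nay={ev_nay_ex_ante:.0f}")
print(f"updated: yea={ev_yea_updated:.0f}, nay={ev_nay_updated:.0f}")
```

On these assumed numbers, the non-updating perspective prefers “nay” (700 > 550) while naive updating prefers “yea” (910 > 700), which is exactly the clash the thread is pointing at.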