I propose that the two probabilities should be used for questions from the respective perspectives: for my decisions maximizing my own payoff, use 0.9; for a coordination strategy prescribing the actions of all participants with the goal of maximizing the overall payoff, use 0.5. In fact, the paradox started with the coordination strategy, from an objective viewpoint, when talking about the pre-game plan, but it later switched to the personal strategy using 0.9.
I think I mostly agree with that.
I understand you do not endorse this perspective-based reasoning.
I agree that there can be valid differences in people's perspectives. But I reduce them to differences in the possible events that people can or can't observe. This reduces all the mysterious anthropic stuff to simple probability theory and, I believe, makes the reasoning clearer.
If you say they are based on two mathematical models that are both valid, then after you drew a green ball, if someone asks for your probability that the urn is the mostly-green one, what is your answer? 0.5 and 0.9? It depends?
As I've written in the post, my personal probability is 0.9. More specifically, it's the probability that the coin is Heads, conditional on me seeing green:

P(Heads|ISeeGreen) = P(ISeeGreen|Heads)P(Heads)/P(ISeeGreen) = 0.9

But the probability that the coin is Heads, conditional on any person seeing green, is 0.5:

P(Heads|AnySeesGreen) = P(AnySeesGreen|Heads)P(Heads)/P(AnySeesGreen) = 0.5
This is because while I may or may not see green, someone from the group always will. I in particular and "any person" have different possible events that we can observe, and thus different probabilities for these events. If we had the same possible events, for example because I'm the only person in the experiment, then the probabilities would be the same.
And then you just check which probability is relevant to which betting scheme. In this case it's the probability for any person, not for me.
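To make this concrete, here is a minimal Monte Carlo sketch. The exact setup isn't restated in this thread, so the numbers below are my illustrative assumptions rather than taken from the post: ten participants; on Heads the urn holds nine green and one red ball, on Tails one green and nine red; each participant draws one ball without replacement. Conditioning on "I drew green" gives roughly 0.9, while conditioning on "at least one person drew green" gives roughly 0.5:

```python
import random

def simulate(n_runs: int = 100_000, n: int = 10) -> None:
    """Estimate P(Heads | I see green) and P(Heads | any sees green)."""
    i_green = heads_and_i_green = 0
    any_green = heads_and_any_green = 0
    for _ in range(n_runs):
        heads = random.random() < 0.5
        greens = n - 1 if heads else 1          # assumed urn composition
        balls = ["green"] * greens + ["red"] * (n - greens)
        random.shuffle(balls)                   # draws without replacement
        if balls[0] == "green":                 # "I" am participant 0
            i_green += 1
            heads_and_i_green += heads
        if "green" in balls:                    # always true in this setup
            any_green += 1
            heads_and_any_green += heads
    print("P(Heads | I see green)    ~", heads_and_i_green / i_green)
    print("P(Heads | any sees green) ~", heads_and_any_green / any_green)

simulate()  # prints roughly 0.9 and 0.5
```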
Furthermore, using whatever probability best matches the betting scheme seems to me a convenient way of avoiding undesirable answers without committing to a hard methodology. It is akin to endorsing SSA or SIA situationally to get the least paradoxical answer for each individual question.
Of course it would look like that from inside the SSA vs SIA framework. But that’s because the framework is stupid.
Imagine there is a passionate disagreement about what color the sky is. Some people claim that it's blue, while other people claim that it's black. There is a significant amount of evidence supporting both sides. For example, a group of blue-sky supporters went outside during the day and recorded that the sky is blue. Then a group of black-sky supporters did the same during the night and recorded that the sky is black. Both groups argue that if the other group had run their experiment from the other side of the planet, the result would have been different. With time, two theories are developed: the Constant Day Assumption and the Constant Night Assumption. Followers of CDA claim that one should reason about the color of the sky as if it's day, while followers of CNA claim that one should reason about the color of the sky as if it's night. Different experiments point in different directions, and both sides claim that, while they indeed have to bite some bullets, at least it's not as bad as with the other side.
Now suppose someone comes forward and claims that sometimes the sky is blue and sometimes it's black. That the right behaviour is not to always assume that it's either day or night, but to check which is currently true. That when it's day one should reason as if it's day, and thus that the sky is blue, while when it's night one should reason as if it's night, and thus that the sky is black. Surprisingly, this new approach fixes all the problems with both CDA and CNA.
But isn't that an unprincipled position? Just a refusal to commit to one of the theories, switching between them?
No, of course not! It's just how we solve every other question: we actually look at what's going on before making assumptions! It's the core of epistemic rationality: if you want to be systematically able to make maps of cities, you have to actually explore them. That's the most solid methodology.
If my understanding is correct, you are holding that there is only one goal for the current question, maximizing the overall payoff, and that maximizing my personal payoff is the same goal.
I'm not sure I understand this question. Probabilities are not just about payoffs; we can talk about them even without utility functions over outcomes. But if your probabilistic model is correct, then whatever betting scheme is specified, you should be able to apply it to get the best payoff. So getting the best payoff isn't the goal in itself, but it can be used as a validation of your mathematical model, though one should be careful not to base everything on betting alone, as there are still weird edge cases, such as Sleeping Beauty, which my future posts will explore.
If so, after drawing the green ball and being told all other participants have said yes to the bet, what is the proper answer to maximize your own gain? Which probability would you use then?
If I'm the only decider, then the probability that any person sees green becomes the same as the probability that I in particular see green. And so I should say yes to the "collective" bet.
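Here is the same kind of sketch for the single-decider case, again with my illustrative numbers (on Heads my one draw is green with probability 0.9, on Tails with probability 0.1). With a single participant, "I see green" and "anyone sees green" are the same event, so both conditional probabilities coincide:

```python
import random

# Single-decider check: with one participant the two conditioning
# events are identical, so both posteriors come out around 0.9.
green = heads_and_green = 0
for _ in range(100_000):
    heads = random.random() < 0.5
    if random.random() < (0.9 if heads else 0.1):  # the lone participant's draw
        green += 1
        heads_and_green += heads
print("P(Heads | green) ~", heads_and_green / green)  # ~0.9 on either reading
```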
I guess my main problem with your approach is that I don't see a clear rationale for which probability to use, or when to interpret the evidence as "I see green" and when as "anyone sees green", when both statements are based on the fact that I drew a green ball.
For example, my argument is that after seeing the green ball, my probability is 0.9, and I shall make all my decisions based on that. Why not update the pre-game plan based on that probability? Because the pre-game plan is not my decision. It is an agreement reached by all participants, a coordination. That coordination is reached by everyone reasoning objectively, which does not accommodate any first-person self-identification like "I". In short, when reasoning from my personal perspective, use "I see green"; when reasoning from an objective perspective, use "someone sees green". My whole solution (PBR) to anthropic and related questions is based on this exact same supposition of the axiomatic status of the first-person perspective. It gives the same explanation throughout, and one can predict what the theory says about a problem. Some results are greatly disliked by many, like the nonexistence of self-locating probability and perspective disagreement, but those are clearly the conclusions of PBR, and I am advocating it.
You are arguing that the two interpretations, "I see green" and "anyone sees green", are both valid, and that which one to use depends on the specific question. But to me, what exact logic dictates this assignment is unclear. You argue that because the bet is structured so as not to depend on which exact person gets green, "my decision" shall be based on "anyone sees green". That seems to me a way of simply selecting whichever interpretation does not yield a problematic result. A practice of fitting theory to results.
In the example I brought up in my last reply (what would you do if you drew a green ball and were told that all participants said yes?), you used the probability of 0.9, the rationale being that you are the only decider in that case. It puzzles me, because in exactly what sense am I "the only decider"? Didn't other people also decide to say "yes"? Didn't their "yes" contribute to whether the bet would be taken, the same way as your "yes"? If you are saying I am the only decider because whatever I say determines whether the bet is taken, how is that different from deriving others' responses by using the assumption that "everyone in my position would make the same decision as I do"? Yet you used the probability of 0.5 ("someone sees green") in that situation. If you mean you are the only decider in a causal/counterfactual sense, then you are still in the same position as all the other green-ball holders. What justifies the change regarding which interpretation, and thus which probability (0.5 or 0.9), to use?
There is also the case of our discussion about perspective disagreement in the other post, where you and cousin-it were having a discussion. I, by PBR, concluded there should be a perspective disagreement. You held that there won't be a probability disagreement, because the correct way for Alice to interpret the meeting is "Bob has met Alice in the experiment overall" rather than "Bob has met Alice today". I am not sure of your rationale for picking one interpretation over the other. It seems the correct interpretation is always the one that does not give the problematic outcome. And that, to me, is a practice of avoiding paradoxes, not a theory that resolves them.