Isn’t the conclusion to the Sleeping Beauty problem that there are two different but equally valid ways of applying probability theory to the problem; that natural language and even formal notation makes it very easy to gloss over the difference; and that which one you should use depends on exactly what question you mean to ask? Would those same lessons apply to SIA vs. SSA?
In Sleeping Beauty, IIRC the distinction is between “per-experiment” probabilities and “per-observation” probabilities. My interpretation of these was to distinguish the question “what’s the probability that the coin came up heads” (a physical event that happened exactly once, when the coin landed on the table) from “what’s the probability that Beauty will witness the coin being heads” (an event in Beauty’s brain that will occur once or twice). The former has probability 1⁄2 and the latter 1⁄3. Though it might be a bit more subtle than that.
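To make the per-experiment/per-observation split concrete, here’s a minimal Monte Carlo sketch (assuming the standard setup: one awakening on heads, two on tails; the function and variable names are just for illustration):

```python
import random

def sleeping_beauty(n_experiments=100_000):
    """Simulate the standard setup: heads -> 1 awakening, tails -> 2."""
    heads_experiments = 0   # experiments in which the coin landed heads
    heads_awakenings = 0    # awakenings at which the coin is heads
    total_awakenings = 0
    for _ in range(n_experiments):
        heads = random.random() < 0.5
        awakenings = 1 if heads else 2
        heads_experiments += 1 if heads else 0
        heads_awakenings += awakenings if heads else 0
        total_awakenings += awakenings
    print("per-experiment P(heads):", heads_experiments / n_experiments)   # ~1/2
    print("per-awakening  P(heads):", heads_awakenings / total_awakenings) # ~1/3

sleeping_beauty()
```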
For SSA vs. SIA, who do you want to be right most often? Do you want a person chosen uniformly at random from among all people in all possible universes to be right most often? If so, use SIA. Or do you want to maximize average-rightness-per-universe? If so, use SSA, or something like it; I’m not exactly clear on the details.
Let’s be concrete, and look at the “heads: 1 person in a white room and 9 chimps in a jungle; tails: 10 people in a white room” situation.
If God says “I want you to guess whether the coin landed heads or tails. I will exterminate everyone who guesses wrong.”, then you should guess tails because that saves the most people in expectation. But if God says “I want to see how good the people of this universe are at reasoning. Guess whether the coin landed heads or tails. If most people in your universe guess correctly, then your universe will be rewarded with the birth of a single happy child. Oh and also the coin wasn’t perfectly fair; it landed heads with probability 51%.”, then you should guess heads because that maximizes the chance that the child is born.
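If it helps, here’s the arithmetic behind both guesses spelled out as a sketch (fair coin assumed in the extermination case, 51% heads in the reward case, per the example; the variable names are mine):

```python
# Extermination case (fair coin): expected number of people saved.
p_heads = 0.5
saved_if_all_guess_heads = p_heads * 1 + (1 - p_heads) * 0    # = 0.5 people
saved_if_all_guess_tails = p_heads * 0 + (1 - p_heads) * 10   # = 5 people
# -> guess tails.

# Reward case (51% heads): probability that most people in your universe
# guess correctly, i.e. that the happy child is born.
p_heads = 0.51
child_if_all_guess_heads = p_heads        # 0.51
child_if_all_guess_tails = 1 - p_heads    # 0.49
# -> guess heads.
```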
I’m not sure that’s all exactly right. But the point I’m trying to make is, are we sure that “the probability that you’re in the universe with 1 person in the white room” has an unambiguous answer?
I agree with all of this (and I admire its clarity). In addition, I believe that the SIA-formulated questions are generally the important ones, for roughly the reason that the consequences of our choices are generally more like “value is proportional to the number of correct actions” than “value is proportional to the fraction of actions correct” (across all observers subjectively indistinguishable from me). (Our choices seem to be local in some sense; their effects are independent of the choices of our faraway subjectively-indistinguishable counterparts, and their effects seem to scale with our numbers. Perhaps some formalization of “bigger universes matter more” is equivalent.)
I’m not sure about this, but perhaps with some kind of locality assumption, the intuitive sense of probability as something like “odds at which I’m indifferent to bet” (under certain idealizations) reduces to SIA probability, whereas SSA probability would correspond to something like the odds at which I’m indifferent to bet if the value from winning is proportional to the fraction rather than the number of correct bets. Again, SSA is in conflict with “bigger universes matter more”; assuming locality, this is particularly disturbing, since it roughly means that the value of a choice is inversely proportional to the number of similarly-situated choosers.
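If I’ve read that proposal right, the reduction can be checked against the white-room example above (heads: one person in the white room; tails: ten). A sketch of that arithmetic, with the stake and payout names made up for illustration:

```python
# Fair coin. Every white-room person stakes 1 on "heads" to win b if heads.
p = 0.5

def value_per_bet(b):
    # value proportional to the NUMBER of correct bets (the "local" picture)
    return p * (1 * b) - (1 - p) * (10 * 1)

def value_per_fraction(b):
    # value proportional to the FRACTION of correct bets in one's universe
    return p * (1 / 1) * b - (1 - p) * (10 / 10) * 1

# value_per_bet(10) == 0      -> indifferent at 10:1, implied P(heads) = 1/11 (SIA)
# value_per_fraction(1) == 0  -> indifferent at 1:1,  implied P(heads) = 1/2  (SSA)
```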