SSA and SIA aren’t exactly untestable. They both make predictions and can be evaluated on them: SIA, for example, predicts larger universes. It could even be said to predict an infinite universe with probability 1, insofar as it works with infinities at all.
The anthropic bits in their paper look like SSA rather than SIA.
I am not sure how one could test SSA or SIA. What kind of experiment would need to be set up, or what data would need to be collected?
Well, SSA and SIA are statements about subjective probabilities. How do you test a statement about subjective probabilities? Let’s try an easier example: “this coin is biased toward heads”. You just flip it a few times and see. The more you flip, the more certain you become. So to test SSA vs SIA, we need to flip an “anthropic coin” repeatedly.
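To make the ordinary-coin case concrete, here is a minimal sketch of that updating logic. The alternative hypothesis (heads with probability 0.7) and the flip sequence are made-up illustrative values, not anything from the discussion.

```python
# Bayesian update for "biased toward heads (p = 0.7)" vs "fair (p = 0.5)",
# starting from even prior odds. The 0.7 and the flip string are arbitrary.
from math import prod

def posterior_odds_biased(flips, p_heads_if_biased=0.7):
    like_biased = prod(p_heads_if_biased if f == "H" else 1 - p_heads_if_biased for f in flips)
    like_fair = 0.5 ** len(flips)
    return like_biased / like_fair  # odds ratio: biased vs fair

print(posterior_odds_biased("HHTHHHHTHH"))  # ~5.3, i.e. roughly 5:1 toward "biased"
```

The longer the run of flips, the more lopsided this ratio gets, which is the sense in which more flips make you more certain.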
What could such an anthropic coin look like? We could set up an event after which, with 1:N odds, N^2 copies of you exist. Otherwise, with N:1 odds, nothing happens and you stay as one copy. Going through this experiment once is guaranteed to give you an N:1 update in favor of either SSA (if you didn’t get copied) or SIA (if you got copied). Then we can have everyone coming out of this experiment go through it again and again, keeping all memory of previous iterations. The population of copies will grow fast, but that’s okay.
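Here is a rough back-of-the-envelope for that N:1 claim, under one reading of the setup: SIA is treated as weighting each branch by how many copies of you it contains, and SSA, with the reference class narrowed to this single iteration, as using the raw branch odds. N = 10 is just an illustrative value.

```python
# Single pass through the "anthropic coin" (sketch; N = 10 is arbitrary).
N = 10

p_copy_branch = 1 / (N + 1)    # the 1:N event fires and N^2 copies of you exist
p_plain_branch = N / (N + 1)   # nothing happens, you stay as one copy

# Chance of finding yourself copied, under the two ways of counting observers:
p_copied_sia = (p_copy_branch * N**2) / (p_copy_branch * N**2 + p_plain_branch * 1)
p_copied_ssa = p_copy_branch   # narrow reference class: just the raw branch odds

print(p_copied_sia, p_copied_ssa)               # 0.909..., 0.0909...
print(p_copied_sia / p_copied_ssa)              # = N   -> N:1 update toward SIA if copied
print((1 - p_copied_sia) / (1 - p_copied_ssa))  # = 1/N -> N:1 update toward SSA if not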
Imagine yourself after a million iterations, finding out that the percentage of times you got copied agrees closely with SIA. You try a thousand more iterations and it still checks out. Would that be evidence for you that SIA is “true” in some sense? For me it would! It’s the same as with a regular coin, after seeing a lot of heads you believe that it’s either biased toward heads or you’re having one hell of a coincidence.
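And a toy long-run version of the same numbers. This does not simulate the exponentially growing population; it just assumes, per the single-pass figures above, that an observer records the copy branch with frequency N/(N+1) per iteration if the SIA-style weighting is right and 1/(N+1) if the narrow-reference-class SSA answer is right, and checks how cleanly a million iterations separates the two.

```python
# Long-run copy frequency under the two hypotheses (toy sketch, see caveat above).
import random

N = 10
iterations = 1_000_000

freq_if_sia = sum(random.random() < N / (N + 1) for _ in range(iterations)) / iterations
freq_if_ssa = sum(random.random() < 1 / (N + 1) for _ in range(iterations)) / iterations

print(freq_if_sia)  # ~0.909, hugging the SIA prediction N/(N+1)
print(freq_if_ssa)  # ~0.091, hugging the narrow-SSA prediction 1/(N+1)
```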
That way of thinking about anthropic updates can be formalized in UDT: after a few iterations a UDT agent learns to act as if SSA or SIA were “true”. So I’m pretty sure this way of thinking is right.
So that’s a pure thought experiment then. There is no actual way to test those assumptions. Besides, in a universe where we are able to copy humans, SSA vs SIA would be the least interesting question to talk about :) I am more interested in “testing” that applies to this universe.
“For me”? I don’t understand. Presumably you mean some kind of objective truth? Not a personal truth? Or do you mean that adhering to one of the two is useful for, I don’t know, navigating the world?
It would be nice to have a realistic example one could point at and say “Thinking in this way pays rent.”
I don’t know, do you like chocolate? If yes, does that fact pay rent? Our preferences about happiness of observers vs. number of observers are part of what needs to be encoded into FAI’s utility function. So we need to figure them out, with thought experiments if we have to.
As to objective vs personal truth, I think anthropic probabilities aren’t much different from regular probabilities in that sense. Seeing a quantum coin come up heads half the time is the same kind of “personal truth” as getting anthropic evidence in the game I described. Either way there will be many copies of you seeing different things and you need to figure out the weighting.
When you repeat this experiment a bunch of times, I think an SSA advocate can choose their reference class to include all iterations of the experiment. This will result in them assigning credences similar to SIA’s, since a randomly chosen awakening across all iterations of the experiment is likely to be one of the new copies. So the update towards SIA won’t be that strong.
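A quick sanity check of that point, with one simplifying assumption of mine: the large population at any given iteration is treated as many independent single-pass runs, and the broad reference class is taken to contain all of the resulting awakenings. N = 10 is again arbitrary.

```python
# Fraction of awakenings in the broad reference class that belong to new copies
# (expected values; "runs" stands in for the many copies flipping in parallel).
N = 10
runs = 1_000_000

copied_runs = runs / (N + 1)            # expected runs where the 1:N event fires
plain_runs = runs * N / (N + 1)

awakenings_copied = copied_runs * N**2  # each copied run yields N^2 awakenings
awakenings_plain = plain_runs * 1

p_ssa_broad = awakenings_copied / (awakenings_copied + awakenings_plain)
print(p_ssa_broad)  # = N/(N+1) = 0.909..., matching the SIA answer above
```

So with the all-iterations reference class, the SSA answer lands at roughly N/(N+1), which is why the repeated experiment stops discriminating strongly between the two.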
This way of choosing the reference class lets SSA avoid a lot of unintuitive results. But it’s a rather symmetric way of avoiding them, in that it would work even if the theory were false.
(Which I think it is.)