We could set up an event after which, with 1:N odds, N^2 copies of you exist.
So that’s a pure thought experiment, then. There is no actual way to test those assumptions. Besides, in a universe where we were able to copy humans, SSA vs SIA would be the least interesting question to talk about :) I am more interested in “testing” that applies to this universe.
Would that be evidence for you that SIA is “true” in some sense? For me it would!
“For me”? I don’t understand. Presumably you mean some kind of objective truth? Not a personal truth? Or do you mean adhering to one of the two is useful for, I don’t know, navigating the world?
It would be nice to have a realistic example one could point at and say “Thinking in this way pays rent.”
I don’t know, do you like chocolate? If yes, does that fact pay rent? Our preferences about happiness of observers vs. number of observers are part of what needs to be encoded into FAI’s utility function. So we need to figure them out, with thought experiments if we have to.
As to objective vs personal truth, I think anthropic probabilities aren’t much different from regular probabilities in that sense. Seeing a quantum coin come up heads half the time is the same kind of “personal truth” as getting anthropic evidence in the game I described. Either way there will be many copies of you seeing different things and you need to figure out the weighting.
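The weighting question in the game above can be made concrete with a small simulation. This is a hedged sketch, not anything from the original discussion: the function name `run` and the parameters are mine, and it simply counts observer-moments in the copying game (with probability 1/(N+1), i.e. “1:N odds”, N² copies of you are created; otherwise you stay a single copy). SIA weights hypotheses by observer count, so it predicts that the fraction of observer-moments finding themselves in the copy branch is N²/(N²+N) = N/(N+1); SSA's answer for the chance the event fired stays at 1/(N+1).

```python
import random

def run(n, trials=500_000, seed=0):
    """Monte Carlo count of observer-moments in the copying game.

    With probability 1/(n+1) the event fires and n**2 copies exist;
    otherwise there is a single copy. Returns the fraction of all
    observer-moments that live in the copy branch.
    """
    rng = random.Random(seed)
    copy_moments = 0   # observer-moments in the n**2-copy branch
    solo_moments = 0   # observer-moments in the single-copy branch
    for _ in range(trials):
        if rng.random() < 1 / (n + 1):  # the "1:N odds" event fires
            copy_moments += n * n
        else:
            solo_moments += 1
    return copy_moments / (copy_moments + solo_moments)

# For n = 10: SIA predicts ~10/11 of observer-moments are in the
# copy branch, while SSA assigns the event itself probability 1/11.
print(run(10))
```

So if you wake up after the event, “most of you” is in the copy branch under SIA's weighting, which is the sense in which the anthropic evidence here behaves like the quantum-coin frequencies mentioned above.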