Does this theory really alter the probability that your next chocolate bar will turn into a hamster? After all, if there were only one of you, maybe there’s a one in a trillion chance that one is in a simulation whose alien overlords will turn a chocolate bar into a hamster. If there are a trillion of you, and one of those trillion is in such a simulation, and your subjective experience has an equal chance of continuing down any branch, then the probability of the bar turning into the hamster is still one in a trillion. Although I’ve never seen a proof, intuitively you’d expect those two probabilities to be the same, or at least not be able to predict how they differ.
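A minimal sketch of that arithmetic, using the hypothetical one-in-a-trillion and one-trillion-copies numbers from above:

```python
# Hypothetical numbers from the comment above.

# Case 1: a single instance of you, with a one-in-a-trillion chance of
# being in a bar-to-hamster simulation.
p_single = 1 / 1_000_000_000_000

# Case 2: a trillion copies of you, exactly one of which is in such a
# simulation, with subjective experience equally likely to continue
# into any copy.
n_copies = 1_000_000_000_000
n_simulated = 1
p_copies = n_simulated / n_copies

assert p_single == p_copies  # both come out to one in a trillion
```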
It all adds up to normality...except that this takes a lot of the oomph out of the project to reduce existential risk. Saving all humanity from destruction makes a much better motivator for me than reducing the percentage of branches of humanity that end in destruction by an insignificaaEEEEGH MY KEYBOARD JUST TURNED INTO A BADGER!!11asdaghf
At least it’s a QWERTY badger, from the looks of it...
And just what does that mean?
I spent a lot of time in the late 90s trying to work out a coherent system of thinking about probabilities that involved things like “your subjective experience has an equal chance of continuing down any branch” but could not make it work out.
Eventually I gave up and went down the road of UDASSA and then UDT, but “your subjective experience has an equal chance of continuing down any branch” seems to be the natural first thing someone would think of when they think about probabilities in the context of multiple copies/branches. I wish there were a simple and convincing argument for why thinking about probabilities this way doesn’t work, so people don’t spend too much time on this step before moving on.
The implied difference between making N copies straight away, and making two copies and then making N-1 copies of one of them, might be a simple convincing argument that something really odd is going on.
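A minimal sketch of the oddity, applying the naive “equal chance at each copying event” rule to both procedures (N is arbitrary; 10 here):

```python
# Naive rule: at each copying event, your subjective experience continues
# into each resulting copy with equal probability.
N = 10  # arbitrary illustration

# Procedure A: make N copies in a single step.
p_each_A = 1 / N  # every copy gets probability 1/N

# Procedure B: make 2 copies, then make N-1 copies of the second one.
p_uncopied_B = 1 / 2                   # the copy that is never re-copied
p_each_recopied_B = (1 / 2) / (N - 1)  # each of the N-1 later copies

# Both procedures end with N copies, yet the rule assigns different
# probabilities: 1/N each, versus 1/2 and 1/(2(N-1)).
print(p_each_A, p_uncopied_B, p_each_recopied_B)
```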
Yeah, that one is nasty nasty nasty.
It doesn’t? If I flip a fair coin, I can think of the outcomes as “my subjective experience goes down the branch where heads comes up” and “my subjective experience goes down the branch where tails comes up”, and the principle works.
Maybe nothing—maybe the fundamental unit of conscious experience is the observer-moment and that continuity of experience is an illusion—but the consensus on this site seems to be that it’s worth talking about in situations like eg quantum suicide or simulation.
Maybe the inferential step would work better than the observer moment?
One inferential step is too little. Really you need an interval sufficiently long for the person to think coherently and do decision theory, but short enough that they don’t get copied at all.
Well, it definitely sounds worse than simply saving the world, but the expected number of saved lives should be the same either way.
Yes, but utility isn’t linear across saved lives, and maybe it even shouldn’t be. I would be willing to give many more resources to save the lives of the last fifty pandas in the world, saving pandas from extinction, than to save fifty pandas if the total panda population were 100,000 and threatening to drop to 99,950.
Now, it’s true that human utility is more linear than panda utility, because I care about humans much more for their own sake than for the sake of my preference that humans exist, but I still think saving the last eight billion humans is more important than saving eight billion out of infinity.
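A toy illustration of that non-linearity, with a purely made-up concave utility over panda population (the logarithmic curve is just an assumption for the sketch):

```python
import math

# Purely hypothetical concave utility over panda population:
# the first few pandas matter far more than the marginal ones.
def panda_utility(population: int) -> float:
    return math.log1p(population)

# Value of saving fifty pandas when they are the last fifty in the world...
gain_last_50 = panda_utility(50) - panda_utility(0)

# ...versus saving fifty pandas out of a population of 100,000.
gain_marginal_50 = panda_utility(100_000) - panda_utility(99_950)

print(gain_last_50)      # ~3.93
print(gain_marginal_50)  # ~0.0005
```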
You’re an equivalence class. You don’t save the last eight billion humans, you save eight billion humans in each of the infinitely many worlds in which your decision algorithm is instantiated.
Why is that significant? No matter how many worlds I’m saving eight billion humans in, there are still humans left over who are saved no matter what I do or don’t do. So the “reward” of my actions still gets downgraded from “preventing human extinction” to “saving a bunch of people, but humanity will be safe no matter what”.
In fact...hmm...any given human will be instantiated in infinitely many worlds, so you don’t actually save any lives. You just increase those lives’ measure, which is sort of hard to get excited about.
Should it? It appears to me that efforts toward saving the world, if successful, only raise the odds that the branch you personally experience will include a saved world.
Or, from a different perspective, your decision algorithm partially determines the optimization target for the updateless game-theoretic compromise that emerges around that algorithm.
That’s certainly a useful view of the ambiguity inherent in decision theory in MWI. Or it would be, if I had a local group to help me get a deep understanding of UDT—the Tampa chapter of the Bayesian Conspiracy has lain in abeyance since your visit.
But what if there are infinitely many of you, and the set of you that are not in simulations has measure 0? Then the probability of bizarre things happening is much higher, and depends entirely upon the probability distribution over the motivations of simulators.
It sounds a bit chicken-and-egg to me. My subjective probability estimate of simulators’ motivations comes in great part from the frequency and nature of observed bizarre events. Based on what I know about my universe, the vast majority of my simulators don’t interfere with my physical laws.
Now update on the fact that you’re one of perhaps 1000 people who think seriously about the singularity out of 6,000,000,000…
I hear things like this a lot, but I’m not sure if I’ve heard a clear reason to think that the people that the simulators (of a long-running, naturalistic simulation) are interested in should be more likely to be conscious, or otherwise gain any sort of epistemological or metaphysical significance.
One hypothesis is that we are being mass simulated for acausal game theoretic reasons, and that only the “interesting” people are simulated in enough detail to be conscious.
“Interesting” is very much the wrong word, though. More like informative about the optimization target that one cooperates by pursuing.
Isn’t the measure of the set of me not in simulations (in a big world) equal to the probability that I’m not in a simulation (if there’s only one of me)?
Only if you reason anthropically when calculating the “one of me” probability.
The point is that if there are some places in the multiverse with truly vast or even infinite amounts of computing power, then those will dominate the calculation when you think of yourself as the union of all your instances. So if that is to agree with the “one of me” case, you’d better reason anthropically there too; otherwise the two will disagree.
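A minimal sketch of that point, with made-up priors and copy counts (all numbers hypothetical): under the union-of-instances view, the hypothesis containing vastly more copies of you dominates, so it only agrees with the “one of me” calculation if that calculation also weights hypotheses by copy count, i.e. reasons anthropically.

```python
# Made-up hypotheses about where "you" might be instantiated.
# prior: credence in the hypothesis before counting copies of you.
# copies: how many instances of you exist if the hypothesis is true.
hypotheses = {
    "ordinary non-simulated world": {"prior": 0.99, "copies": 1},
    "world with vast simulating power": {"prior": 0.01, "copies": 10**12},
}

# Non-anthropic "one of me" reasoning: just use the priors.
p_vast_naive = hypotheses["world with vast simulating power"]["prior"]

# Union-of-instances reasoning: weight each hypothesis by how many
# copies of you it contains, then normalise.
weights = {name: h["prior"] * h["copies"] for name, h in hypotheses.items()}
p_vast_union = weights["world with vast simulating power"] / sum(weights.values())

print(p_vast_naive)  # 0.01
print(p_vast_union)  # ~0.9999999999: the big-computation world dominates
```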