Well, it definitely sounds worse than simply saving the world, but the expected number of saved lives should be the same either way.
Yes, but utility isn’t linear across saved lives, and maybe it even shouldn’t be. I would be willing to devote many more resources to saving the lives of the last fifty pandas in the world, rescuing pandas from extinction, than to saving fifty pandas when the total panda population is 100,000 and threatening to drop to 99,950.
Now it’s true that human utility is more linear than panda utility, because I care about humans much more for their own sake than for the sake of my preference that humans exist, but I still think saving the last eight billion humans is more important than saving eight billion out of infinity.
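To make the panda numbers concrete, here is a minimal sketch of the nonlinearity being claimed. The logarithmic utility function is purely an assumption for illustration, not something either speaker endorses; any sufficiently concave function makes the same point.

```python
# Toy illustration: if utility over the surviving population is concave
# (here, logarithmic -- an assumed form, chosen only for illustration),
# saving the same number of individuals is worth far more near extinction
# than near abundance.
import math

def utility(population: int) -> float:
    """Hypothetical concave utility: log of (1 + surviving population)."""
    return math.log1p(population)

# Saving the last fifty pandas: population goes from 0 to 50.
gain_last_fifty = utility(50) - utility(0)

# Saving fifty pandas out of a population dropping from 100,000 to 99,950.
gain_marginal_fifty = utility(100_000) - utility(99_950)

print(f"utility gain, last 50 pandas:    {gain_last_fifty:.4f}")    # ~3.93
print(f"utility gain, 50 out of 100,000: {gain_marginal_fifty:.6f}") # ~0.0005
# Under this assumed utility, the "last fifty" are worth thousands of times
# more than fifty marginal pandas, even though the count of lives is equal.
```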
You’re an equivalence class. You don’t save the last eight billion humans, you save eight billion humans in each of the infinitely many worlds in which your decision algorithm is instantiated.
Why is that significant? No matter how many worlds I’m saving eight billion humans in, there are still humans left over who are saved no matter what I do or don’t do. So the “reward” of my actions still gets downgraded from “preventing human extinction” to “saving a bunch of people, but humanity will be safe no matter what”.
In fact...hmm...any given human will be instantiated in infinitely many worlds, so you don’t actually save any lives. You just increase those lives’ measure, which is sort of hard to get excited about.
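The "you only increase measure" point, and the earlier claim that the expected number of saved lives is unchanged, can both be seen in a toy model. The branch weights and survival patterns below are made up for illustration only.

```python
# Toy model (made-up numbers): a person exists in several branches; your
# action does not change whether they exist somewhere, it changes the
# fraction of their measure in which they survive.

# Measure weights of four hypothetical branches (they sum to 1).
branch_measure = [0.4, 0.3, 0.2, 0.1]

# In which branches does the person survive, with and without your action?
survives_without_action = [True, False, False, True]  # measure 0.5 survives
survives_with_action    = [True, True,  True,  True]  # measure 1.0 survives

def surviving_measure(survives):
    """Total measure of the branches in which the person survives."""
    return sum(m for m, s in zip(branch_measure, survives) if s)

print(surviving_measure(survives_without_action))  # 0.5
print(surviving_measure(survives_with_action))     # 1.0
# The person is never rescued from nonexistence: they already survive with
# measure 0.5 no matter what. The action moves that measure from 0.5 to 1.0,
# and the expected number of saved lives is exactly this change in measure.
```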
Should it? It appears to me that efforts toward saving the world, if successful, only raise the odds that the branch you personally experience will include a saved world.
Or, from a different perspective, your decision algorithm partially determines the optimization target for the updateless game-theoretic compromise that emerges around that algorithm.
That’s certainly a useful view of the ambiguity inherent in decision theory in MWI. Or it would be, if I had a local group to help me get a deep understanding of UDT—the Tampa chapter of the Bayesian Conspiracy has lain in abeyance since your visit.