Yes, though I actually think “belief” is more correct here. I assume that if MWI is correct, then there will always exist a future branch in which humanity continues to exist. This doesn’t concern me very much, because at this point I don’t believe humanity is nearing extinction anyway (I’m a generally optimistic person). I do think that if I shared MIRI’s outlook on AI risk, this would become very relevant to me as a concrete hope, since my credence in MWI is higher than the likelihood Eliezer stated for humanity surviving AI.
It might interest you to know that Eliezer considers MWI to be obviously true:
We have embarrassed our Earth long enough by failing to see the obvious. So for the honor of my Earth, I write as if the existence of many-worlds were an established fact, because it is. The only question now is how long it will take for the people of this world to update.
Source. More on MWI by Eliezer.
The reason he is pessimistic about humanity’s survival even though he believes in MWI is that MWI’s being true does not save us:
Although it is possible to set up a special situation (e.g., by connecting a quantum-measurement device to a bomb) in which you will die in one branch, but live in a different branch, most situations aren’t like that. Most situations have you surviving in both branches or dying in both branches.
This seems silly to me. It is true that in a single instance, a quantum coin flip probably can’t save you if classical physics has already decided that you’re going to die. But the exponential butterfly effect from all the minuscule changes that occur at each split between now and then should add up to a huge spread of possible universes by the time AGI arrives: in some of them the AI will be deadly, while in others the seed of the AI will happen to be picked just right for it to turn out good, or the exact right method for successful alignment will be the first one discovered.
Er, I mean, is that in fact your hope?