Now, suppose one really dedicated and overzealous grad student of Tegmark's performs this experiment. In the tiny subset of universes where she survives, the odds of MWI being a good model might go up enough for others to try to replicate it. As a result, in a tiny minority of universes Max gets a Nobel Prize for this major discovery, whereas in most others he gets sued by the family of the deceased.
I’m not suggesting that this is a scientific experiment that should be conducted. Nor was I suggesting you should believe in this form of MWI. I was merely responding to your claim that wedrifid’s position is untestable.
Also, note that a proposition does not have to meet scientific standards of interpersonal testability in order to be testable. If I conducted a sequence of experiments that could kill me with high probability and remained alive, I would become pretty convinced that some form of MWI is right, but I would not expect my survival to convince you of this. After all, most other people in our branch who conducted this experiment would be dead. From your perspective, my survival could be an entirely expected fluke.
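To put a rough number on how differently the two perspectives update, here is a minimal sketch in Python, with illustrative figures of my own, and assuming the contentious premise that MWI plus subjective survival predicts that I experience surviving with probability ~1:

    # A minimal sketch of the first-person update. Assumes (contentiously)
    # that under MWI I am certain to experience survival, while under a
    # single-world model my chance of surviving n rounds is p**n.
    prior_odds_mwi = 1.0            # illustrative 1:1 prior
    p_round, rounds = 0.5, 20       # made-up per-round survival chance
    p_survive_single_world = p_round ** rounds           # about 1e-6
    likelihood_ratio = 1.0 / p_survive_single_world
    posterior_odds_mwi = prior_odds_mwi * likelihood_ratio
    print(posterior_odds_mwi)       # about a million to one, for the survivor

A third party looking at the same event only learns that one of the would-be experimenters happened to survive, which is roughly what a single-world model predicts as well, so their likelihood ratio stays close to 1.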
If EY believed in this kind of MWI, he would not bother with existential risks, since humanity will surely survive in some of the branches.
I’m fairly sure EY believes that humanity will survive in some branch with non-zero amplitude. I don’t see why it follows that one should not bother with existential risks. Presumably Eliezer wants to maximize the wave-function mass associated with humanity surviving.
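As a toy illustration of what "maximize the wave-function mass" could mean operationally (my own framing and made-up amplitudes, not Eliezer's), compare two actions by the total squared amplitude they put on branches where humanity survives:

    # Toy model: each action yields two branches, given as (amplitude, humanity_survives).
    # Amplitudes are invented; squared amplitudes sum to 1 for each action.
    actions = {
        "reduce_existential_risk": [(0.8, True), (0.6, False)],  # 0.64 mass on survival
        "do_nothing":              [(0.6, True), (0.8, False)],  # 0.36 mass on survival
    }

    def survival_mass(branches):
        return sum(amp ** 2 for amp, survives in branches if survives)

    best = max(actions, key=lambda name: survival_mass(actions[name]))
    print(best)  # "reduce_existential_risk"

On this reading, "humanity survives in some branch" is compatible with caring a great deal about which action is taken, because the actions differ in how much mass the surviving branches get.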
If I conducted a sequence of experiments that could kill me with high probability and remained alive, I would become pretty convinced that some form of MWI is right, but I would not expect my survival to convince you of this.
Probably, but I’m having trouble thinking of this experiment as scientifically useful if you cannot convince anyone else of your findings. Maybe there is a way to gather some statistics from so-called “miracle survival stories” and see if there is an excess that can be attributed to MWI, but I doubt that there is such an excess to begin with.
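If someone did want to run that test, it would presumably look something like a one-sided binomial test for excess survivors; a rough sketch, with every number invented for illustration:

    # Rough sketch of the "excess survivors" idea. All figures are made up;
    # a real analysis would need actual base rates and case counts.
    from scipy.stats import binomtest

    p_expected = 0.01     # assumed baseline survival rate for some accident class
    n_cases = 5000        # assumed number of recorded cases
    observed = 63         # assumed number of survivors actually observed

    result = binomtest(observed, n_cases, p_expected, alternative="greater")
    print(result.pvalue)  # a small p-value would hint at an excess; a large one would not

Of course, as noted above, there is no obvious reason to expect the branch we happen to find ourselves in to show such an excess in the first place.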
Presumably Eliezer wants to maximize the wave-function mass associated with humanity surviving.
Why? The only ones that matter are those where he survives.
Why? The only ones that matter are those where he survives.
This seems like a pretty controversial ethical position. I disagree and I’m pretty sure Eliezer does as well. To analogize, I’m pretty confident that I won’t be alive a thousand years from now, but I wouldn’t be indifferent about actions that would lead to the extinction of all life at that time.
I’m pretty confident that I won’t be alive a thousand years from now, but I wouldn’t be ambivalent about actions that would lead to the extinction of all life at that time.
Indifferent. Ambivalent means, more or less, that you have reasons for wanting it either way as opposed to not caring at all.
Why? The only ones that matter are those where he survives.
If they don’t matter to you, that still doesn’t necessitate that they don’t matter to him. Each person’s utility function may care about whatever it pleases.
Why? The only ones that matter are those where he survives.
Only if he doesn’t care about anyone else at all. This doesn’t seem likely.
This seems like a pretty controversial ethical position. I disagree and I’m pretty sure Eliezer does as well. To analogize, I’m pretty confident that I won’t be alive a thousand years from now, but I wouldn’t be indifferent about actions that would lead to the extinction of all life at that time.
Indifferent. Ambivalent means, more or less, that you have reasons for wanting it either way as opposed to not caring at all.
Well, presumably he wouldn’t be ambivalent about performing or not performing those actions either, on top of not being indifferent.
Thanks. Corrected.