There are many forms of Bayesianism, and I’ve only seen a few that are married to the notion that ALL uncertainty is due to ignorance and none due to nondeterminism.
QM, incidentally, even in MWI, is nondeterministic in the sense that you don’t know which of the outcomes you personally will experience.
Yes I do. All of them. What I cannot predict is what my observation will be when it is determined by a quantum event that has already occurred but with which I have not yet had any interaction. That’s no less deterministic than a ‘for loop’ in a computer: self-reflective code before the loop can predict exactly what is going to happen in the future, but code within the loop has to look up the counter variable (or a side effect) if it is going to work conditionally.
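As a concrete rendering of the for-loop analogy (an illustrative sketch only, not code from the original comment): the program below is fully deterministic, and code placed before the loop can enumerate every value the counter will take, yet a conditional inside the loop still has to read the counter at runtime to know which branch it is in.

```python
# Illustrative sketch of the 'for loop' analogy; the whole program is deterministic.

N = 5

# "Self-reflective" code before the loop: it can predict every iteration in advance.
predicted_counters = list(range(N))
print("Predicted in advance:", predicted_counters)  # [0, 1, 2, 3, 4]

# Code inside the loop: to work conditionally, it must look up the counter
# (its local "observation"), even though nothing about the loop is random.
for i in range(N):
    if i % 2 == 0:  # must read i to know which branch this iteration takes
        print(f"iteration {i}: even branch")
    else:
        print(f"iteration {i}: odd branch")
```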
That’s not a testable prediction, or a useful one.
It is in fact a testable prediction.
I cannot find anything in that entry that suggests that experiencing all possible outcomes can be experimentally tested. Feel free to elaborate.
Sorry, I should have elaborated, but I was short on time when I wrote the comment.
Let’s say you set up a sequence of quantum experiments, each of which has a 90% chance (according to the Born probabilities) of killing you instantly and a 10% chance of leaving you unharmed. After a number of such experiments you find yourself alive. This is something you would expect if some form of MWI were true and if all surviving future selves had conscious experience continuous with yours. It is not something you would expect if a collapse interpretation were true, or if MWI combined with some sort of indeterminism (governed by Born’s rule, presumably) about which future self continues your conscious experience were true. So such a sequence of experiments should lead you to update in favor of MWI + experience all possible outcomes.
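For what it’s worth, the update being described can be made explicit. Under assumptions supplied here purely for illustration (the hypothesis labels, the 1% prior, and treating first-person survival as likelihood 1 under MWI-plus-subjective-continuity versus 0.1 per trial under the alternatives), a standard Bayes computation looks like this sketch:

```python
# Sketch of the Bayesian update described above. The hypothesis names, the prior,
# and the exact numbers are illustrative assumptions, not from the thread.
#   H1: MWI + "you experience every surviving branch" -> finding yourself alive
#       after n trials has likelihood 1.
#   H2: collapse, or MWI with Born-rule indeterminism about which successor
#       continues your experience -> likelihood 0.1 ** n.

def posterior_h1(prior_h1: float, n: int, p_survive: float = 0.1) -> float:
    """Posterior probability of H1 given that you find yourself alive after n trials."""
    like_h1 = 1.0
    like_h2 = p_survive ** n
    joint_h1 = prior_h1 * like_h1
    joint_h2 = (1.0 - prior_h1) * like_h2
    return joint_h1 / (joint_h1 + joint_h2)

print(posterior_h1(prior_h1=0.01, n=10))  # ~0.99999999, even from a skeptical 1% prior
```

As the later comments point out, this update is only available from the survivor’s first-person perspective; to outside observers, who mostly see the experimenter die, a lone survivor looks like an expected fluke.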
Sorry, I am having trouble taking quantum suicide/immortality seriously. How is this different from The Simple Truth:
Inspector Darwin looks at the two arguers, both apparently unwilling to give up their positions. “Listen,” Darwin says, more kindly now, “I have a simple notion for resolving your dispute. You say,” says Darwin, pointing to Mark, “that people’s beliefs alter their personal realities. And you fervently believe,” his finger swivels to point at Autrey, “that Mark’s beliefs can’t alter reality. So let Mark believe really hard that he can fly, and then step off a cliff. Mark shall see himself fly away like a bird, and Autrey shall see him plummet down and go splat, and you shall both be happy.”
If there is even a remote chance that Mark would fly, he probably flew in almost every universe in which he survived.
Now, suppose one really dedicated and overzealous grad student of Tegmark’s performs this experiment. The odds of the MWI being a good model might go up significantly enough for others to try to replicate it in the tiny subset of the universes where she survives. As a result, in a tiny minority of the universes Max gets a Nobel prize for this major discovery, whereas in most others he gets sued by the family of the deceased.
If EY believed in this kind of MWI, he would not bother with existential risks, since humanity will surely survive in some of the branches.
See this post.
Now, suppose one really dedicated and overzealous grad student of Tegmark’s performs this experiment. The odds of the MWI being a good model might go up significantly enough for others to try to replicate it in the tiny subset of the universes where she survives. As a result, in a tiny minority of the universes Max gets a Nobel prize for this major discovery, whereas in most others he gets sued by the family of the deceased.
I’m not suggesting that this is a scientific experiment that should be conducted. Nor was I suggesting you should believe in this form of MWI. I was merely responding to your claim that wedrifid’s position is untestable.
Also, note that a proposition does not have to meet scientific standards of interpersonal testability in order to be testable. If I conducted a sequence of experiments that could kill me with high probability and remained alive, I would become pretty convinced that some form of MWI is right, but I would not expect my survival to convince you of this. After all, most other people in our branch who conducted this experiment would be dead. From your perspective, my survival could be an entirely expected fluke.
If EY believed in this kind of MWI, he would not bother with existential risks, since humanity will surely survive in some of the branches.
I’m fairly sure EY believes that humanity will survive in some branch with non-zero amplitude. I don’t see why it follows that one should not bother with existential risks. Presumably Eliezer wants to maximize the wave-function mass associated with humanity surviving.
If I conducted a sequence of experiments that could kill me with high probability and remained alive, I would become pretty convinced that some form of MWI is right, but I would not expect my survival to convince you of this.
Probably, but I’m having trouble thinking of this experiment as scientifically useful if you cannot convince anyone else of your findings. Maybe there is a way to gather some statistics from so-called “miracle survival stories” and see if there is an excess that can be attributed to the MWI, but I doubt that there is such an excess to begin with.
Presumably Eliezer wants to maximize the wave-function mass associated with humanity surviving.
Why? The only ones that matter are those where he survives.
Only if he doesn’t care about anyone else at all. This doesn’t seem likely.
Why? The only ones that matter are those where he survives.
This seems like a pretty controversial ethical position. I disagree and I’m pretty sure Eliezer does as well. To analogize, I’m pretty confident that I won’t be alive a thousand years from now, but I wouldn’t be indifferent about actions that would lead to the extinction of all life at that time.
I’m pretty confident that I won’t be alive a thousand years from now, but I wouldn’t be ambivalent about actions that would lead to the extinction of all life at that time.
Indifferent. Ambivalent means, more or less, that you have reasons for wanting it either way as opposed to not caring at all.
Well, presumably he wouldn’t be ambivalent, as well as not being indifferent, about performing or not performing those actions.
Thanks. Corrected.
Why? The only ones that matter are those where he survives.
If they don’t matter to you, that still doesn’t necessitate that they don’t matter to him. Each person’s utility function may care about whatever it pleases.
QM, incidentally, even in MWI, is nondeterministic in the sense that you don’t know which of the outcomes you personally will experience.
This is broken because
which of the outcomes you personally will experience
is incoherent in the context of MWI. There is a “you” now, on this side of the event. There will be many people labeled “you”, on the other side. There is no one person on the other side that corresponds to “you personally” while the event is something you can say “will” about—at that point, it’s “did”.
Congratulations! You have constructed an interpretation of what I said that doesn’t make sense.
Why don’t you go back and try doing it the other way?
Which other way?