That there is no finite algorithm behind it which would decide when each atom will explode.
In other words, that there are no hidden variables behind those events which would cause the decay.
It is quite an unorthodox view in quantum mechanics that those exist.
As far as I can see, the official view in QM is inherently “nonbayesian” in this sense: no hidden mechanism which would output the decay time of a uranium atom, for example.
All of them will decay and all of them will not decay in different worlds.
Let’s assume for the sake of argument that the Copenhagen interpretation is true. Before the particles decay, there is no way to tell which will decay. It’s random. After they decay, you can tell by looking. You know which ones decayed. It’s not random at all.
Randomness is a state of the mind. All indeterminism tells you is that randomness must be a state of every mind that exists before the event.
Imagine a universe where it’s always possible to tell the future from the past, but it’s not always possible to tell what’s on the right from what’s on the left. This is deterministic, but if you rotate it 90 degrees, it isn’t. A coordinate transformation can’t be changing whether or not something is random, can it?
This is deterministic, but if you rotate it 90 degrees, it isn’t.
Time is not quite like space (or, in the PDE language, initial value problems are quite different from boundary value problems). There are QFT techniques that treat time as “imaginary space”, but their applicability is quite limited and they certainly do not justify the view that “Randomness is a state of the mind”, which is either untestable or manifestly false.
Could you be more specific: different in what sense?
Start with the obvious link, look for hyperbolic and elliptic PDEs and ways to solve them, especially numerically. The wave equation techniques are different from the Laplace equation techniques, though there is some overlap. Anyway, this is getting quite far from the original discussion.
In any case, final (or terminal?) value problems are the same as the initial value problems, PDE-wise.
Not quite. For example, the heat/diffusion equation cannot be run backwards, because it fails the uniqueness conditions with the t sign reversed. In a simpler language, you cannot reconstruct the shape of an ink drop once it is dissolved in a cup of water.
I was mainly asking what differences are, in your opinion, important in the context of the present debate.
Well, we are quite a ways from the original context, but I was commenting that treating time and space on the same footing, and saying that future and past are basically interchangeable (sorry for paraphrasing), is often a bad assumption.
You can run the diffusion equation backwards, only you encounter precision problems when the solution grows exponentially.
In other words, it is ill-posed and cannot be used to recover the initial conditions.
Fundamental laws of nature are [second order in time and (edit: that’s not true)] symmetric with respect to time reversal.
This is a whole other debate about what is fundamental and what is emergent. Clearly, the heat equation is pretty fundamental in many contexts, but its origins can be traced to microscopic models of diffusion. There are other reasons why the apparent time-reversal symmetry might not be there. For example, if you take the MWI seriously, the branching process has a clear time arrow attached to it.
In other words, it is ill-posed and cannot be used to recover the initial conditions.
With precise measurement you can. Solved numerically, different initial conditions (for the standard diffusion equation) will all yield a constant function after some time due to rounding errors, so the information is lost and can’t be recovered by the time-reversed process. But as a mathematical problem, the diffusion equation with reversed time is well defined and nevertheless has a unique solution.
From what I recall, the reverse-time diffusion u_t=-u_xx is not well posed, i.e. for a given solution u(t), if we perturb u(0) by epsilon, there is no finite t such that the deviation of the new solution from the old one is bounded by epsilon*e^t. A quick Google search confirms it: (pdf, search inside for “well posed”)
I didn’t realise that “well-posed” is a term with a technical meaning. The definition of well-posedness I have found says that the solution must exist, be unique, and depend continuously on the initial data; I am not sure whether this is equivalent to your definition.
Anyway, the problem with the reversed diffusion equation is that for some initial conditions, namely discontinuous ones, the solution doesn’t exist. However, if a function u(x,t) satisfies the diffusion equation on the interval [t1,t2], we can recover it completely not only from knowledge of u(x,t1), but also from u(x,t0) for any fixed t0 between t1 and t2.
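A quick numerical sketch of this loss of information (my own illustration, not from the thread; grid size, step size, and step counts are arbitrary): run an explicit finite-difference scheme for u_t = u_xx forward from a sharp spike, then step the same scheme backward. The forward run obeys a maximum principle and stays bounded; the backward run amplifies rounding noise in the high-frequency modes and blows up, which is exactly the ill-posedness being discussed.

```python
import numpy as np

n, dx, dt, steps = 64, 1.0, 0.2, 200   # dt/dx^2 = 0.2 < 0.5: forward run is stable
u0 = np.zeros(n)
u0[n // 2] = 1.0                       # the initial "ink drop"

def step(u, sign):
    # One explicit Euler step of u_t = sign * u_xx (periodic boundaries).
    return u + sign * dt * (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2

u = u0.copy()
for _ in range(steps):
    u = step(u, +1.0)                  # forward diffusion smooths the spike
v = u.copy()
for _ in range(steps):
    v = step(v, -1.0)                  # backward diffusion amplifies rounding noise

print("forward max:", np.max(np.abs(u)))
print("backward max:", np.max(np.abs(v)))
```

The backward stepping is not even the exact inverse of the forward scheme; the point is only that any tiny perturbation, here floating-point rounding, grows without bound, so the initial spike cannot be recovered in practice.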
You could define “randomness” to mean indeterminism. If you do so, you would call the results of a fair, indeterministic coin toss random. Even so, if I tossed such a quantum coin, and you saw it land on heads, you would not be willing to bet even money that it landed on tails. P(coin landed on heads|your current information) ≈ 1. When you’re finding expected value, this is all that matters.
As far as I can see, the official view in QM is inherently “nonbayesian” in this sense: no hidden mechanism which would output the decay time of a uranium atom, for example.
Indeed there is no hidden “time until I decay” number hidden inside each radioactive atom (based on some pseudo-random generator, or what have you), but how is it related to Bayes? And what do you mean by “official”?
I mean the prevailing view among (quantum) physicists that:
“Indeed there is no hidden “time until I decay” number hidden inside each radioactive atom ”
You said it.
but how is it related to Bayes?
It is, as long as one thinks that he must update on every piece of evidence. You can’t update anything on the decay of that particular radioactive atom. It could have been another one, but it just wasn’t, so what is there to update? Nothing, if that was a “truly random” event.
Either it wasn’t, or you have nothing to update based on this evidence.
This “view” has been experimentally tested in the simpler case of two-state systems via Bell’s inequality, though I do not remember, off-hand, any tests related to radioactive decay.
It is, as long as one thinks that he must update on every piece of evidence. You can’t update anything on the decay of that particular radioactive atom.
You can update your estimate of the element’s half-life, if nothing else.
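For what it’s worth, that half-life update can be made concrete with a minimal Bayesian sketch (all numbers here are invented for illustration): model decay times as exponential with an unknown decay constant lambda, put a flat prior on a grid of candidate values, and update on a batch of simulated decay times.

```python
import numpy as np

rng = np.random.default_rng(0)
true_lam = 0.5                                   # invented "true" decay constant
times = rng.exponential(1 / true_lam, size=200)  # simulated observed decay times

lam = np.linspace(0.01, 2.0, 1000)               # candidate decay constants
# Flat prior; exponential log-likelihood summed over all observed times.
log_post = len(times) * np.log(lam) - lam * times.sum()
post = np.exp(log_post - log_post.max())
post /= post.sum()                               # normalized posterior on the grid

lam_hat = lam[np.argmax(post)]                   # MAP estimate (= MLE under flat prior)
half_life = np.log(2) / lam_hat
print("decay constant:", lam_hat, "half-life:", half_life)
```

Note that this update uses only the times of the decays, which is consistent with the point below: which particular atom decayed carries no extra information.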
That there is no finite algorithm behind it which would decide when each atom will explode.
We can test algorithms which we use to predict which atom would explode and when. The variables are part of the theory, not of the atoms. Absence of hidden variables effectively means that there is no regularity such that we could infer a law that would predict the state of an arbitrary system at time t1 with certainty* from observations made at time t0 < t1. Nevertheless any selected atom* is either going to explode or isn’t at a given time, and we can observe which was the case afterwards. Bayesianism doesn’t prohibit updating our beliefs about events after those events happened, in fact it doesn’t say anything at all about time. The “inherent randomness” of radioactive decay doesn’t make the uncertainty non-Bayesian in any meaningful way.
That said, I am afraid we may start to argue over the silly problem of future contingents and over definitions in general. The right question to ask now is: why do you want to distinguish truly random numbers from apparently random ones? The answer to the question about the quality of quantum randomness may depend on that purpose.
*) Although I know that certainty is impossible to achieve and atoms are indistinguishable, I have chosen to formulate the sentences the way I did for sake of brevity.
There are many forms of Bayesianism, and I’ve only seen a few that are married to the notion that ALL uncertainty is due to ignorance and none due to nondeterminism.
QM, incidentally, even in MWI, is nondeterministic in the sense that you don’t know which of the outcomes you personally will experience.
QM, incidentally, even in MWI, is nondeterministic in the sense that you don’t know which of the outcomes you personally will experience.
Yes I do. All of them. What I cannot predict is what my observation will be when it is determined by a quantum event that has already occurred but with which I have not yet had any interaction. That’s no more deterministic than a ‘for loop’ in a computer—self reflective code before the loop can predict exactly what is going to happen in the future but code within the loop has to do a lookup of the counter variable (or a side effect) if it is going to work conditionally.
Sorry, I should have elaborated, but I was short on time when I wrote the comment.
Let’s say you set up a sequence of quantum experiments, each of which has a 90% chance (according to the Born probabilities) of killing you instantly and a 10% chance of leaving you unharmed. After a number of such experiments you find yourself alive. This is something you would expect if some form of MWI were true and if all surviving future selves had conscious experience continuous with yours. It is not something you would expect if a collapse interpretation were true, or if MWI combined with some sort of indeterminism (governed by Born’s rule, presumably) about which future self continues your conscious experience were true. So such a sequence of experiments should lead you to update in favor of MWI + experience all possible outcomes.
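To make that update explicit, here is a toy Bayes calculation (my own sketch, and it deliberately ignores the anthropic subtleties debated below): compare “MWI and I experience a surviving branch” against a collapse-style picture, under which surviving n runs has probability 0.1^n.

```python
def posterior_mwi(prior_mwi, n, p_survive=0.1):
    """Posterior probability of 'MWI + I experience a surviving branch'
    after observing survival of n lethal quantum experiments."""
    like_mwi = 1.0                       # some surviving self always exists
    like_collapse = p_survive ** n       # survival is a fluke under collapse
    num = prior_mwi * like_mwi
    return num / (num + (1 - prior_mwi) * like_collapse)

print(posterior_mwi(0.01, 10))           # small prior, ten survivals
```

Even a 1% prior gets pushed very close to 1 after ten survivals, which is the force of the argument; whether the likelihood of 1 for the MWI branch is legitimate is exactly what the replies dispute.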
Sorry, I am having trouble taking quantum suicide/immortality seriously. How is this different from The Simple Truth:
Inspector Darwin looks at the two arguers, both apparently unwilling to give up their positions. “Listen,” Darwin says, more kindly now, “I have a simple notion for resolving your dispute. You say,” says Darwin, pointing to Mark, “that people’s beliefs alter their personal realities. And you fervently believe,” his finger swivels to point at Autrey, “that Mark’s beliefs can’t alter reality. So let Mark believe really hard that he can fly, and then step off a cliff. Mark shall see himself fly away like a bird, and Autrey shall see him plummet down and go splat, and you shall both be happy.”
If there is even a remote chance that Mark would fly, he probably flew in almost every universe he survived.
Now, suppose one really dedicated and overzealous grad student of Tegmark performs this experiment. The odds of the MWI being a good model might go up significantly enough for others to try to replicate it in the tiny subset of the universes where she survives. As a result, in a tiny minority of the universes Max gets a Nobel prize for this major discovery, whereas in most others he gets sued by the family of the deceased.
If EY believed in this kind of MWI, he would not bother with existential risks, since humanity will surely survive in some of the branches.
Now, suppose one really dedicated and overzealous grad student of Tegmark performs this experiment. The odds of the MWI being a good model might go up significantly enough for others to try to replicate it in the tiny subset of the universes where she survives. As a result, in a tiny minority of the universes Max gets a Nobel prize for this major discovery, whereas in most others he gets sued by the family of the deceased.
I’m not suggesting that this is a scientific experiment that should be conducted. Nor was I suggesting you should believe in this form of MWI. I was merely responding to your claim that wedrifid’s position is untestable.
Also, note that a proposition does not have to meet scientific standards of interpersonal testability in order to be testable. If I conducted a sequence of experiments that could kill me with high probability and remained alive, I would become pretty convinced that some form of MWI is right, but I would not expect my survival to convince you of this. After all, most other people in our branch who conducted this experiment would be dead. From your perspective, my survival could be an entirely expected fluke.
If EY believed in this kind of MWI, he would not bother with existential risks, since humanity will surely survive in some of the branches.
I’m fairly sure EY believes that humanity will survive in some branch with non-zero amplitude. I don’t see why it follows that one should not bother with existential risks. Presumably Eliezer wants to maximize the wave-function mass associated with humanity surviving.
If I conducted a sequence of experiments that could kill me with high probability and remained alive, I would become pretty convinced that some form of MWI is right, but I would not expect my survival to convince you of this.
Probably, but I’m having trouble thinking of this experiment as scientifically useful if you cannot convince anyone else of your findings. Maybe there is a way to gather statistics from so-called “miracle survival stories” and see if there is an excess that can be attributed to the MWI, but I doubt that there is such an excess to begin with.
Presumably Eliezer wants to maximize the wave-function mass associated with humanity surviving.
Why? The only ones that matter are those where he survives.
Why? The only ones that matter are those where he survives.
This seems like a pretty controversial ethical position. I disagree and I’m pretty sure Eliezer does as well. To analogize, I’m pretty confident that I won’t be alive a thousand years from now, but I wouldn’t be indifferent about actions that would lead to the extinction of all life at that time.
I’m pretty confident that I won’t be alive a thousand years from now, but I wouldn’t be ambivalent about actions that would lead to the extinction of all life at that time.
Indifferent. Ambivalent means, more or less, that you have reasons for wanting it either way as opposed to not caring at all.
Why? The only ones that matter are those where he survives.
If they don’t matter to you, that still doesn’t necessitate that they don’t matter to him. Each person’s utility function may care about whatever it pleases.
QM, incidentally, even in MWI, is nondeterministic in the sense that you don’t know which of the outcomes you personally will experience.
This is broken because
which of the outcomes you personally will experience
is incoherent in the context of MWI. There is a “you” now, on this side of the event. There will be many people labeled “you”, on the other side. There is no one person on the other side that corresponds to “you personally” while the event is something you can say “will” about—at that point, it’s “did”.
A small nitpick: the Schrödinger equation is not second order in time.
The Dirac one as well. Corrected.
Let me start over.
You can update the half-life from the TIME of the decay, but nothing from the fact that it was atom number 2 and not number 1 or any other.
I know. That’s why I keep bringing up this “true random” case.
If I understand correctly, there is no physical difference between atom 2 and atom 1. There just is no fact of the matter to update on.
Say you have two diamonds, each containing a million uranium-238 atoms.
You can measure in WHICH diamond the first decay occurs. That is evidence you then can’t use for any update.
That claim (that you will experience all of the outcomes) is not a testable prediction, or a useful one.
It is in fact a testable prediction.
I cannot find anything in that entry that suggests that experiencing all possible outcomes can be experimentally tested. Feel free to elaborate.
See this post.
Only if he doesn’t care about anyone else at all. This doesn’t seem likely.
Well, presumably he wouldn’t be ambivalent, as well as not being indifferent, about performing or not performing those actions.
Thanks. Corrected.
Congratulations! You have constructed an interpretation of what I said that doesn’t make sense.
Why don’t you go back and try doing it the other way?
Which other way?