This is too harsh. Yudkowsky’s conclusions aren’t that far from the positions of many mainstream specialists. The main problem with the sequence is, as Mitchell_Porter has noted, its overconfidence and its insistence on the obvious superiority of MWI. But that is a rather subtle mistake; calling it a spectacular failure would be needless exaggeration.
Also, I think E.Y.’s epistemology is sound (and not that exotic; most components have been around for decades at least), and his expertise (or lack thereof) may have caused him to overlook some existing alternatives to MWI, but that wasn’t the main problem. To me it seems as if the epistemology endorsed elsewhere in the sequences was misapplied; instead of explaining away the question about the real essence of apparent collapse in the same way he explained away the questions about the essences of sound or free will, he insisted on a verbal explanation of dubious meaning just because he found it unimaginable that the concept of the wave function might not refer directly to an element of objective reality.
The position of mainstream specialists is that MWI is plausible, but there is no compelling evidence or theoretical argument to conclusively decide in favour of one particular interpretation, and they have personal preferences largely based on intuitive appeal.
Others go further and claim that the whole QM interpretation issue is meaningless and scientifically improper.
Therefore, there is no consensus on the issue in the mainstream scientific community.
Yudkowsky attempted to resolve the issue once and for all, and I think it’s uncontroversial to say that he objectively failed.
Also, I think E.Y.’s epistemology is sound
Most of its elements (empiricism, Bayesian inference, etc.) are sound and essentially uncontroversial. Some elements specific to his version (“informal” Solomonoff Induction and Kolmogorov complexity) are more questionable.
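For reference, the formal version of that prior weights each hypothesis $h$ by its Kolmogorov complexity $K(h)$, the length in bits of the shortest program that produces it (standard textbook notation, not anything quoted from the Sequence itself):

$$P(h) \propto 2^{-K(h)}$$

Part of what makes the informal use questionable is that $K$ is uncomputable and is only defined up to an additive constant depending on the choice of universal machine, so applying it to verbal hypotheses like QM interpretations requires judgment calls.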
The position of mainstream specialists is that MWI is plausible, but there is no compelling evidence or theoretical argument to conclusively decide in favour of one particular interpretation, and they have personal preferences largely based on intuitive appeal. Others go further and claim that the whole QM interpretation issue is meaningless and scientifically improper.
Science goggles off, Bayes goggles back on. If that is the state of affairs in science, then we know MWI is the better one because it is simpler.
Yudkowsky attempted to resolve the issue once and for all, and I think it’s uncontroversial to say that he objectively failed.
Sorry, this statement is inconsistent with the other paragraph I quoted above. Either the scientists are undecided because there’s no evidence, and Occam clears it up as EY says, or the scientists are in some other state and the whole sequence is built on bad premises.
Unless you literally mean “once and for all”, which isn’t what he attempted to do, and is a strawman. (He said that given the current state of evidence we must prefer MWI to Collapse, and that there should be no controversy about this, not that MWI was 100% correct and will never be replaced.)
Science goggles off, Bayes goggles back on. If that is the state of affairs in science, then we know MWI is the better one because it is simpler.
The goal of the sequence is to convince us of this position. But if we hypothesize that the difference between the two theories does not pay rent in anticipated experience, then I’m unconvinced that it is rational to say that one theory has higher probability, and certainly not with the level of certainty presented in the sequence.
If one wants to argue that research resources are poorly allocated between less complex and more complex hypotheses, have at it. I don’t disagree, but I think re-engineering the practice of scientific research is a sociology issue, not a pure right-and-wrong issue.
Either the scientists are undecided because there’s no evidence, and Occam clears it up as EY says, or the scientists are in some other state and the whole sequence is built on bad premises.
Even granting the assertion that one should assign probability to beliefs that don’t pay rent, it really requires a specialist to determine that MWI is the simpler explanation. Eliezer’s ridicule of the collapse theories could fulfill that function, but my sense is that his talented-layperson perspective leads him astray, much as the difference between “clear and present danger” and “imminent lawless action” is hard to discern unless one has studied the relevant free speech law.
And that’s why quantum mechanics was a poor choice of topic for the case study. Eliezer doesn’t know enough physics to justify his confidence in the relative simplicity of MWI. And fighting that fight is totally distinct from the essential issue I discussed above.
But if we hypothesize that the difference between the two theories does not pay rent in anticipated experience, then I’m unconvinced that it is rational to say that one theory has higher probability.
If I offered two competing theories:

1. Each electron contains inside it a tiny little angel that is happy when things go well in the world and sad when things go badly. But there’s absolutely no way to detect such from the outside.

2. Electrons don’t actually contain any entities with minds inside them, even undetectable ones.
I think you’d assign higher probability to the latter theory, even though there’s no difference in anticipated experience between the two of them.
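That intuition has a clean Bayesian form. Since both theories assign the same probability to every possible observation, the likelihood ratio is 1 and the posterior ratio simply reproduces the prior ratio, which is exactly where a complexity prior does its work (notation mine, with $H_1$ the angel theory, $H_2$ the mindless-electron theory, and $E$ any evidence):

$$\frac{P(H_1 \mid E)}{P(H_2 \mid E)} \;=\; \frac{P(E \mid H_1)}{P(E \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)} \;=\; 1 \cdot \frac{P(H_1)}{P(H_2)}$$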
Either the scientists are undecided because there’s no evidence, and Occam clears it up as EY says, or the scientists are in some other state and the whole sequence is built on bad premises.
As I mentioned before, whether MWI is better or not, the QM Sequence itself is based on too controversial an example, and so failed to achieve the desired educational effect (whatever that might be, I am not sure, something Occam-related, apparently). I am hard-pressed to believe that there is not a single other example in all of physics which would illustrate the same point with less controversy.

Good point. I don’t disagree with that.

You’re a physicist, do you know of any better examples of issues where the traditional science goggles and Bayes goggles disagree?
By the very nature of the topic, any contemporary examples cannot fail to be controversial. If “traditional” scientific rationality supports position X, then many or most scientists will support X, and the claim that they are wrong and the true position is the Bayes-supported Y is bound to be controversial.
So for non-controversial examples one would have to look to the history of science. For example, there must have been cases where a new theory was proposed that was much better than the current ones by Bayes, but which was not accepted by the scientific community until confirmed by experiments. Maybe general relativity?
Physicists love simplicity, so they are naturally Bayesian. Unfortunately, Nature is not, otherwise the cosmological constant would be zero, the speed of light would be infinite, the neutrino would be massless and the Standard Model of Particle Physics would be based on something like SU(5) instead of SU(3)×SU(2)×U(1).
Until general relativity was confirmed by experiments, who besides Einstein had the necessary evidence? I’m not familiar enough with the case to really say how much of a difference there should have been.
To me Bayes is but one calculational tool, a way to build better models (i.e. those with higher predictive power), so I do not understand how Bayes can disagree with the traditional scientific method (not the strawmanned version EY likes to destroy). Again, I might be completely off, feel free to suggest what I missed.
Bayes is the well-proven (to my knowledge) framework in which you should handle learning from evidence. All the other tools can be understood in terms of how they derive from or contradict Bayes, like how engines can be understood in terms of thermodynamics.
If you let science define itself as rationality (exactly what works for epistemology), then there can be no conflict with Bayesian rationality, but I don’t think current (or traditional, ideal) science is constructed that way. Some elements of Eliezer’s straw science are definitely out there, and I’ve seen some of it first hand. On the other hand, I don’t know the science scene well enough to find good examples, which is why I asked.
Bayes is the well-proven (to my knowledge) framework in which you should handle learning from evidence. All the other tools can be understood in terms of how they derive from or contradict Bayes, like how engines can be understood in terms of thermodynamics.
Bayesian updating is a good thing to do when there is no conclusive evidence to discriminate between models and you must decide what to do next. It should be taught to scientists, engineers, economists, lawyers and programmers as the best tool available when deciding under uncertainty. I don’t see how it can be pushed any farther than that, into the realm of determining what is.
There are plenty of Bayesian examples this crowd can benefit from, such as “My code is misbehaving, what’s the best way to find the bug?”, but, unfortunately, EY does not seem to want to settle for small fry like that.
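Concrete versions of that small fry are easy to write down. Here is a minimal sketch of Bayesian bug-hunting in Python; the module names, prior, and likelihood numbers are all invented for illustration, not taken from any real project:

```python
# A minimal sketch of Bayesian bug-hunting (all names and numbers invented
# for illustration). Belief is a distribution over candidate bug locations,
# updated by Bayes' rule as test results come in.

# Prior: how likely each module is to contain the bug
# (e.g. based on recent churn, size, or past defect rates).
prior = {"parser": 0.2, "solver": 0.5, "io": 0.3}

# Likelihood: P(this test fails | the bug lives in that module).
likelihood = {"parser": 0.9, "solver": 0.3, "io": 0.1}

def update(belief, likelihood, test_failed):
    """Bayes' rule: posterior is proportional to prior times likelihood."""
    unnormalized = {
        module: p * (likelihood[module] if test_failed else 1 - likelihood[module])
        for module, p in belief.items()
    }
    total = sum(unnormalized.values())
    return {module: p / total for module, p in unnormalized.items()}

posterior = update(prior, likelihood, test_failed=True)
print(posterior)  # {'parser': 0.5, 'solver': 0.416..., 'io': 0.083...}
```

The failing test shifts probability toward the module most likely to produce that failure; choosing which test to run next is then a value-of-information calculation.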
Bayesian updating is a good thing to do when there is no conclusive evidence to discriminate between models and you must decide what to do next.
Likewise with conclusive evidence. Bayes is always right.
I don’t see how it can be pushed any farther than that, into the realm of determining what is.
I think I’ve confused you, sorry. I don’t mean to claim that Bayes implies or is able to support realism any better or worse than anything else. Bayes allocates anticipation between hypotheses. The what-is thing is orthogonal and (I’m coming to agree with you) probably useless.
1: It’s overkill in this case.

2: If you are doing science and not, say, criminal law, at some point you have to get that conclusive evidence (or at least as conclusive as it gets, like the recent Higgs confirmation). Bayes is still probably, on average, the fastest way to get there, though.
Bayes is always right.
Feel free to unpack what you mean by right. Even your best Bayesian guess can turn out to be wrong.
So? It’s correct. Maybe you use some quick approximation, but it’s not like doing the right thing is inherently more costly.
If you are doing science and not, say, criminal law, at some point you have to get that conclusive evidence (or at least as conclusive as it gets, like the recent Higgs confirmation). Bayes is still probably, on average, the fastest way to get there, though.
This get-better-evidence thing would also be recommended by Bayes+decision theory (and if it weren’t, then it would have to defer to Bayes+decision). Don’t see the relevance.
Feel free to unpack what you mean by right.
The right probability distribution is the one that maximizes the expected utility of an expected utility maximizer using that probability distribution. That’s missing a bunch of hairy stuff involving where to get the outer probability distribution, but I hope you get the point.
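Stated symbolically, with $Q$ standing in for that outer distribution and the rest of the notation mine:

$$a^{*}(P) = \arg\max_{a} \, \mathbb{E}_{P}[U(a)], \qquad P^{*} = \arg\max_{P} \, \mathbb{E}_{Q}\!\left[U\!\left(a^{*}(P)\right)\right]$$

That is, $a^{*}(P)$ is what an expected-utility maximizer does when it believes $P$, and the “right” $P$ is the one whose recommendations score best under $Q$; where $Q$ itself comes from is the hairy part being waved at.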
You can often get lucky by not using Bayesian updating. After all, that’s how science has been done for ages. What matters in the end is the superior explanatory and predictive power of the model, not how likely, simple or cute it is.
The right probability distribution is the one that maximizes the expected utility of an expected utility maximizer using that probability distribution.
So, on average, you make better decisions. I agree with that much. As I said, a nice useful tool. You can still lose even if you use it (“but I was doing everything right”, Bayesian’s famous last words), while someone who never heard of Bayes can win (and does, every 6/49 draw).
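For scale, the arithmetic behind that lottery example: a 6/49 jackpot means matching one combination out of

$$\binom{49}{6} = \frac{49!}{6!\,43!} = 13{,}983{,}816,$$

so a single ticket wins with probability about $7.2 \times 10^{-8}$. Declining to play is correct in expectation even though somebody, sooner or later, does win.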
You can often get lucky by not using Bayesian updating. After all, that’s how science has been done for ages.
It’s “gotten lucky” exactly to the extent that it follows Bayes.
What matters in the end is the superior explanatory and predictive power of the model, not how simple or cute it is.
Yes, cuteness is overridden by evidence, but there is a definite trend in physics and elsewhere that the best models have often been quite cute in a certain sense, so we can use that cuteness as a proxy for “probably right”.
As I said, a nice useful tool. You can still lose even if you use it
Yes, a useful tool, but also the provably optimal and fully general tool. You can still lose, but any other system will cause you to lose even more.
I think we are in agreement for the most part. I’m out.

EDIT: also, you should come to more meetups.

Thursday is a bad day for me...
Science goggles off, Bayes goggles back on. If that is the state of affairs in science, then we know MWI is the better one because it is simpler.
I think you are missing the point.
It’s unclear whether MWI is the simplest interpretation. If it were, it would have been uncontroversially accepted. Occam’s razor is a core principle of the standard scientific method.