Many-worlds is a clearly explicable interpretation of quantum mechanics and dramatically simpler than the Copenhagen interpretation revered in the mainstream. It rules out a lot of the abnormal conclusions that people draw from Copenhagen, e.g. ascribing mystical powers to consciousness, senses, or instruments. It is true enough to use as a model for what goes on in the world; but it is not true enough to lead us to any abnormal beliefs about, e.g., morality, “quantum immortality”, or “possible worlds” in the philosophers’ sense.
Cryonics is worth developing. The whole technology does not exist yet; and ambitions to create it should not be mistaken for an existing technology. That said, as far as I can tell, people who advocate cryonic preservation aren’t deluded about this.
Mainstream science is a social institution commonly mistaken for an epistemology. (We need both. Epistemologies, being abstractions, have a notorious inability to provide funding.) It is an imperfect social institution; reforms to it are likely to come from within, not by abolishing it and replacing it with some unspecified Bayesian upgrade. Reforms worth supporting include performing and publishing more replications of studies; open-access publishing; and registration of trials as a means to fight publication bias. Oh, and better training in probability, too, but everyone can use that. However, cursing “mainstream science” is a way to lose.
Consequentialism is the ground of morality; in a physical world, what else could be? However, human reasoning about morality is embodied in cognitive algorithms that focus on things like social rule-following and the cooperation of other agents. This is why it feels like deontological and virtue ethics have something going on. We kinda have to deal with those to get on with others.
I am not sure that my metaethics accord with Eliezer’s, because I am not entirely sure what Eliezer’s metaethics are. I have my own undeveloped crank theory of ethical claims as observations of symmetry among agents, which accords with Eliezer’s comments on fairness and also Hofstadter’s superrationality, so I’ll give this a pass. It strikes me as deeply unfortunate that game theory came so recently in human history — surprise, it turns out the Golden Rule isn’t “just” morality, it’s also algebra.
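The superrationality point can be made concrete with a toy symmetric game. This is an illustrative sketch of mine with standard Prisoner's Dilemma payoffs, not anything from the thread: once you add the symmetry constraint that identical reasoners reach identical choices, the "algebra" picks cooperation.

```python
# Toy sketch of Hofstadter's superrationality (illustrative payoffs,
# not from the thread): in a symmetric game, a superrational agent
# assumes identical reasoners reach identical choices, so only the
# diagonal (symmetric) outcomes are live options.
payoff = {  # (my_move, your_move) -> my_payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

# Ordinary best-response reasoning: defection dominates, since holding
# the other player's move fixed, D always pays more than C...
dominates = all(payoff[("D", m)] > payoff[("C", m)] for m in ("C", "D"))

# ...but the symmetry constraint reduces the choice to C-vs-C or D-vs-D:
superrational_choice = max(("C", "D"), key=lambda m: payoff[(m, m)])

print(dominates, superrational_choice)  # True C
```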
The “people are crazy” maxim is a good warning against rationalization; but there are a lot of rationality-hacks to be found that exploit specific biases, cognitive shortcuts, and other areas of improvability in human reasoning. It’s probably more useful as a warning against looking for complex explanations of social behaviors which arise from historical processes rather than reasoned ones.
Is it? I think that the most widely accepted interpretation among physicists is the shut-up-and-calculate interpretation.
There are quite a few people who actively do research and debate on QM foundations, and among that group there's honestly no preferred interpretation. People are even looking for alternatives that bypass the problem entirely (e.g. GRW). The debate is fully open at the moment.
Outside of this specific field, yes, it’s pretty much shut-up-and-calculate.
Unfortunately, in some cases it is not clear what exactly you should calculate to make a good prediction. The Penrose interpretation and MWI can be used to decide—at least sometimes. Nobody has (yet) reached the scale where the difference would be easily testable, though.
The wikipedia page on the Copenhagen Interpretation says:
According to a poll at a Quantum Mechanics workshop in 1997,[13] the Copenhagen interpretation is the most widely-accepted specific interpretation of quantum mechanics, followed by the many-worlds interpretation.[14] Although current trends show substantial competition from alternative interpretations, throughout much of the twentieth century the Copenhagen interpretation had strong acceptance among physicists. Astrophysicist and science writer John Gribbin describes it as having fallen from primacy after the 1980s.[15]
and dramatically simpler than the Copenhagen interpretation
No, it is exactly as complicated. As demonstrated by its utilization of exactly the same mathematics.
It rules out a lot of the abnormal conclusions that people draw from Copenhagen, e.g. ascribing mystical powers to consciousness, senses, or instruments.
It is not without its own extra entities of an equally enormous additive nature, however; and those abnormal conclusions are as valid from the CI as quantum immortality is from MWI.
No, it is exactly as complicated. As demonstrated by its utilization of exactly the same mathematics.
Not all formalizations that give the same observed predictions have the same Kolmogorov complexity, and this is true even for much less rigorous notions of complexity. For example, consider a computer program that when given a positive integer n, outputs the nth prime number. One simple thing it could do is simply use trial division. But another could use some more complicated process, like say brute force searching for a generator of (Z/pZ)*.
In this case, the math being used is pretty similar, so the complexity shouldn’t be that different. But that’s a more subtle and weaker claim.
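To make the point concrete, here is a sketch in Python (my own illustration, not from the thread): two programs with identical observable behavior, one using trial division, one using a brute-force search for an element of maximal order in (Z/pZ)*—a Lucas-test reading of "searching for a generator," which is an assumption about what was meant—with visibly different internal complexity.

```python
def is_prime_trial(n):
    """Simple approach: trial division."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def order_mod(a, n):
    """Multiplicative order of a mod n, or None if a^k never reaches 1."""
    x = 1
    for k in range(1, n):
        x = x * a % n
        if x == 1:
            return k
    return None

def is_prime_group(n):
    """Complicated approach: n > 1 is prime iff (Z/nZ)* contains an
    element of order n - 1, i.e. a generator of a group of size n - 1
    (the Lucas primality test, done here by brute force)."""
    if n < 2:
        return False
    return any(order_mod(a, n) == n - 1 for a in range(1, n))

def nth_prime(n, is_prime):
    """The observable behavior both programs share: the nth prime."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if is_prime(candidate):
            count += 1
    return candidate

print(nth_prime(10, is_prime_trial), nth_prime(10, is_prime_group))  # 29 29
```

Both functions agree on every input, yet a reasonable complexity measure over their descriptions or their execution traces would rank them differently.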
Not all formalizations that give the same observed predictions have the same Kolmogorov complexity[.]
Is that true? I thought Kolmogorov complexity was “the length of the shortest program that produces the observations”—how can that not be a one place function of the observations?
Yes, insofar as the output is larger than the set of observations. Take MWI, for example: the output includes all the parts of the wave-branch that we can't see. In contrast, Copenhagen only has outputs that we by and large do see. So the key issue here is that outputs and observable outputs aren't the same thing.
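A toy way to see the output-versus-observation distinction (a hypothetical mini-setup of mine, not real Kolmogorov complexity, which is uncomputable): treat each "theory" as a program source paired with the world-history it prints, and suppose we only get to observe part of that history.

```python
# Toy sketch (hypothetical; real Kolmogorov complexity is uncomputable).
# Each "theory" is a program source paired with the world-history it
# prints; we can only ever observe the 'a' parts of the world.
theories = {
    "print('ab' * 4)": "abababab",  # big output, mostly unobservable
    "print('a' * 4)": "aaaa",       # outputs only what is observed
}

def observable(world):
    """The portion of a world-history that we actually get to see."""
    return "".join(ch for ch in world if ch == "a")

observations = "aaaa"

# Both theories reproduce the observations...
matching = [src for src, out in theories.items()
            if observable(out) == observations]

# ...but the shortest source that reproduces the OBSERVATIONS need not
# be the shortest one required when the demanded output includes the
# unobserved parts of the world as well.
shortest_for_observations = min(matching, key=len)

print(shortest_for_observations)  # print('a' * 4)
```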
Not all formalizations that give the same observed predictions have the same Kolmogorov complexity, and this is true even for much less rigorous notions of complexity.
Sure. But MWI and CI use the same formulae. They take the same inputs and produce the same outputs.
Everything else is just that—interpretation.
One simple thing it could do is simply use trial division. But another could use some more complicated process, like say brute force searching for a generator of (Z/pZ)*.
And those would be different calculations.
In this case, the math being used is pretty similar,
The interpretation in this context can imply unobserved output. See the discussion with dlthomas below. Part of the issue is that the interpretation isn’t separate from the math.
“Entities must not be replicated beyond necessity”. Both interpretations violate this rule. The only question is which violates it more. And the answer to that seems to be one purely of opinion.
So throwing out the extra stuff—they’re using exactly the same math.
Outside of this specific field, yes, it’s pretty much shut-up-and-calculate.
Yes, doing the calculation and getting the right result is worth many interpretations, if not all of them together. Besides, interpretations usually give you more than the truth, which is awkward.
It is not without its own extra entities of an equally enormous additive nature, however; and those abnormal conclusions are as valid from the CI as quantum immortality is from MWI.
-- I speak as someone who rejects both.
Yes, insofar as the output is larger than the set of observations.
Ah, fair. So in this case, we are imagining a sequence of additional observations (from a privileged position we cannot occupy) to explain.
In this case, the math being used is pretty similar,
No, it’s the same math.
So throwing out the extra stuff—they’re using exactly the same math.
I agree with everything you said here, but I didn’t want to use so many words.