Eli: Nice post. I think your dichotomy between “rejecting the scientific method” and “embracing insanity” is a bit excessive. I can see how some people feel that having all these multiple worlds around doesn’t seem like the “simplest” explanation: they accept Bayesian reasoning and Occam’s razor, but the notion of simplicity they use is intuitive. Thus, I would take the essence of this post to be: if one measures complexity by minimum effective description length, then MWI is a better explanation than Copenhagen.
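To make the description-length comparison concrete, here is a minimal Solomonoff-style sketch (standard notation, not taken from the post itself): each hypothesis H is weighted by

P(H) ∝ 2^{-K(H)}

where K(H) is the length in bits of the shortest program that specifies H. The weight depends only on the length of the generating laws, not on how much stuff those laws produce, which is why the many worlds of MWI need not count against its simplicity.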
I would also note that asking physicists to be strict Solomonoff/Bayesian/Occamists is asking rather a lot, considering that something like half the statisticians in the world are not Bayesian, and of those who are, relatively few know of Solomonoff induction.
Finally, while this went part of the way to answering my question, the connection to AGI safety isn’t yet obvious to me.
Tim: “impractical theoretical model of serial computation”. Just because a theory isn’t practical doesn’t make it wrong. For example, should we define randomness in a way that is easy to test for? No; if we did, we would break the very concept of what randomness means. Also, what does “serial” have to do with it? Kolmogorov complexity has no notion of time, and a serial machine can emulate a parallel one, so the distinction isn’t relevant.
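For reference, the resource-free character of the measure is visible in the definition itself (standard notation, assuming a fixed universal machine U):

K_U(x) = min { |p| : U(p) = x }

The minimization is over program length alone; there is no bound on running time and no dependence on whether U is simulated serially or in parallel. (Time-bounded variants such as Levin’s Kt do exist, but they are a different measure.)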