Upvoted. This is definitely the right question to ask here… thanks for reminding me.
I hesitate to speculate on what gaps exist in Scott Aaronson’s knowledge. His command of QM and complexity theory greatly exceeds mine.
[...]
OK hesitation over. I will now proceed to impertinently speculate on possible gaps in Scott Aaronson’s knowledge and their implications!
Assuming he still believes that collapse-postulate theories of QM are just as plausible as Many Worlds, I could say that he might not appreciate the complexity penalty that collapse theories incur… except Scott Aaronson is the Head Zookeeper of the Complexity Zoo! So he knows about complexity classes and calculating the complexity of algorithms inside out. Perhaps this knowledge doesn’t help him naturally calculate the informational complexity of the parts of scientific theories that are phrased in natural languages like English? I know my mind doesn’t automatically do this, and it’s not a habit most people have. Another possibility is that it’s not obvious to him that Occam’s razor should apply this broadly. Either way, these would point to limitations in more fundamental layers of his scientific thinking. That could leave him struggling to distinguish good new theories worth investigating from bad ones… or make forming compact representations of his own research findings more difficult. He would consequently discover less, more slowly, and describe what he discovers less well.
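To make the “complexity penalty” concrete, here is a toy sketch of my own (not anything Aaronson or Eliezer wrote) of how an Occam-style prior penalizes every extra bit a theory needs to state its postulates. The 100-bit cost for the collapse postulate is purely hypothetical:

```python
# Toy Occam penalty: the prior probability of a theory falls off as
# 2^-(description length of its postulates), Solomonoff-style.
def occam_prior(extra_bits: int) -> float:
    """Each additional bit needed to state a theory's postulates
    halves its prior probability (up to normalization)."""
    return 2.0 ** -extra_bits

# Hypothetical numbers: suppose stating the collapse postulate (when,
# where, and how collapse happens) costs 100 bits beyond the bare
# unitary evolution that both theories share.
print(occam_prior(0))    # 1.0      -- many-worlds: no extra postulate
print(occam_prior(100))  # ~7.9e-31 -- collapse: heavily penalized
```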
OK… wild speculation complete!
My actual take has always been that he probably understands QM correctly but is just exceedingly well-mannered and diplomatic with his academic colleagues. Even if he felt Many Worlds was now the sounder theory, he would probably avoid being a blowhard about it. He doesn’t need to ruffle his buddies’ feathers: he has to work with these people, go to conferences with them, and have his papers reviewed by them. He may also know it’s pointless to push others to switch to a new interpretation if they don’t see the fundamental reason why switching is right. And the arguments needed to convince them have inference chains too long to present in most venues.
Scott Aaronson is the Head Zookeeper of the Complexity Zoo! So he knows about complexity classes and calculating the complexity of algorithms inside out. Perhaps this knowledge doesn’t help him naturally calculate the informational complexity of the parts of scientific theories that are phrased in natural languages like English?
Just to be clear: there are two unrelated notions of “complexity” blurred together in the above comment. The Complexity Zoo is about computational complexity theory, which studies how the run-time of an algorithm scales with the size of its input (and thereby sorts problems into classes like P, EXPTIME, etc.).
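A quick toy illustration of what “scaling” means here (my own sketch, not from the Zoo): count the steps two algorithms take on the same inputs and watch one grow linearly while the other grows exponentially:

```python
from itertools import combinations

def linear_scan_steps(xs):
    """Polynomial (in fact linear) time: one step per element."""
    return sum(1 for _ in xs)

def subset_search_steps(xs):
    """Exponential time: a brute-force search touches all 2^n subsets."""
    return sum(1 for r in range(len(xs) + 1) for _ in combinations(xs, r))

for n in (4, 8, 16):
    xs = list(range(n))
    print(n, linear_scan_steps(xs), subset_search_steps(xs))  # n vs 2^n
```

Run time is the only quantity of interest here; what the algorithms actually compute doesn’t matter.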
Kolmogorov complexity is unrelated: it is the minimum number of bits (in some fixed universal programming language) required to represent a given object, such as a hypothesis. Eliezer’s argument for MWI rests on Kolmogorov complexity and has nothing to do with computational complexity theory.
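By contrast, here is a sketch of the Kolmogorov notion (again mine, and only an approximation, since true Kolmogorov complexity is uncomputable): compressed length upper-bounds description length, and a patterned string has a far shorter description than a random one of the same size, even though printing either takes about the same run time:

```python
import os
import zlib

def compressed_bits(s: bytes) -> int:
    """Upper bound on Kolmogorov complexity: the size in bits of the
    zlib-compressed string. K(s) itself is uncomputable; compression
    can only bound it from above."""
    return 8 * len(zlib.compress(s))

regular = b"ab" * 50_000       # describable as: print "ab" 50,000 times
random_ = os.urandom(100_000)  # almost surely has no shorter description

print(compressed_bits(regular))  # a few thousand bits: highly compressible
print(compressed_bits(random_))  # ~800,000 bits: incompressible
```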
I’m sure Scott Aaronson is familiar with both, of course; I just want to make sure LWers aren’t confused about it.
How would Aaronson benefit from believing in MWI, over and above knowing that it’s a valid interpretation?
Complexity is mentioned very often on LW, but is there no post that works out the different notions?
http://lesswrong.com/lw/jp/occams_razor/
http://lesswrong.com/lw/q3/decoherence_is_simple/
http://en.wikipedia.org/wiki/Computational_complexity_theory
Ben G. had some interesting thoughts on the topic in 1993.