This is an example of Eliezer’s extreme overconfidence. As he rightly points out, we cannot in fact construct a quantum mechanical model of a 747. Yet he asserts as absolute fact that such a model would be more accurate than our usual models.
I think it would be too. But I don’t assert this as absolute fact, much less the universal claim that reality in no way has different levels in it; especially since, as Mitchell points out, one level of reality seems to be our mental representations, which cannot be said to be mere representations of representations. They are precisely real representations.
“Mere” is the problem.
This is the point made in “A Different Universe” by Robert B. Laughlin, a Nobel Prize-winning physicist. He is a solid-state physicist and argues that:
Going from a more “fundamental” to a “higher” level requires computations that are in principle intractable. You cannot possibly avoid the use of levels of analysis; it is not just a matter of computational convenience. [I admit that the universe does the calculation, but we have no idea how.]
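To make the in-principle point concrete, here is a rough sketch (an illustration of mine, not anything from Laughlin's book): exactly representing the joint quantum state of N two-level systems takes 2^N complex amplitudes, so exact bottom-up computation fails long before N reaches the roughly 10^28 atoms of a 747.

```python
# A rough illustration (not from Laughlin): storing the joint quantum state
# of N two-level systems takes 2**N complex amplitudes, so the memory cost
# of an exact bottom-up model grows exponentially in the number of particles.

def amplitudes_needed(n_particles: int) -> int:
    """Complex amplitudes in the joint state of n two-level systems."""
    return 2 ** n_particles

for n in (10, 50, 300):
    print(f"{n:>3} particles -> {amplitudes_needed(n):.3e} amplitudes")

# Output:
#  10 particles -> 1.024e+03 amplitudes
#  50 particles -> 1.126e+15 amplitudes
# 300 particles -> 2.037e+90 amplitudes (more than there are atoms
#                  in the observable universe)
# A 747 has on the order of 1e28 atoms; 2**(1e28) amplitudes is a number no
# physically realizable computer could even index, let alone store.
```

Approximation schemes exist, of course, but they work precisely by introducing higher-level effective variables, which is the commenter's point.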
Laughlin won his Nobel for “explaining” the fractional quantum Hall effect before anyone else did. But he casts scorn on such explanations, pointing out that of the 27 solid phases of water, not one was predicted, though all have been “explained” after the fact.
Phenomena at higher levels are often, even usually, insensitive to the nature of the levels below. A good example is the statistical mechanics of gases, which hardly changed when our view of the atoms that make up gases changed from hard Newtonian balls to fuzzy quantum blobs.
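A minimal runnable sketch of that insensitivity (my toy example, not the commenter's): a macro observable formed by averaging over many micro degrees of freedom retains only a few low-order moments of the microphysics, so two very different micro models with matched moments look identical from above.

```python
# A toy illustration (assumptions mine, not real gas dynamics): coarse-graining
# washes out micro details. Two very different "micro models" with matched
# mean and variance produce the same macro-level behavior.

import random
import statistics

N_MICRO = 10_000   # micro degrees of freedom per macro measurement
N_TRIALS = 1_000   # independent macro measurements

def macro_average(micro_sampler) -> float:
    """Coarse-grain: average one micro quantity over many degrees of freedom."""
    return sum(micro_sampler() for _ in range(N_MICRO)) / N_MICRO

# "Hard Newtonian balls": micro energy uniform on [0, 2] (mean 1, variance 1/3)
hard_balls = lambda: random.uniform(0.0, 2.0)
# "Fuzzy quantum blobs": two-level toy, energy 4/3 with probability 3/4, else 0
# (also mean 1, variance 1/3)
fuzzy_blobs = lambda: 4.0 / 3.0 if random.random() < 0.75 else 0.0

for name, sampler in [("hard balls ", hard_balls), ("fuzzy blobs", fuzzy_blobs)]:
    macros = [macro_average(sampler) for _ in range(N_TRIALS)]
    print(f"{name}: macro mean = {statistics.mean(macros):.4f}, "
          f"macro spread = {statistics.stdev(macros):.5f}")

# Both lines print macro mean ~1.0000 and spread ~0.00577: at the macro
# level the two micro models are practically indistinguishable.
```

The real statistical mechanics is richer than this, but it is the same mechanism by which "hard balls versus fuzzy blobs" barely matters at the level of pressure and temperature.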
There is plenty of evidence that “fundamental” physics is just the statistical mechanics of a lower layer. All those “virtual particles”, for example: what are they all about? “Empty” space seems to be about as empty as the Super Bowl on the day of the big game. There is no evidence that “fundamental” physics is in fact fundamental at all. We don’t even have any indication of how many layers there are before we get to the turtle at the bottom, if there is one.
Doesn’t the fact that the universe is carrying out these computations mean it is feasible in principle? Our current ignorance of how this is done is irrelevant. Am I missing something?
It makes the universe Not a Computer in principle.
statistical mechanics of gases, which hardly changed when our view of the atoms that make up gases changed from hard Newtonian balls to fuzzy quantum blobs

This seems to be a map/territory confusion. A change in our model shouldn’t change what we observe. If our high-level theories changed dramatically, that would be a bad sign.
I agree with your skepticism about a QM model of classical-realm mechanics being ipso facto more accurate. Since, given the insurmountable algorithmic-complexity problems we both acknowledge, this is an untestable hypothesis, confidence should start out low. And there’s lots of circumstantial evidence that the farther down you go through the levels of organization in order to explain a higher level, the less accuracy this yields. It’s easier to explain human behavior with presupposed cognitive constructs (like pattern recognition, cognitive biases, etc.) than with neurological ones.
The map is not the terrain, but maybe the map for level 1 is the terrain for level 2.