I agree with your skepticism about a QM model of classical-realm mechanics being ipso facto more accurate. Since we agree that insurmountable algorithmic-complexity problems make this an untestable hypothesis, confidence should start out low. And there’s plenty of circumstantial evidence that the farther you go down the levels of organization in order to explain a higher level, the less accuracy you gain. It’s easier to explain human behavior with presupposed cognitive constructs (like pattern recognition, cognitive biases, etc.) than with neurological ones.
The map is not the terrain, but maybe the map for level 1 is the terrain for level 2.
“Mere” is the problem.