You still don’t get it. Correct beliefs don’t spring full-grown from the forehead of Omega—they come from observations. And to get observations, you have to be doing something… most likely, something useful.
That’s why your math is wrong for observed history—humans nearly always get “useful” first, then “correct”.
Or to put it another way, in theory you can get to practice from theory, but in practice, you almost never do.
Let’s assume that what you say is true, that utility precedes accuracy (and I happen to believe this is the case).
That does not in any way change the math. Perhaps you can give me some examples of (more) correct beliefs that are less useful than a related and corresponding (more) incorrect belief?
It doesn’t matter if you have an Einstein’s grasp of the physical laws, a Ford’s grasp of the mechanics, and a lawyer’s mastery of traffic law… you still have to practice in order to learn to drive.
Conversely, as long as you learn correct procedures, it doesn’t matter if you have a horrible or even ludicrously incorrect grasp of any of the theories involved.
This is why, when one defines “rationality” in terms of strictly abstract mentations and theoretical truths, one tends to lose in the “real world” to people who have actually practiced winning.
And I wasn’t arguing that definition, nor did I perceive any of the above discussion to be related to it. I’m arguing the relative utility of correct and incorrect beliefs, and the way in which the actual procedure of testing a position is related to the expected usefulness of that position.
To use your analogy, you and I certainly have to practice in order to learn to drive. If we’re building a robot to drive, though, it damn sure helps to have a ton of theory ready to use. Does this eliminate the need for testing? Of course not. But having a correct theory (to the necessary level of detail) means that testing can be done in months or years instead of decades.
To the extent that my argument and the one you mention here interact, I suppose I would say that “winning” should include not just individual instances (skills we can practice explicitly) but also success in areas with which we are unfamiliar. That, I suggest, is the role of theory and the pursuit of correct beliefs.
To use your analogy, you and I certainly have to practice in order to learn to drive. If we’re building a robot to drive, though, it damn sure helps to have a ton of theory ready to use. Does this eliminate the need for testing? Of course not. But having a correct theory (to the necessary level of detail) means that testing can be done in months or years instead of decades.
Actually, I suspect that this is not only wrong, but terribly wrong. I might be wrong, but it seems to me that robotics has gradually progressed from having lots of complicated theories and sophisticated machinery towards simple control systems and improved sensory perception… and that this progression happened because the theories didn’t work in practice.
So, AFAICT, the argument that “if you have a correct theory, things will go better” is itself one of those ideas that work better in theory than in practice, because usually the only way to get a correct theory is to go out and try stuff.
Hindsight bias tends to make us ignore the fact that most discoveries come about through essentially random ideas and tinkering. We don’t like the idea that it isn’t our “intelligence” that’s responsible, so it’s easy to say, in hindsight, that the robotics theories were wrong, and that of course with the right theory those mistakes would never have been made.
But this is delusion. In theory, you could have a correct theory before any practice, but in practice, you virtually never do. (And pointing to nuclear physics as a counterexample is like pointing to lottery winners as proof that you can win the lottery; in theory, you can win the lottery, but in practice, you don’t.)
Actually, I suspect that this is not only wrong, but terribly wrong. I might be wrong
You are wrong. The above is a myth promoted by the Culture of Chaos and the popular media. Advanced modern robots use advanced modern theory—e.g. particle filters to integrate multiple sensory streams to localize the robot (a Bayesian method).
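For a concrete sense of what that theory buys you, here is a minimal one-dimensional particle-filter sketch (the scenario and all noise parameters are illustrative, not from any real robot): each cycle predicts particle motion, reweights particles by measurement likelihood (Bayes’ rule), and resamples.

```python
import math
import random

# Minimal 1-D particle filter: a hypothetical robot drives along a line,
# commanding +1.0 m per step, and gets a noisy reading of its absolute
# position (e.g. range to a wall at the origin). All parameters and the
# scenario are illustrative.

N_PARTICLES = 1000
MOTION_NOISE = 0.1   # std. dev. of per-step motion error (m)
SENSOR_NOISE = 0.5   # std. dev. of range-sensor error (m)

def likelihood(particle, measurement):
    """P(measurement | robot is at particle), a Gaussian sensor model."""
    return math.exp(-((measurement - particle) ** 2) / (2 * SENSOR_NOISE ** 2))

def step(particles, control, measurement):
    """One predict / update / resample cycle: the Bayesian core."""
    # Predict: move each particle by the commanded control, plus noise.
    moved = [p + control + random.gauss(0, MOTION_NOISE) for p in particles]
    # Update: weight each particle by how well it explains the measurement.
    weights = [likelihood(p, measurement) for p in moved]
    # Resample: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
true_pos = 0.0
particles = [random.uniform(-10.0, 10.0) for _ in range(N_PARTICLES)]
for _ in range(20):
    true_pos += 1.0                                # robot actually moves
    z = true_pos + random.gauss(0, SENSOR_NOISE)   # noisy sensor reading
    particles = step(particles, 1.0, z)
estimate = sum(particles) / len(particles)
print(f"true position: {true_pos:.2f} m, filter estimate: {estimate:.2f} m")
```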
And this is even more true for the parts of building a robot that have to be handled before the AI: physics, metallurgy, engineering, computer hardware design, etc.
Without theory—good, workably-correct theory—the search space for innovations is just too large. The more correct the theory, the less space has to be searched for solution concepts. If you’re going to build a rocket, you sure as hell better understand Newton’s laws. But things will go much smoother if you also know some chemistry, some materials science, and some computer science.
For a solid example of theory taking previous experimental data and massively narrowing the search space, see RAND’s first report on the feasibility of satellites.
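As a worked illustration of theory pruning a search space (my own toy example, not taken from the RAND report): the Tsiolkovsky rocket equation by itself tells you which propellants could possibly reach orbit, before a single test flight.

```python
import math

# Tsiolkovsky's rocket equation: delta_v = Isp * g0 * ln(m0 / mf).
# Inverting it gives the propellant mass ratio a design must achieve,
# with no test flights at all. Figures below are rough illustrative
# values, not a real vehicle design.

G0 = 9.81  # standard gravity, m/s^2

def mass_ratio_required(delta_v, isp):
    """Minimum full/empty mass ratio (m0/mf) for a given delta-v and Isp."""
    return math.exp(delta_v / (isp * G0))

TARGET_DV = 9400.0  # rough delta-v to reach low Earth orbit, m/s

for name, isp in [("black powder", 80), ("kerosene/LOX", 300), ("hydrogen/LOX", 450)]:
    ratio = mass_ratio_required(TARGET_DV, isp)
    print(f"{name:>13}: Isp = {isp:3d} s -> mass ratio {ratio:,.1f}")

# Black powder needs a mass ratio around 160,000:1, which is physically
# absurd, so that entire branch of the design space is pruned on paper.
```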
Conversely, as long as you learn correct procedures, it doesn’t matter if you have a horrible or even ludicrously incorrect grasp of any of the theories involved.
IAWYC, but procedures are brittle. Theory lets you generalize procedures for new contexts, which you can then practice.
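A toy illustration of that brittleness (my example, with made-up numbers): a memorized stopping-distance table fails as soon as conditions change, while the formula it was compiled from generalizes.

```python
# The "procedure" is a memorized table of stopping distances; the "theory"
# is the kinematics formula d = v^2 / (2 * mu * g) it was compiled from.
# All numbers are made up for illustration.

MU, G = 0.7, 9.81  # assumed dry-road friction coefficient; gravity (m/s^2)

# Procedure: stopping distances (m) memorized for a few speeds (m/s).
STOPPING_TABLE = {10: 7.3, 20: 29.1, 30: 65.5}

def stopping_distance_procedure(speed):
    return STOPPING_TABLE[speed]           # brittle: only works on the table

def stopping_distance_theory(speed, mu=MU):
    return speed ** 2 / (2 * mu * G)       # generalizes to any speed/surface

print(stopping_distance_theory(25))          # unpracticed speed: ~45.5 m
print(stopping_distance_theory(25, mu=0.2))  # new context (ice): ~159.3 m
try:
    print(stopping_distance_procedure(25))
except KeyError:
    print("procedure has no entry for 25 m/s; off the table, it just breaks")
```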