@Will: The point is not that you should necessarily run the algorithm that would be optimal if you had unlimited computational resources. The point is that by understanding what that algorithm does, you have a better chance of coming up with a good approximation that you can run in a reasonable amount of time. If you are trying to build a locomotive, it helps to understand Carnot engines.
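To make the "understand the ideal, then approximate it" point concrete, here is a minimal sketch (my own illustration, not from the thread): the "unlimited compute" answer is an exact Bayesian posterior mean for a coin's bias computed by brute-force numerical integration, and the cheap alternative is an importance-sampling estimate whose design only makes sense once you know what the exact computation is doing. The function names and numbers are invented for the example.

```python
import random

def exact_posterior_mean(heads, tails, grid_size=10_001):
    """'Ideal' answer: posterior mean of a coin's bias under a uniform prior, by dense numerical integration."""
    num = den = 0.0
    for i in range(grid_size):
        p = i / (grid_size - 1)
        likelihood = (p ** heads) * ((1 - p) ** tails)
        num += p * likelihood
        den += likelihood
    return num / den

def sampled_posterior_mean(heads, tails, n_samples=2_000):
    """Cheap approximation: self-normalised importance sampling from the uniform prior."""
    num = den = 0.0
    for _ in range(n_samples):
        p = random.random()                    # draw from the uniform prior
        w = (p ** heads) * ((1 - p) ** tails)  # weight by the likelihood
        num += p * w
        den += w
    return num / den

print(exact_posterior_mean(7, 3))    # ~0.667: what the "ideal" computation gives
print(sampled_posterior_mean(7, 3))  # close to it, at a fraction of the cost
```

The approximation is only trustworthy because we know exactly which quantity the ideal computation is estimating; that is the sense in which understanding the Carnot engine helps build the locomotive.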
There are other scenarios in which running the "optimal" algorithm should be considered harmful. Consider a nascent sysop vaporising the oceans purely as a side effect of the computation needed to learn how to deal with humanity (if that much compute power is needed, of course).
Probability theory was not designed to tell you how to win; it was designed as a way to arrive at accurate statements about the world, assuming an observer whose computations have no impact on that world. This is a reasonable formalism for science, but it covers only a fraction of what winning in the real world requires, and is sometimes antithetical to winning. So if you want your system to win, don't necessarily approximate probability theory to the best of your ability.
Ideally we want a theory of how to change energy into winning, not a theory of how to change information and a prior into accurate hypotheses about the world, which is what probability theory gives us and is very good at.
You need accurate information about the world in order to figure out how to “change energy into winning.”
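As an illustration of the division of labour in this exchange (a sketch of my own, not anything from the thread): probability theory supplies the step that turns a prior plus evidence into beliefs, and winning requires an additional step that turns those beliefs plus a utility function into an action. The hypotheses, utilities, and function names below are invented for the example.

```python
def posterior(prior, likelihood, evidence):
    """Bayes' rule: a prior plus information -> accurate hypotheses about the world."""
    unnorm = {h: prior[h] * likelihood[h][evidence] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

def best_action(beliefs, utility):
    """The extra step probability theory does not supply: pick the action that wins in expectation."""
    return max(utility, key=lambda a: sum(beliefs[h] * utility[a][h] for h in beliefs))

prior = {"rain": 0.3, "dry": 0.7}
likelihood = {"rain": {"dark_clouds": 0.8, "clear_sky": 0.2},
              "dry":  {"dark_clouds": 0.2, "clear_sky": 0.8}}
utility = {"take_umbrella":  {"rain": 1.0, "dry": 0.7},
           "leave_umbrella": {"rain": 0.0, "dry": 1.0}}

beliefs = posterior(prior, likelihood, "dark_clouds")  # accurate beliefs about the world
print(beliefs)                                         # {'rain': ~0.63, 'dry': ~0.37}
print(best_action(beliefs, utility))                   # 'take_umbrella'
```

The first function is the part the comment says probability theory is very good at; the second is where accurate beliefs are actually converted into "winning", which is why the reply insists you still need the first.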