So you need to take many shortcuts, and to have heuristics and biases so you can predict your environment reasonably well. So you only run statistics over certain parts of your inputs and internal workings. Discovering new places to run statistics is hard: if you don’t currently run statistics there, you have no reason to think running statistics over those variables is a good idea. It requires leaps of faith, and these can lead you down blind alleys.
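A minimal sketch of the idea, purely illustrative (the class, method names, and numbers here are my own invention, not from the original): an agent that keeps running statistics only over a fixed, pre-chosen subset of its inputs is blind to regularities everywhere else, and adding a new variable to track is a gamble made with no prior evidence that it will pay off.

```python
import random
from statistics import mean

class BoundedAgent:
    """Toy agent that only runs statistics over a fixed subset of inputs."""

    def __init__(self, tracked_keys):
        # The agent is blind to any regularity outside these keys.
        self.tracked = {k: [] for k in tracked_keys}

    def observe(self, inputs):
        # Statistics accumulate only for the pre-chosen variables;
        # every other input is silently discarded.
        for k in self.tracked:
            if k in inputs:
                self.tracked[k].append(inputs[k])

    def predict(self, key):
        # Predictions are only possible where statistics were kept.
        samples = self.tracked.get(key)
        if not samples:
            return None  # no reason to even suspect a pattern here
        return mean(samples)

    def leap_of_faith(self, key):
        # Start tracking a new variable with no evidence it will pay off;
        # everything already observed about it is lost, so this gamble
        # may well be a blind alley.
        self.tracked.setdefault(key, [])

agent = BoundedAgent(["temperature"])
for _ in range(100):
    agent.observe({"temperature": random.gauss(20, 2),
                   "humidity": random.gauss(50, 5)})  # humidity is ignored

print(agent.predict("temperature"))  # ~20.0
print(agent.predict("humidity"))     # None: never ran statistics there
```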
“The car driving across the Nevada desert still strikes me as being closer to the thermostat or the toilet that regulates itself”
Sounds true to me; an ultra-narrow AI is closer to a trivial optimization process such as a thermostat than to a general intelligence.
Eliezer: a good intuition pump. The amount of argument over Einstein’s intelligence relative to the village idiot’s, rather than over the larger point, is odd.
It seems to me there are so, so, so many apparently trivial things a village idiot knows/can do that a chimp doesn’t/can’t; on most reasonable metrics, that difference is indeed larger than the one between the village idiot and Einstein. The point is not about any specific metric, but about the badness of our intuitions.
“You can still do one heck of a lot better than a human.” (LOGI 3.1: Advantages of minds-in-general)