For some of these, you can play the “magic wand” game to probe the connections between nodes in your belief network:
More hardware—suppose you waved a magic wand just now, and suddenly there were 10 times as many computers around (or they all got 10 times faster and bigger); how do you suppose that would get us closer to digital intelligence?
Bigger data—magic wand gives you access to every word ever spoken by a human, magically transcribed; how does that get us closer? From the perspective of AGI, statistical machine translation, no matter how wondrous-looking, is just plain dumb—it does not even pretend to be able to generalize insights.
Better algorithms—this should really be “faster algorithms”; by definition, “better” is whatever gets us closer to AGI. But short of a breakthrough in complexity theory, optimized algorithms are just an equivalent of faster hardware. Precisely which algorithms would bring us closer to AI if we could speed them up a lot with the magic wand? I can’t really see a quicker sort, or matrix inverse, or even a faster traveling salesman doing it (if that were the only algorithm in that class we knew how to speed up).
More hardware [...] how do you suppose that would get us closer to digital intelligence?
If Minsky’s “Society of Mind” is close to accurate, then with enough separate “narrow” agents operating, we could solve any problem we might encounter—call this the “Eusocial Generalization” approach. That is, rather than actually solving the problem of general intelligence, just make programs that individually solve every last problem we can think of—and then run them all at once.
Horridly inefficient, but if we had magically infinite computational power available we could at least implement it.
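The “Eusocial Generalization” idea above can be sketched as a pool of single-purpose agents plus a dispatcher that tries them all. This is a toy illustration only—the agent names and the dispatch scheme are invented for the example, not anything from Minsky’s actual architecture:

```python
# A minimal sketch of "Eusocial Generalization": many narrow agents, each of
# which only recognizes and solves one kind of problem, run over every input.
from typing import Any, Callable, Optional

class NarrowAgent:
    """One special-purpose solver: a recognition predicate plus a routine."""
    def __init__(self, can_handle: Callable[[Any], bool],
                 solve: Callable[[Any], Any]):
        self.can_handle = can_handle
        self.solve = solve

def dispatch(agents: list[NarrowAgent], problem: Any) -> Optional[Any]:
    """Try every agent; with magically infinite hardware these would all run
    in parallel, and we'd take whichever one produces an answer."""
    for agent in agents:
        if agent.can_handle(problem):
            return agent.solve(problem)
    return None  # a coverage gap: no narrow agent handles this problem

# Two toy narrow agents: one sorts lists, one reverses strings.
agents = [
    NarrowAgent(lambda p: isinstance(p, list), sorted),
    NarrowAgent(lambda p: isinstance(p, str), lambda s: s[::-1]),
]

print(dispatch(agents, [3, 1, 2]))  # [1, 2, 3]
print(dispatch(agents, "abc"))      # 'cba'
print(dispatch(agents, 42))         # None — the approach only covers what we thought to build
```

The `None` case is the whole weakness of the approach: coverage is exactly the set of problems someone anticipated, which is why it dodges rather than solves general intelligence.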
As to “bigger data”—an element can be part of the solution without being capable of providing the entire solution on its own. Highly rigorous relational databases allow pattern-matching algorithms to at least perform a superior analysis.
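To illustrate that point with a contrived example (the table and data here are invented): once the data is relational, a question a fuzzy text scanner would struggle with becomes a single query feeding the pattern matcher.

```python
# Structured (relational) storage lets even a crude analysis step do more
# than raw text would: "which speaker repeats a word?" is one GROUP BY.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE utterances (speaker TEXT, word TEXT)")
conn.executemany("INSERT INTO utterances VALUES (?, ?)",
                 [("alice", "hello"), ("alice", "world"),
                  ("bob", "hello"), ("bob", "hello")])

rows = conn.execute("""
    SELECT speaker, word, COUNT(*) AS n
    FROM utterances
    GROUP BY speaker, word
    HAVING n > 1
""").fetchall()
print(rows)  # [('bob', 'hello', 2)]
```

The database does no generalizing itself; it just hands downstream algorithms data clean enough that their pattern matching has something to bite on.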