It certainly doesn’t represent mine. The architectural shortcomings of narrow AI do not lend themselves to gradual improvement. At some point, you’re hamstrung by your inability to solve certain crucial mathematical issues.
You add a parallel module to solve the new issue and a supervisory module to arbitrate between them. More elaborate schemes would likely work better in many particular situations, but even this simple one suggests there is little substance to your criticism. See Minsky’s Society of Mind, or the evolutionary psychology literature on modularity, for more details.
Sure, you can add more modules. Except that then you’ve got a car-driving module, and a walking module, and a stacking-small-objects module, and a guitar-playing module, and that’s all fine until somebody needs to talk to it. Then you’ve got to write a conversation module that can pass the Turing test, and (as it turns out) having a self-driving car really doesn’t make that any easier.
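To make the disagreement concrete, here is a minimal sketch of the “parallel modules plus a supervisory arbiter” scheme described above. Everything in it is an illustrative assumption, not anyone’s actual architecture: the module names (DrivingModule, WalkingModule), the idea that each module reports a confidence score, and the winner-take-all dispatch rule are all made up for the example.

```python
from abc import ABC, abstractmethod


class Module(ABC):
    """A narrow module: says how confident it is on a task, then acts on it."""

    @abstractmethod
    def confidence(self, task: str) -> float: ...

    @abstractmethod
    def act(self, task: str) -> str: ...


class DrivingModule(Module):  # hypothetical narrow module
    def confidence(self, task: str) -> float:
        return 0.9 if "drive" in task else 0.0

    def act(self, task: str) -> str:
        return "steering toward the destination"


class WalkingModule(Module):  # hypothetical narrow module
    def confidence(self, task: str) -> float:
        return 0.8 if "walk" in task else 0.0

    def act(self, task: str) -> str:
        return "taking a step"


class Supervisor:
    """Arbitrates between modules: hands the task to whichever claims the
    highest confidence (an assumed winner-take-all rule)."""

    def __init__(self, modules: list[Module]):
        self.modules = modules

    def dispatch(self, task: str) -> str:
        best = max(self.modules, key=lambda m: m.confidence(task))
        if best.confidence(task) == 0.0:
            # No module claims the task: this is the gap the reply above
            # points at. Adding driving and walking modules buys nothing here.
            return "no module can handle this"
        return best.act(task)


if __name__ == "__main__":
    agent = Supervisor([DrivingModule(), WalkingModule()])
    print(agent.dispatch("drive to the store"))   # steering toward the destination
    print(agent.dispatch("hold a conversation"))  # no module can handle this
```

The last call is the point of contention: under this scheme, a new competence arrives only as a whole new module, and nothing the existing modules know transfers to it.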
Do you realize that human intelligence evolved exactly that way? It’s a self-swimming fish brain with lots of modules haphazardly attached.
Evolution and human engineers don’t work in the same ways. It also took evolution three million years.
True enough, but there is no evidence that general intelligence is anything more than a large collection of specialized modules.
I believe you, but intuitively the first objection that comes to my mind is that a car-driving AI doesn’t have the same type of “agent-ness” and introspection that an AGI would surely need. I’d love to read more about it.