It’s possible that, if the feasibility just isn’t there yet no matter the funding, it’ll turn out like nanotechnology—funding for molecule-sized robots that gets spent on chemistry instead. (I wonder what the “instead” would be in this case.)
Narrow AI and machine learning?
Sounds about right. With the occasional driverless car, which is really pretty amazing.
I think a working AGI is more likely to result from expanding or generalizing a working driverless car than from an academic program somewhere. A program to improve the “judgement” of a working narrow AI strikes me as a much more plausible route to AGI.
Our evolutionary history would seem to support this view: to a first approximation, general intelligence effectively evolved by stacking one narrow-intelligence module on top of another.
Spiders are pretty narrow intelligence, rats considerably less so.
And Legoland is built by stacking bricks. But try deriving Legoland by generalizing a 2x2 blue square.
Note that the driverless car itself came from “an academic program somewhere.”
There are proverbs about how trying to generalize your code will never get you to AGI. These proverbs are true, and they stay true when the code in question is a driverless car. I might worry to some degree about free-form machine learning algorithms at hedge funds, but not about generalizing driverless cars.
There go my wild theories about the Cars backstory.
Fear not. There is actual research being done on making self-driving cars more anthropomorphic, in order to enable better communication with pedestrians.
Current narrow AIs are unlikely to generalize into AGI, but they contain parts that can be used to build one :)
Narrow-AI driverless cars will probably not decide that they need to take over the world in order to get to their destination in the most efficient way. Even if taking over the world really would be more efficient, I would be very surprised if they modeled the world that generally just for the purposes of driving.
There’s only so much world-modeling and general capability you need in order to solve very domain-specific problems.
The reason for expanding a narrow AI is the same as the reason a tool AI doesn’t stay restricted: the narrow domain it is designed to function in is embedded in the complexity of the real world. Eventually someone is going to realize that the agent/AI could provide better service if it understood more about how its job fits into the broader concerns of its passengers/users/customers, and decide to do something about it.
AIXI is so widely applicable because it tries to model every possible program that the universe could be running, and eventually it starts finding the programs that fit.
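For reference, this is (roughly) Hutter’s one-line definition of AIXI; the sum over every program q that a universal machine U could be running, weighted by 2^{-ℓ(q)}, is the “model every possible program” part:

$$a_k \;=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[r_k + \cdots + r_m\big] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

That sum over all programs consistent with the history so far is also why AIXI is incomputable; anything a real driverless-car stack ships will only ever be a narrow, computable approximation of something like this.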
Driverless cars may start modeling things other than driving, and may even start trying to predict where their users are going to be, but I suspect they would just track user habits or their smartphones, rather than trying to figure out their owners’ economic and psychological incentives for going to different places.
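And “just track user habits” can be almost trivially shallow. A toy sketch under that assumption (all names here are hypothetical, not any vendor’s actual system):

```python
from collections import Counter, defaultdict
from datetime import datetime
from typing import Optional

class HabitPredictor:
    """Guess the next destination from past trips, with no model of *why* the owner goes anywhere."""

    def __init__(self):
        # (weekday, hour) -> how often each destination was chosen in that slot
        self.history = defaultdict(Counter)

    def record_trip(self, when: datetime, destination: str) -> None:
        self.history[(when.weekday(), when.hour)][destination] += 1

    def predict(self, when: datetime) -> Optional[str]:
        counts = self.history[(when.weekday(), when.hour)]
        return counts.most_common(1)[0][0] if counts else None

# After a couple of Monday 8am commutes, the car guesses "office" next Monday at 8am.
p = HabitPredictor()
p.record_trip(datetime(2024, 1, 1, 8), "office")
p.record_trip(datetime(2024, 1, 8, 8), "office")
print(p.predict(datetime(2024, 1, 15, 8)))  # -> office
```

Nothing in there needs, or benefits from, a model of the owner’s incentives.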
Trying to build a car that’s generally capable of driving and figuring out new things about driving might be dangerous, but there are plenty of useful features to give people before they get there.
Just wondering, is your intuition coming from the tighter tie to reality that a driverless car would have?
“It was terrible, officer … my mother, she was so happy with her new automatic car! It seemed to anticipate her every need! Even when she forgot where she wanted to go, in her old age, the car would remember and take her there … she had been so lonely ever since da’ passed. I can’t even fathom how the car got into her bedroom, or what it was, oh god, what it was … doing to her! The car, it still … it didn’t know she was already … all that blood …”
Has LW, or some other forum, held any useful previous discussion on this topic?
Not that I know of, but I’m pretty sure billswift’s position does not represent that of most LWers.
It certainly doesn’t represent mine. The architectural shortcomings of narrow AI do not lend themselves to gradual improvement. At some point, you’re hamstrung by your inability to solve certain crucial mathematical issues.
You add a parallel module to solve the new issue and a supervisory module to arbitrate between them. There are more elaborate systems that could likely work better for many particular situations, but even this simple system suggests there is little substance to your criticism. See Minsky’s Society of Mind, or some papers on modularity in evolutionary psych, for more details.
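As a minimal sketch of what “a parallel module plus a supervisory module to arbitrate” could look like (the names and the confidence-bidding scheme here are hypothetical; real arbitration systems, and Minsky’s agents, are far more elaborate):

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class Module:
    name: str
    confidence: Callable[[Any], float]  # how strongly this module claims the current situation
    act: Callable[[Any], str]           # what it would do if given control

def supervisor(modules: List[Module], situation: Any) -> str:
    """Arbitrate between parallel narrow modules by picking the most confident bid."""
    best = max(modules, key=lambda m: m.confidence(situation))
    return best.act(situation)

# Example: a driving module and a parking module competing for control.
driving = Module("driving",
                 confidence=lambda s: 0.9 if s["speed"] > 5 else 0.2,
                 act=lambda s: "steer and keep lane")
parking = Module("parking",
                 confidence=lambda s: 0.8 if s["speed"] <= 5 else 0.1,
                 act=lambda s: "search for a parking spot")

print(supervisor([driving, parking], {"speed": 3}))  # -> search for a parking spot
```

Adding a new capability then means adding a new module and teaching the supervisor when to defer to it.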
Sure you can add more modules. Except that then you’ve got a car-driving module, and a walking module, and a stacking-small-objects module, and a guitar-playing module, and that’s all fine until somebody needs to talk to it. Then you’ve got to write a Turing-test-passing conversation module, and (as it turns out) having a self-driving car really doesn’t make that any easier.
Do you realize that human intelligence evolved exactly that way? A self-swimming fish brain with lots of modules haphazardly attached.
Evolution and human engineers don’t work in the same ways. It also took evolution three million years.
True enough, but there is no evidence that general intelligence is anything more than a large collection of specialized modules.
I believe you, but intuitively the first objection that comes to my mind is that a car-driving AI doesn’t have the same type of “agent-ness” and introspection that an AGI would surely need. I’d love to read more about it.
Best case scenario, it’ll turn out like space travel: something that we “did already” but that wasn’t nearly as interesting as all those wild-eyed dreamers hoped.
I don’t see that happening in this context, though; with space travel, we “cheated” our way to a spectacular short-term goal by using politically motivated blank checks while ignoring longer-term economics. Competing venture capitalists are less likely to ignore long-term economics, and any “cheating” is likely to mean shortcuts with regard to safety, not sustainability.
“if the feasibility just isn’t there yet no matter the funding”

That’s a heck of a condition, and this condition failing seems like our best hope for survival, if the ‘spirit’ of the original hypothetical holds: that this work ends up really taking off, in practical systems.