What’s your reason for thinking weak AI leads to strong AI? Generally, weak AI seems to take the form of domain-specific creations, which provide only very weak general abstractions.
One example that people previously thought would lead to general AI was chess playing. And sure, designing chess-playing AI forced some interesting development in efficiently traversing large search spaces, but as far as I can tell it did so only in a very weak way, and hasn’t contributed meaningfully to anything resembling the efficiency of human-style chunking.
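To make that concrete, the signature “efficient traversal” trick that came out of chess programs is alpha-beta pruning. The sketch below is a toy illustration only, not something from this conversation; the game tree and leaf scores are made up. It shows what a narrow, domain-shaped abstraction like this looks like: very good at adversarial lookahead, not obviously related to anything like chunking.

```python
# Toy sketch (hypothetical example): alpha-beta pruning, the kind of
# "efficient traversal of large search spaces" that chess programs drove.
# The game tree and leaf scores below are made up for illustration.

def alphabeta(node, depth, alpha, beta, maximizing, children, value):
    """Minimax value of `node`, skipping branches that cannot change the decision."""
    kids = children(node)
    if depth == 0 or not kids:
        return value(node)
    if maximizing:
        best = float("-inf")
        for child in kids:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False, children, value))
            alpha = max(alpha, best)
            if alpha >= beta:  # opponent would never let play reach here; prune
                break
        return best
    best = float("inf")
    for child in kids:
        best = min(best, alphabeta(child, depth - 1, alpha, beta, True, children, value))
        beta = min(beta, best)
        if beta <= alpha:  # maximizer already has a better option elsewhere; prune
            break
    return best

# Hypothetical two-ply tree: the maximizer picks a branch, the minimizer picks a leaf.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

result = alphabeta("root", 2, float("-inf"), float("inf"), True,
                   lambda n: tree.get(n, []), lambda n: scores.get(n, 0))
print(result)  # 3 -- and b2 is never evaluated: after b1=2, branch b cannot beat branch a
```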
I doubt there is a sharp distinction between them, so I think that trying to make increasingly useful weak AIs will probably lead to strong AI.
Actually, let’s taboo weak and strong AI for a moment.
By weak AI I mean things like video game AI, self-driving cars, WolframAlpha, etc.
By strong AI I think I mean something that can create weak AIs to solve problems. Something that can do this likely includes a general inference engine. While a self-driving car can use its navigation programs to figure out lots of interesting routes from A to B, if you tell it to go from California to Japan it won’t start building a boat.
I suspect that if people continue trying to improve self-driving cars, the cars will get closer and closer to being able to build a boat (if building such a boat were necessary under the circumstances, which seems unlikely). For instance, it wouldn’t be far from what we have now for the car to check whether there is a ferry and drive to it. That might be improved over time into finding a boat supplier and buying a boat if there is no ferry. And if the boat supplier were also automated, and the two were in close communication, that wouldn’t be so different from your car being able to make a boat.