A pertinent question is what problem a government or business (not including a general AI startup) may wish to solve with a general AI that is not more easily solved by developing a narrow AI. ‘Easy’ here factors in the risk of failure, which will at least be perceived as very high for a general AI project. Governments and businesses may fund basic research into general AI as part of a strategy to exploit high-risk high-reward opportunities, but are unlikely to do it in-house.
One could also try to figure out some prerequisites for a general AI, and see what would lead to them coming into play. So for instance, I’m pretty sure that a general AI is going to have long-term memory. What AIs are going to get long-term memory? A general AI is going to be able to generalize its knowledge across domains, and that’s probably only going to work properly if it can infer causation. What AIs are going to need to do that?
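To make the causation point concrete: a system that only learns correlations will transfer badly across domains, because a correlation can be driven by a confounder that doesn't exist in the new domain. Here's a minimal toy sketch (my own illustration, not anything from the discussion above) in which x and y are strongly correlated in observational data, yet intervening on x does nothing to y:

```python
import random

random.seed(0)

# Toy structural model with a hidden confounder z:
#   z -> x and z -> y, but x has NO causal effect on y.
def sample(intervene_x=None):
    z = random.random()
    x = z + 0.1 * random.random() if intervene_x is None else intervene_x
    y = z + 0.1 * random.random()  # y depends only on z, never on x
    return x, y

# Observational data: x and y look strongly related (via z).
obs = [sample() for _ in range(10_000)]
high = [y for x, y in obs if x > 0.5]
low = [y for x, y in obs if x <= 0.5]
gap_obs = sum(high) / len(high) - sum(low) / len(low)

# Interventional data: forcing x leaves y unchanged.
do_hi = [sample(intervene_x=1.0)[1] for _ in range(10_000)]
do_lo = [sample(intervene_x=0.0)[1] for _ in range(10_000)]
gap_do = sum(do_hi) / len(do_hi) - sum(do_lo) / len(do_lo)

print(f"observational gap in y: {gap_obs:.2f}")  # clearly positive
print(f"interventional gap in y: {gap_do:.2f}")  # near zero
```

A purely correlational learner would bet on x mattering and be wrong the moment it acts, or moves to a domain where z behaves differently; an agent that can distinguish observing from intervening would not.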
How much of the world do you need to understand to make reliably good investments?
Do you want your investment computer to be smart enough to say “there’s a rather non-obvious huge bubble in the derivatives based on real estate”? Smart enough to convince you when you don’t want to believe it?