Here are some plausible ways we could be trapped at “sub-adult-human” AGI:
1. There is no such thing as “general intelligence”. For example, profoundly autistic humans have brains the same size as other people’s, but their ability to navigate the world we live in is limited by their weaker social skills. An AI with many super-human skills could still fail to influence our world in the same way.
2. Artificial general intelligence is possible, but extremely expensive. Perhaps the first AGI requires an entire power plant’s worth of electricity to run; biological systems are far more efficient than man-made ones. If Moore’s law “stops”, we may be trapped in a future where only sub-human AI is affordable enough to be practical. (A rough cost sketch follows this list.)
3. Legal barriers. Just as you are not legally allowed to carry a machine gun wherever you please, AI may be regulated such that human-level AI is only allowed under very controlled circumstances. Nuclear power is a classic example of an industry where innovation largely stalled because of regulation.
4. Status quo bias. Future humans may simply not care as much about building AGI as present humans do. Modern humans could undoubtedly build pyramids much taller than those in Egypt, but we don’t, because we aren’t all that interested in pyramid-building.
5. Catastrophe. Near-human AGI may trigger a catastrophe that prevents further progress. For example, the perception that “the first nation to build AGI will rule the world” may lead to an arms race that ends in a catastrophic world war.
6. Unknown unknowns. Predictions are hard, especially about the future.
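To put #2 in perspective, here is a rough back-of-envelope sketch. All figures are my own assumptions for illustration, not anything established above: a human brain draws roughly 20 W, a large power plant produces on the order of 1 GW, and industrial electricity runs around $0.05/kWh.

```python
# Rough cost comparison for a hypothetical power-plant-scale AGI (see #2).
# All figures below are assumed round numbers, not measured values.

BRAIN_WATTS = 20          # approximate power draw of a human brain
PLANT_WATTS = 1e9         # ~1 GW, a large power plant's output
PRICE_PER_KWH = 0.05      # assumed industrial electricity price, USD

efficiency_gap = PLANT_WATTS / BRAIN_WATTS          # how much less efficient than a brain
hourly_cost = (PLANT_WATTS / 1000) * PRICE_PER_KWH  # watts -> kilowatts -> $/hour
yearly_cost = hourly_cost * 24 * 365

print(f"Energy gap vs. a human brain: {efficiency_gap:.0e}x")   # ~5e+07x
print(f"Electricity alone: ${hourly_cost:,.0f}/hour, ${yearly_cost:,.0f}/year")
# -> roughly $50,000/hour, ~$438,000,000/year
```

Under even these generous assumptions, such a system would cost hundreds of millions of dollars per year in electricity alone, which is the sense in which only sub-human AI might be “affordable enough to be practical”.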
#5 is an interesting survival possibility...
#1 resonates with me somehow. Perhaps because I’ve witnessed a few people in real life (profoundly autistic, disturbed, or on drugs) speak somewhat like an informal spoken variant of GPT-3. Or is it the other way around?