By AI winter I meant AGI winter. If the current AGI projects (Ng/Baidu, Kurzweil/Google, the Human Brain Project and others) fail to deliver concepts for deep learning and fundamental understanding, financial support for AGI could be cut and funneled into less risky weak AI projects. Technology progresses, and weak AI capabilities rise to high superintelligence in their domains. But, like superheated water without a nucleus for boiling, nothing happens even as the temperature rises above the boiling point. One grain of salt dropped into the superheated water creates the first bubble, which triggers more bubbles and a steam explosion.
AGI winter and intelligence explosion
If we let an AGI winter happen, many diverse weak AIs might be developed, as depicted in the spiderweb chart. Being superintelligent only in their own domains, these AIs have nearly no other skills. In this situation only a tiny nucleation stimulus is needed to start the intelligence explosion that produces a highly superintelligent AI. This stimulus could come from a small project with no capability to engineer safeguarding measures.
AI has been so successful recently that enough financial support is available. We have to invest a significant amount in AGI and in means for controlling and safeguarding AGI development. If we allow an AGI winter to happen, we risk an uncontrollable intelligence explosion.
This seems like an interesting model, but it is complicated and not obvious, so I don't agree with:
We have to invest a significant amount in AGI and in means for controlling and safeguarding AGI development. If we allow an AGI winter to happen, we risk an uncontrollable intelligence explosion.
For instance, it could be that having any two AIs is much like having an AI with both of their skills, such that you can't really have weak AIs that carry out skills 1-5 without having a system which is close to the superintelligence you depict (see the sketch below). Or it could be that people reliably tend to build A+B if it is useful and they have A and B. There might also be other effects of AGI funding than via this channel. Also, perhaps it would be better to focus on investing less in narrow AI, which would give the same outcome on your model. Or perhaps it is good for AGI to jump quickly from one level to another, to avert arms races, for instance. And so on.
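To make the first objection concrete, here is a minimal toy sketch in Python. Everything in it is an illustrative assumption: the skill names, the numeric levels, and the composition rule (taking the better of the systems at each skill). It is not a claim about how real systems combine, only a rendering of the "A+B" argument against the spiderweb model.

```python
# Toy model: each AI is a vector of skill levels, as on a spiderweb chart.
# Narrow AIs are superintelligent (level 9) in one domain and weak (level 1)
# everywhere else. Composing systems is modelled, optimistically, as taking
# the best available level per skill.

skills = ["vision", "language", "planning", "robotics", "theorem proving"]

narrow_ais = [
    [9, 1, 1, 1, 1],  # superintelligent at vision only
    [1, 9, 1, 1, 1],  # superintelligent at language only
    [1, 1, 9, 1, 1],  # ... and so on for the remaining domains
    [1, 1, 1, 9, 1],
    [1, 1, 1, 1, 9],
]

def compose(*systems):
    """Combine systems by taking the best available level per skill."""
    return [max(levels) for levels in zip(*systems)]

combined = compose(*narrow_ais)
print(dict(zip(skills, combined)))
# -> {'vision': 9, 'language': 9, 'planning': 9, 'robotics': 9,
#     'theorem proving': 9}
```

Under this (strong) composability assumption, five narrow superintelligences already look like the broad superintelligence in the chart, which is exactly why the objection says you may not be able to have one without the other.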
Such an AGI would come up in a world full of those narrow AIs and the products of their use. Picturing the AGI taxes your imaginative resources, leaving little left for that contemporary tech, which it would ravage like a machine-gun-armed, armoured horseman set against Roman soldiers.