This seems like an interesting model, but it is complicated and not obvious, so I don't agree with the conclusion that:
We have to invest a significant amount into AGI and means for controlling and safeguarding AGI development. If we allow an AGI winter to happen we risk an uncontrollable intelligence explosion.
For instance, it could be that having any two AIs is much like having a single AI with both of their skills, such that you can't really have weak AIs that carry out skills 1-5 without having a system close to the superintelligence you depict. Or people might reliably tend to build A+B whenever it is useful and they already have A and B. There might also be other effects of AGI funding than via this channel. Also, perhaps it would be better to focus on investing less in narrow AI, which would give the same outcome on your model. Or perhaps it is good for AGI to jump quickly from one level to another, to avert arms races for instance. And so on.