Some pessimists expect linear growth in sophistication, with a few insights along the way, but not a sequence of insights from any single group. Safe AI designs are harder than unsafe ones, and hard problems are normally solved after the easy ones, and by different people. If FAI is a hard-rather-than-easy project, its results will appear only after the unsafe AGIs. However, this might not be the case if the trajectory of AI research changes.