If:
(1) There is a way to make an AI that is useful and provably not-unfriendly
(2) This requires a subset of the breakthroughs required for a true FAI
(3) It can be used to provide extra leverage towards building a FAI (e.g. by generating prestige and funds for hiring and training the best brains available. How? Start by solving protein folding or something.)
Then this safe & useful AI should certainly be a milestone on the way towards FAI.
Just barely possible, but any such system is also a recipe for destroying the universe if mixed in slightly different proportions, which on net makes the plan wrong (destroy-the-universe wrong).
I just don’t think that this assertion has been adequately backed up.