I’d guess 1%. The small minority of AI researchers working on FAI will have to find the right solutions to a set of extremely difficult problems on the first try, before the (much better funded!) majority of AI researchers solve the vastly easier problem of Unfriendly AGI.
“Friendliness” is a rag-bag of different things: benevolence, absence of malevolence, the ability to control a system whether it is benevolent or malevolent, and so on. So the question is somewhat ill-posed.
As far as control goes, every AI project involves an element of control, because if you can’t get the AI to do what you want, it is useless. So the idea that AI and FAI are disjoint is wrong.
1%? Shouldn’t your basic uncertainty over models and paradigms be great enough to increase that substantially?
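To spell that out (an illustrative calculation with made-up numbers, not anything stated above): by the law of total probability,

P(FAI first) = Σ_i P(paradigm_i) · P(FAI first | paradigm_i) ≥ P(paradigm_j) · P(FAI first | paradigm_j) for any single paradigm j, since every term in the sum is nonnegative.

So if you put even 10% credence on a paradigm under which the Friendliness problems turn out tractable, with P(FAI first | that paradigm) = 0.3, the overall estimate is bounded below by 0.1 × 0.3 = 3%, no matter how pessimistic you are under every other paradigm.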