What? What are your definitions of FAI and unfriendly AI?
Artificial general intelligence (AGI) = any intelligence with near-human capabilities of learning and innovation.
Friendly AI (FAI) = an AGI which understands human values.
Unfriendly AI = an AGI which is not FAI.
FAI will be harder to develop than unfriendly AI; the question is how much harder.
Reasons why FAI is harder to develop than AGI in general have been extensively discussed by Yudkowsky and others. These include:
Human values are hard to understand.
The growth mechanism of FAI must have low variability to ensure that the FAI remains friendly. But high-variability growth mechanisms (such as mutation and selection) could be the easiest path to AGI.
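The variability point above can be made concrete with a toy sketch (my own illustration, not part of the original comment): a minimal mutation-and-selection loop. The fitness function and all parameters are illustrative assumptions. The mechanism reliably increases fitness, but different random seeds drift toward structurally different "best" genomes, which is exactly the high-variability property that makes such mechanisms risky for preserving friendliness.

```python
# Illustrative sketch only: a mutation-and-selection loop showing that
# this growth mechanism optimizes fitness without constraining WHICH
# solution emerges. Fitness function and parameters are assumptions.
import random

def evolve(seed, generations=200, pop_size=20, genome_len=5):
    rng = random.Random(seed)
    # Each "genome" is a list of numbers; fitness rewards large-magnitude
    # values of either sign, so many different genomes score equally well.
    population = [[rng.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    fitness = lambda g: sum(x * x for x in g)
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Mutation: refill the population with noisy copies of survivors.
        children = [[x + rng.gauss(0, 0.1) for x in rng.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

# Different seeds yield different winning genomes: the process is a
# reliable optimizer but an unreliable specifier of the final design.
print(evolve(seed=1))
print(evolve(seed=2))
```

By contrast, a low-variability growth mechanism would have to guarantee not just that fitness improves, but that the same (friendly) design is preserved across every step of growth.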
Doesn’t this conflict with your other statement?
I see your point. I have modified my comment to