A lot of the technical issues are the same in both cases, and the solutions could be re-used. You need the AI to be capable of recursive self-improvement without compromising its goal system, to avoid the wireheading problem, and so on. Even many of the workable content-level solutions (such as a mechanism to extract morality from a set of human minds) would probably be the same.
Where the problems differ, it's mostly in that the society-level FAI case is harder: there are additional subproblems, like interpersonal disagreements, to deal with. So I strongly suspect that if you have a society-level FAI solution, you could very easily hack it into a one-specific-human FAI solution. But I could be wrong about that, and you're right that my original use of terminology was sloppy.