If we assume that super-intelligent AI is a thing, you have to engineer a global social system that's stable over millions of years and where no one makes ASI in that time.
Well, this requirement doesn't appear to be particularly stringent compared to the ability to suppress overpopulation and other dysgenic pressures that would be necessary for such a global social system. It would have to be totalitarian anyway (though not necessarily centralized).
It is also a useful question to ask whether there are alternative existential opportunities if super-intelligent AI doesn’t turn out to be a thing. What makes the FAI problem unique isn’t that it’s an existential threat—there are plenty of those to go around—but that it’s also an existential opportunity. The only one we know of thus far.