What happens after a FAI is built? There’s a lot of discussion on how to build one, and what traits it needs to have, but little on what happens afterward. How does the world/humanity transition from the current systems of government to a better one? Do we just assume that the FAI is capable of handling a peaceful and voluntary global transition, or are there some risks involved? How do you go about convincing the entirety of humanity that the AI that has been created is “safe,” and persuading them to put their trust in it?
Local thinking about FAI is predicated on the assumption that an AI is probably capable of (and will initiate) extremely rapid self-improvement (the local jargon is “FOOMing,” which doesn’t stand for anything as far as I know, it just sounds evocative), such that it rapidly becomes a significantly superhuman intelligence, and thereafter all such decisions can profitably be left up to the FAI itself.
Relatedly, local thinking about why FAI is important is largely predicated on the same assumption… if AIs will probably FOOM, then UFAI will probably irrecoverably destroy value on an unimaginable scale unless pre-empted by FAI, because intelligence differentials are powerful. If AIs don’t FOOM, this is not so much true… after all, the world today is filled with human-level Unfriendly intelligences, and we seem to manage; Unfriendly AI is only an existential threat if it’s significantly more intelligent than we are. (Well, assuming that things dumber than we are aren’t existential threats, which I’m not sure is justified, but never mind that for now.)
Of course, if we instead posit either that we are incapable of producing a human-level artificial intelligence (and therefore that any intelligence we produce, being not as smart as we are, is also incapable of producing one, which of course depends on an implausibly linear view of intelligence, but never mind that for now), or that diminishing returns set in quickly enough that the most we get is human-level or slightly but not significantly superhuman AIs, then it makes sense to ask how those AIs (whether FAI or UFAI) integrate with the rest of us.
Robin Hanson (who thinks about this stuff and doesn’t find the FOOM scenario likely) has written a fair bit about that scenario.