We humans seem at best just barely smart enough to build a superintelligent UFAI. Wouldn’t it be surprising if the intelligence thresholds for building UFAI and FAI turned out to be the same?
I think people who would contest the direction of this post (probably Eli and Nesov) would point out that if humanity is over the intelligence threshold for UFAI, economic-political-psychological forces will drive it to be built within a few decades. Anything that does not address this directly will destroy the future. Building smarter humans is likely not fast enough (plus who says smarter humans will not be driven by the same forces to build UFAI?).
The problem is that building FAI is also likely not fast enough, given that UFAI looks significantly easier than FAI. And there are additional downsides unique to attempting to build FAI: since many humans are naturally competitive, it provides additional psychological motivation for others to build AGI; unless the would-be FAI builders have near-perfect secrecy and security, they will leak ideas and code to AGI builders not particularly concerned with Friendliness; the FAI builders may themselves accidentally build UFAI; and it’s hard to do anti-AI PR/politics (to delay UFAI) while you’re trying to build an AI yourself.
ETA: Also, the difficulty of building smarter humans seems logically independent of the difficulty of building UFAI, whereas the difficulty of building FAI is surely at least as great as the difficulty of building UFAI. So building smarter humans seems more likely to be fast enough.
plus who says smarter humans will not be driven by the same forces to build UFAI?
Smarter humans will see the difficulty gap between FAI and UFAI as smaller, so they’ll be less motivated to “save time and effort” by not taking safety/Friendliness seriously. The danger of UFAI will also be more obvious to them.