It is also worth noting that expertise tends to be quite narrow: a person can be genuinely excellent in one area and clueless in another.
What are the chances that the first AGI created suffers a similar weakness, allowing us to defeat it by exploiting it? I predict that if we experience one obvious, high-profile, and terrifying near-miss with a potentially x-class AGI, governance of compute becomes trivial after that, and we'll be safe for a while.
The first AGI? Very high. The first superintelligence? Not so much.