Could that just shift the problem a bit? If we get a class of really smart people, they can subjugate everyone else pretty easily too—perhaps even better than some AGI, since they start with a really good understanding of human nature, cultures, and failings, and of how to exploit them for their own purposes. Or they could simply be better suited to taking advantage of, and surviving alongside, a more dangerous AI on the loose. We end up in some hybrid world where humanity is not extinct but most people’s lives are pretty poor.
I suppose one might say that the speed and magnitude of the advances here might be such that we get to corrigible AI before we get incorrigible super humans.
I’m curious about your thoughts.
Quick caveat: I’m not trying to say all futures are bleak and no efforts lead where we want. I’m actually pretty positive about our future, even with AI (perhaps naively). We clearly already live in a world where the most intelligent could be said to “rule,” but the rest of us average Joes are not slaves or serfs everywhere. Where problems exist, it’s more due to cultural and legal failings than outright subjugation by the brighter bulbs. But going back to the darker side here, the ones that tend to successfully exploit, game, or ignore the rules are the smarter ones in the room.
If governments subsidize embryo selection, we should get a general uplift of everyone’s IQ (or at least everyone who decides to participate), so the resulting social dynamics shouldn’t be too different from today’s. Repeat that for a few generations, then build AGI (or debate and decide what else to do next). That’s the best scenario I can think of (aside from the “we luck out” ones).