I don’t see how we could have a “the” AGI. Unlike humans, AI doesn’t need to grow copies; as soon as we have one, we have legion. I don’t think we (humanity as a collective) could manage even one AI, let alone limitless numbers, right? I mean this purely logistically, not even in a “could we control it” way. We have a hard time agreeing on things, which the “value” bit alludes to (forever a great concept to think about), so I don’t have much hope for some kind of “all the governments in the world coming together to manage AI” collective, even if some terrible occurrence made it obvious we needed one (but I digress).
I would argue that alignment per se may well be impossible, which would keep it from ever being a given, as it were.