How does this help anything or change anything? That’s just the world we’re in now, where we have GPT-3 instead of AGI. Eventually the systems get more powerful and dangerous than GPT-3 and then the world ends. You’re just describing the way things already are.
I’m imagining that systems get much stronger without getting much more “aimable”, if that makes sense; they can solve problems, but when you ask them to solve something they keep solving the wrong problem, obviously enough that actually using them is pointless. Instead of getting the equivalent of paperclip maximizers, you get a random mind that “wants” things so incoherent that it doesn’t do much of anything at all, and this forces people to give up and decide that investing further in general AI capacity without first investing in AI control/“alignment” is useless.
Maybe that’s just my confusion or stupidity talking, though. And I did call it a “miracle” that making a seemingly useful AGI ends up bottlenecked on alignment research rather than raw capacity research, because the default unaligned AGI is an incoherent mess that does random, ineffective things when operating “out of sample”, rather than a powerful optimization process that destroys the world.