Ah, the good old days post-GPT-2 when “GPT-3” was the future example :P
I think back then I still thoroughly underestimated how useful natural-language “simulation” of human reasoning would be. I agree with janus that we have plenty of information telling us that yes, you can ride this same training procedure to very general problem solving (though I think more modalities, active learning, etc. will be incorporated before anyone really pushes brute-force “GPT-N go brrr” to the extreme).
This is somewhat of a concern for alignment. I more or less stand by that comment you linked and its children; in particular, I said
The search thing is a little subtle. It’s not that search or optimization is automatically dangerous; the danger is that search can turn up adversarial examples / surprising solutions.
I mentioned how I think the particular kind of idiot-proofness that natural language processing might have is “won’t tell an idiot a plan to blow up the world if they ask for something else.” Well, as soon as the AI is doing a deep search through outcomes to figure out how to make Alzheimer’s go away, you lose a lot of that protection, and I think the AI is back in the category of Oracles that might tell an idiot a plan to blow up the world.
Simulating a reasoner who quickly finds a cure for Alzheimer’s is not by default safe (even though simulating a human writing in their diary is safe). Optimization processes that quickly find cures for Alzheimer’s are not humans, they must be doing some inhuman reasoning, and they’re capable of having lots of clever ideas with tight coupling to the real world.
I want to have confidence in the alignment properties of any powerful optimizers we unleash, and I imagine we can gain that confidence by knowing how they’re constructed, and trying them out in toy problems while inspecting their inner workings, and having them ask humans for feedback about how they should weigh moral options, etc. These are all things it’s hard to do for emergent simulands inside predictive simulators. I’m not saying it’s impossible for things to go well, I’m about evenly split on how much I think this is actually harder, versus how much I think this is just a new paradigm for thinking about alignment that doesn’t have much work in it yet.