Yes? Not all of it, but definitely much of it is. It’s unfair to complain about GPT-3’s lack of ability to simulate you to get out of the box, etc., since it’s way too stupid for that, and the whole point of AI safety is to prepare for when AI systems are smart. There’s now a whole chunk of the literature on “Prosaic AI safety,” which is designed to deal with exactly the sort of system GPT-3 is. And even the more abstract agent foundations stuff is still relevant; for example, the “Universal prior is malign” stuff shows that in the limit GPT-N would likely be catastrophic, and that insight was gleaned from thinking a lot about Solomonoff induction, which is a very agent-foundations-y thing to be doing.
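(For concreteness, the object that argument is about is, roughly, Solomonoff’s universal prior: a mixture over all programs p for a universal prefix machine U, with each program weighted by its length,

$$M(x) \;=\; \sum_{p \,:\, U(p)\ \text{outputs a string beginning with}\ x} 2^{-|p|}.$$

The “malign” worry is about which short programs end up dominating that mixture once you condition on a lot of data. This is only a sketch of the setup, not the argument itself.)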
for example, the “Universal prior is malign” stuff shows that in the limit GPT-N would likely be catastrophic,
If you have a chance, I’d be interested in your line of thought here.
My initial model of GPT-3, and probably the model of the OP, is basically: GPT-3 is good at producing text that it would have been unsurprising to find on the internet. If we keep training larger and larger models on larger and larger datasets, they will produce text that would be less and less surprising to find on the internet. Insofar as there are safety concerns, these mostly have to do with misuse, or with people using GPT-N as a starting point for developing systems with more dangerous behaviors.
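(To make “unsurprising” a bit more precise, here is a sketch of the standard language-modeling objective; I’m not claiming anything about the exact training setup. The model is trained to minimize the average surprisal, i.e., the negative log-probability, it assigns to each next token of internet text:

$$\mathcal{L}(\theta) \;=\; -\,\mathbb{E}_{x \sim \mathcal{D}}\Big[\sum_t \log p_\theta(x_t \mid x_{<t})\Big].$$

Scaling up the model and the dataset while holding this objective fixed just pushes the generated text toward being less surprising relative to the training distribution.)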
I’m aware that people who are more worried do have arguments in mind, related to stuff like inner optimizers or the characteristics of the universal prior, but I don’t feel I understand them well—and am, perhaps unfairly, beginning from a place of skepticism.
It’s unfair to complain about GPT-3’s lack of ability to simulate you to get out of the box, etc., since it’s way too stupid for that, and the whole point of AI safety is to prepare for when AI systems are smart.
I think that the OP’s question is sort of about whether this way of speaking/thinking about GPT-3 makes sense in the first place.
Intentionally silly example: Suppose that people were expressing concern about the safety of graphing calculators, saying things like: “OK, the graphing calculator that you own is safe. But that’s just because it’s too stupid to recognize that it has an incentive to murder you, in order to achieve its goal of multiplying numbers together. The stupidity of your graphing calculator is the only thing keeping you alive. If we keep improving our graphing calculators, without figuring out how to better align their goals, then you will likely die at the hands of graphing-calculator-N.”
Obviously, something would be off about this line of thought, although it’s a little hard to articulate exactly what. In some way, it seems, the speaker’s use of certain concepts (like “goals” and “stupidity”) is probably to blame. I think that it’s possible that there is an analogous problem, although certainly a less obvious one, with some of the safety discussion around GPT-3.
I think it’s a reasonable and well-articulated worry you raise.
My response is that for the graphing calculator, we know enough about the structure of the program and the way in which it will be enhanced that we can be pretty sure it will be fine. In particular, we know it’s not goal-directed or even building world-models in any significant way; it’s just performing specific calculations directly programmed by the software engineers.
By contrast, with GPT-3 all we know is that it’s a neural net that was positively reinforced to the extent that it correctly predicted words from the internet during training, and negatively reinforced to the extent that it didn’t. So it’s entirely possible that it does, or eventually will, have a world-model and/or goal-directed behavior. Neither is guaranteed, but there are arguments to be made that “eventually” it would have both, i.e., if we keep making it bigger, giving it more internet text, and training it for longer. I’m rather uncertain about the arguments that it would have goal-directed behavior, but I’m fairly confident in the argument that eventually it would have a really good model of the world.

The next question is then how this model is chosen. There are infinitely many world-models that are equally good at predicting any given dataset but that diverge in important ways when it comes to predicting whatever comes next. It comes down to what “implicit prior” is used. And if the implicit prior is anything like the universal prior, then doom. Now, it probably isn’t the universal prior. But maybe the same worries apply.
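Here’s a toy sketch of that last point, purely illustrative and nothing to do with GPT-3’s actual internals: two hand-written “world-models” that fit the same training data perfectly but diverge off-distribution, where the prediction you get depends entirely on the prior you put over them.

```python
# Toy illustration: many models fit the training data equally well,
# so which one dominates predictions is decided by the prior over models.

train_data = [(0, 0), (1, 1), (2, 2), (3, 3)]  # observed (context, next value) pairs

def model_a(x):
    return x                      # "the pattern just continues"

def model_b(x):
    return x if x < 4 else 0      # agrees on everything seen so far, then diverges

models = [model_a, model_b]
assert all(m(x) == y for m in models for (x, y) in train_data)  # both fit perfectly

def predict(x, prior):
    # Bayesian-mixture-style prediction: weight each data-consistent model by its prior.
    return sum(w * m(x) for w, m in zip(prior, models))

print(predict(10, prior=[0.99, 0.01]))  # 9.9 -- a prior favouring model_a
print(predict(10, prior=[0.01, 0.99]))  # 0.1 -- a different prior, very different answer
```

The universal prior is one extreme choice of that weighting; the open question is what the “implicit prior” of training a big neural net actually looks like.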