I think this article is very interesting and some of its points are well-argued, but (at the risk of my non-existent karma here) I feel you miss the point and are arguing against positions that are essentially non-existent or irrelevant.
First, while surely some not-very-articulate folks argue that AGI will lead to doom, that isn’t an argument that is seriously made (at least, a serious argument to that effect is not that short and sweet). The problem isn’t artificial general intelligence in and of itself. The problem is superintelligence, however it might be achieved.
A human-level AGI is just a smart friend of mine who happens to run on silicon and electrons, not nicotine, coffee, and Hot Pockets. But a superintelligent AGI is not capable of being my friend for long: it will soon surpass anything I can meaningfully relate to.
To put this into context: what has folks concerned right now is that LLMs were, even to people experienced with them, a silly "AI" tool useful for creative writing, generating disinformation, and little else. (Disinformation is a risk, of course, but not generally an existential one.) Just a lark.
GPT-2 was interesting; GPT-3 was useful for certain categorization tasks and other linguistic tricks; GPT-3.5 was somewhat more useful but still a joke, not trustworthy… AND THEN… Umm… whoa… how is GPT-4 NOT a self-improving AI that blows past human-level intelligence?
(The question is only partly rhetorical.)
This might, in fact, not be an accident on OpenAI's part but a shrewd move that furthers an objective of educating "normal humans" about AI risk. If so, bravo. GPT-4 in the form of ChatGPT Plus is insanely useful and likely the best 20 bucks/mo I've ever spent.
Step functions are hard to understand. If you haven't read Bostrom's "Superintelligence" (or haven't in a while), please go (re)read it. The rebuttal to your post is all in there, covered more deeply than anyone here could manage or would bother to.
Aside: As others have noted here, if you could push a button that would cause your notion of humanity's "coherent extrapolated volition" to manifest, you'd do so at the drop of a hat. I note that there are others (me, for example) who have wildly different notions of the CEV and would also push the button for their own notion at the drop of a hat, but mine has nothing to do with the long-term survival of fleshy people.
(To wit: what is the "meaning" of the universe and of life itself? What is the purpose? The purpose [which Bostrom does not come right out and say, more's the pity] is that there be but one being to apprehend the universe. He characterizes this purpose as the "cosmic endowment" and assigns to that endowment a meaning that corresponds to the number of sentient minds of fleshy form in the universe. But I feel differently and will gladly push the button if it assures the survival of a single entity that can apprehend the universe. This is the existential threat that superintelligence poses. It has nothing to do with paths between A and B in your diagrams, and the threat is already manifest.)