This is a really complicated issue because different priors and premises can lead you to extremely different conclusions.
For example, I see the following as a typical view of AI among the general public (the average person is unlikely to reason this deeply, but could arrive at these arguments if pressed to debate the topic):
Premises: “Judging by how nature produced intelligence, and by the incremental progress we are seeing in LLMs, artificial intelligence is likely to be achieved by packing more connections into a digital system. This will allow the AI to generate associations between ideas and find creative solutions to problems more easily, think faster, have greater memory, and make fewer errors.

At some point this will produce an intelligence superior to ours, but not one that is fundamentally different. It will still consist of an entangled network of connections, more powerful and effective than ever, yet incapable of ‘jumping out of the system’. These same connections will, in a sense, limit it and prevent it from turning the universe into a paperclip factory when asked to produce more paperclips. If bigger brains had not made childbirth dangerous or consumed more energy, nature could have produced a greater, more complex intelligence without any risk of it destroying the Earth.

Maybe this is not the only way to build an artificial superintelligence, but it seems feasible and the most likely path in light of developments to date. Key issues will need to be settled regarding AI consciousness, its training data, and the social changes it will bring, but the AI will not be existentially threatening. In fact, greater existential risks would come from having to specify the AI’s functions and rules explicitly, as in GOFAI, where you would be more likely to stumble into the control problem and the like. In any case, GOFAI would take far too long to develop to be concerning right now.”
Conclusion: “Pausing the development of AI would make sense in order to solve the above problems, but not at the risk of creating major power conflicts or postponing the benefits of AI.”
I do not endorse this view of AI (although I assign a non-negligible probability to superintelligence first arriving through this gradual, connectionist, and existentially harmless increase in capabilities), but unless its main cruxes are clarified and disputed, we may be unable to bring people to different conclusions. So while the Overton window does need to be widened for existential concerns about AI to have any chance of influencing policy, doing so may require a greater effort that involves clarifying the core arguments and spreading ideas that, for example, help people overcome the mind projection fallacy or understand why artificial superintelligence would be qualitatively different from human intelligence.