Now that the risks of AI are getting mainstream traction, we can expect the people who want to rush forward with AI research to increase their efforts to influence public opinion. In particular, most people will come to rely heavily on large language models (LLMs) to get information, much as they rely heavily on search engines today, and the most popular LLMs will probably be tuned to downplay the risks of AI research (particularly the argument that AI research is so dangerous that it should be halted for a few decades). It is not too early to think about how to counter that.