First, you have to keep in mind that when people in industry and policymaking talk about “AI”, they usually have mostly non-deep-learning techniques, or vision-focused deep learning, in mind, simply because they mostly don’t know the academic ML field but have heard that “AI” was becoming important in industry. So this sentence is little evidence that Russia (or any other country) is trying to build AGI, and I’m at ~60% that Putin wasn’t thinking about AGI when he said that.
If anyone who could play any role at all in developing AGI (or uncontrollable AI as I prefer to call it) isn’t trying to develop it by now, I doubt very much that any amount of public communication will change that.
I think that you’re deeply wrong about this. Policymakers and people in industry, at least until ChatGPT, had no idea what was going on (e.g. at the AI World Summit two months ago, very few people even knew about GPT-3). SOTA large language models are not really properly deployed yet, so nobody cared about them or even knew about them (until ChatGPT, at least). The level of investment right now in top training runs probably doesn’t go beyond $200M. The GDP of the US is about $20 trillion, and likewise for China. Even a country like France could unilaterally put $50 billion into AGI development and accelerate timelines quite a lot within a couple of years.
Even post-ChatGPT, people are very bad at projecting what it means for the coming years, and they still hold the prior that human intelligence is very special and can’t be beaten, which prevents them from realizing the full power of this technology.
I really strongly encourage you to go talk to actual people from industry and policy to get a sense of their knowledge on the topic. And I would strongly recommend not publishing your book as long as you haven’t done that. I also hope that a lot of people who have thought about these issues have proofread your book, because it’s the kind of thing that could substantially increase P(doom).
I think that to make your point, it would be easier to defend the claim that “even if more governments got involved, that wouldn’t change much”. I don’t think that’s right, because if you gave some labs $10B more, they’d likely move way faster. But at least that claim is less clear-cut.
a common, clear understanding of the dangers
I agree that it would be something good to have. But the question is: is it even possible to have such a thing?
I think that within the scientific community, it’s roughly possible (but then your book/outreach medium must be highly targeted towards that community). Within the general public, I think that it’s ~impossible. Climate change, a problem which is much easier to understand and explain, is already way too complex for the general public to have a good idea of what the risks are and which solutions to those risks are promising (e.g. many people’s top priorities are to eat organic food, recycle, and reduce plastic consumption).
I agree that communicating with the scientific community is good, which is why I said that you should avoid publicizing it only among “the general public”. If you really want to publish a book, I’d recommend targeting the scientific community, which is not at all the same audience as the general public.
“On the other hand, if most people think that strong AI poses a significant risk to their future and that of their kids, this might change how AI capabilities researchers are seen, and how they see themselves”
I agree with this theory of change, and I think it points a lot more towards “communicate within the ML community” than “communicate towards the general public”. Publishing great AI capabilities work is mostly cool for other AI researchers, not so much for the general public. People in San Francisco (where most of the AGI labs are) also don’t care much about the general public and whatever it thinks; the subculture there, and what is considered “cool” within it, is really different from what the general public finds cool. As a consequence, I think they mostly care about what their peers think of them. So if you want to change the incentives, I’d recommend focusing your efforts on the scientific and tech communities.
Policymakers and people in industry, at least until ChatGPT, had no idea what was going on (e.g. at the AI World Summit two months ago, very few people even knew about GPT-3). SOTA large language models are not really properly deployed yet, so nobody cared about them or even knew about them (until ChatGPT, at least).
As you point out yourself, what makes people interested in developing AGI is progress in AI, not the public discussion of potential dangers. That “nobody cared about” LLMs is certainly not true; I’m pretty sure the relevant people watched them closely. That many people aren’t concerned about AGI, or doubt its feasibility, only means that THOSE people will not pursue it, and any public discussion will probably not change their minds. There are others who think very differently, like the people at OpenAI, DeepMind, and Google, and (I suspect) a lot of others who communicate less openly about what they do.
I agree that [a common understanding of the dangers] would be something good to have. But the question is: is it even possible to have such a thing?
I think that within the scientific community, it’s roughly possible (but then your book/outreach medium must be highly targeted towards that community). Within the general public, I think that it’s ~impossible.
I don’t think you can easily separate the scientific community from the general public. Even scientific papers are read by journalists, who often write about them in simplified or distorted ways. There are already many alarming posts and articles out there, as well as books like Stuart Russell’s “Human Compatible” (which I think is very good and helpful), so it is far too late to keep a lid on the possibility of AGI and its profound impacts (it was probably already too late when Arthur C. Clarke wrote “2001: A Space Odyssey”). Not talking about the dangers of uncontrollable AI for fear that this might lead certain actors to invest even more heavily in the field is, in my view, both naive and counterproductive.
And I would strongly recommend not publishing your book as long as you haven’t done that.
I will definitely publish it, but I doubt very much that it will have a large impact. There are many other writers out there with a much larger audience who write similar books.
I also hope that a lot of people who have thought about these issues have proofread your book, because it’s the kind of thing that could substantially increase P(doom).
I’m currently in the process of translating it into English so I can do just that. I’ll send you a link as soon as I’m finished. I’ll also invite everyone else in the AI safety community (I’m probably going to post an invite on LessWrong).
Concerning the Putin quote, I don’t think that Russia is at the forefront of development, but China certainly is. Xi has said similar things in public, and I doubt very much that we know how much they currently spend on training their AIs. The quotes are not that relevant, though; I just mentioned them to make the point that there is already a lot of discussion about the enormous impact AI will have on our future. I really can’t see how discussing the risks should be damaging, while discussing the great potential of AGI for humanity should not.
Thanks for your comment!