I commented on the post Ilya Sutskever and Jan Leike resign from OpenAI [updated] and received quite a lot of disagreement (-18). Since then, Aschenbrenner has published his report and been vocal in his belief that it is imperative for the security of the US (and the rest of the Western world...) that the US beat China in the AGI race, i.e., that AI tech is ultimately about military capability.
So, do many of you still disagree with my old comment? If so, I am curious to know why you believe that what I wrote is so far-fetched.
The old comment:
"Without resorting to exotic conspiracy theories, is it that unlikely to assume that Altman et al. are under tremendous pressure from the military and intelligence agencies to produce results so as not to let China or anyone else win the race for AGI? I do not for a second believe that Altman et al. are reckless idiots who do not understand what kind of fire they might be playing with, or that they would risk wiping out humanity just to beat Google on search. There must be bigger forces at play here, because that is the only thing that makes sense when reading Leike's comment and observing OpenAI's behavior."
I don't think the intelligence and military communities are likely to be much more reckless than Altman and co.; what seems more probable is that their interests and attitudes genuinely align.
Of course they are not idiots, but I am talking about the pressure to produce results fast without doomers and skeptics holding them back. A one-, two-, or three-year delay for one party could mean that they lose.
If it had been publicly known that the Los Alamos team was building a new bomb capable of destroying cities, and that they were not sure whether the first detonation might trigger an uncontrollable chain reaction destroying the Earth, don't you think there would have been considerable debate and a long delay in the Manhattan Project?
If the creation of AGI is one of the biggest events on Earth since the advent of life, and those who get it first can (will) become its all-powerful masters, why would that not entice people to take bigger risks than they otherwise would?
Do you think Sam Altman is seen as a reckless idiot by anyone aside from the pro-pause people in LessWrong circles?
The current median p(doom) among AI scientists seems to be 5-10%. How can it NOT be reckless to pursue, without extreme caution, something believed by the people with the most knowledge in the field to be close to a round of Russian roulette for humankind?
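For scale, a back-of-the-envelope sketch (the 5-10% figure is the survey range quoted above; the roulette odds are the standard one loaded chamber out of six):

```python
# Compare the quoted 5-10% p(doom) range with one round of
# Russian roulette (one loaded chamber out of six).
roulette = 1 / 6                      # ~16.7% chance per round
p_doom_low, p_doom_high = 0.05, 0.10  # survey range quoted above

print(f"Russian roulette: {roulette:.1%}")
print(f"Quoted p(doom): {p_doom_low:.0%} to {p_doom_high:.0%}")

# Even the high end of the range is roughly 0.6 of a roulette round;
# the low end is roughly 0.3 of one.
print(f"High end / roulette: {p_doom_high / roulette:.2f}")
print(f"Low end / roulette:  {p_doom_low / roulette:.2f}")
```

So "close to a round of Russian roulette" is hyperbole by at most a factor of about three, not orders of magnitude.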
Imagine for a second that I am a world-leading scientist dabbling at home with viruses that could potentially give people eternal life and health, but that I publicly state: "based on my current knowledge and expertise, there is maybe a 10% risk that I accidentally wipe out all humans in the process, because I have no real idea how to control the virus." Would you then:
A) Call me a reckless idiot, send a SWAT team to put me behind bars, and destroy my lab and any other labs that might be dabbling with the same biotech.
B) Say "let the boy play"?
It doesn't follow that he is seen as reckless even by those giving the 5-10% answer on the human-extinction question; and being seen as reckless is distinct from actually being reckless.
“Anyone thinks they’re a reckless idiot” is far too easy a bar to reach for any public figure.
I do not know of major anti-Altman currents in my country, but surveys consistently show a majority of people worried about AI risk, and a normal distribution of opinion extremity on the subject guarantees there will be many who do consider Sam Altman a reckless idiot (for good or bad reasons; I expect a majority of them to ascribe to Sam Altman any negative trait that comes to their attention, because it is just that easy for a large portion of the population to hold a narrow, hateful opinion on a subject).
For the record, I do not mean to single out Altman. I am talking in general about leading figures in the AI space (i.e., Altman et al.), for whom Altman has become a convenient proxy since he is a very public figure.