I think there’s some truth to this framing, but I’m not sure that people’s views cluster as neatly as this. In particular, I think there is a ‘how dangerous is existential risk’ axis and a ‘how much should we worry about AI and power’ axis. I think you rightly identify the ‘booster’ cluster (x-risk fake, AI+power nothing to worry about) and the ‘realist’ cluster (x-risk sci-fi, AI+power very concerning), but I think you are missing quite a lot of diversity in people’s positions along other axes, which makes this arguably even more confusing for people. For example, I would characterise Bengio as being fairly concerned about both x-risk and AI+power, whereas Yudkowsky is extremely concerned about x-risk and fairly relaxed about AI+power.
I also think it’s misleading to group even ‘doomers’ as one cluster, because there’s a lot of diversity in the policy asks of people who think x-risk is a real concern, from ‘more research needed’ to ‘shut it all down’. One very important group you are missing are people who are simultaneously quite (publicly) concerned about x-risk but also quite enthusiastic about pursuing AI development and deployment. This group is important because it includes Sam Altman, Dario Amodei and Demis Hassabis (the leadership of the big AI labs), as well as quite a lot of people who work on developing AI or on AI safety. You might summarise this position as ‘AI is risky, but if we get it right it will save us all’. As they are often working at big tech, I think these people are mostly fairly unworried or neutral about AI+power. This group is obviously important because they work directly on the technology, but also because this gives them a loud voice in policy and the public sphere.
You might think of this as a ‘how hard is mitigating x-risk’ axis. This is another key source of disagreement: going from public statements alone, I think (say) Sam Altman and Eliezer Yudkowsky agree on the ‘how dangerous’ axis and are both fairly relaxed Silicon Valley libertarians on the ‘AI+power’ axis, and mainly disagree on how difficult it is to solve x-risk. Obviously people’s disagreements on this question have a big impact on their desired policy!
I am quite sure that in a world where friendly tool AIs were provably easy to build, everyone was gonna build them instead of something else, and the idea even made sense (basically a world where we know we don’t need to be concerned about x-risk), Yudkowsky would be far less “relaxed” about AI+power. In absolute terms maybe he’s just as concerned as everyone else about AI+power, but that concern is swamped by an even larger concern.
Maybe I shouldn’t have used EY as an example; I don’t have any special insight into how he thinks about AI and power imbalances. Generally I get the vibe from his public statements that he’s pretty libertarian and thinks the pros outweigh the cons for most technology he doesn’t consider x-risky. I’m moderately confident that he’s more relaxed about, say, misinformation or big tech platforms’ dominance than (say) Melanie Mitchell, but maybe I’m wrong about that.
Thanks for that feedback. Perhaps this is another example of the tradeoffs in the “how many clusters are there in this group?” decision. I’m kind of thinking of this as a way to explain, e.g., to smart friends and family members, a basic idea of what is going on. For that purpose I tend, I guess, to lean in favor of fewer rather than more groups, but of course there is always a danger there of oversimplifying.
I think I may also need to do a better job distinguishing between describing positions vs. describing people. Most of the people thinking and writing about this have complicated, evolving views on lots of topics, and perhaps many don’t fit neatly, as you say. Since the Munk Debate, I’ve been trying to learn more about, e.g., Melanie Mitchell’s views, and in at least one interview I heard, she acknowledged that existential risk was a possibility; she just thought it was a lower priority than other issues.
I need to think more about the “existential risk is a real problem but we are very confident that we can solve it on our current path” position typified by Sam Altman and (maybe?) the folks at Anthropic. Thanks for raising that.
As you note, this view contrasts importantly with both (1) the boosters and (2) the doomers.
My read is that the booster argument put forth by Marc Andreessen or Yann LeCun is that “existential risk” concerns are like worrying about “what happens if aliens invade our future colony on Mars?”—as opposed to the view that “this is going to be like airplane development—yes there are risks, but we are going to handle them!”
I think you’ve already explained very well the difference between the Sam Altman view and the Doomer view. Maybe this needs to be a 2 by 2 matrix? OTOH, perhaps, in the oversimplified framework, there are two “booster” positions on why we shouldn’t be inordinately worried about existential risk: (1) it’s just not a likely possibility (Andreessen, LeCun); (2) “yes it’s a problem, but we are going to solve it, and so we don’t need to, e.g., shut down AI development” (Altman).
Thinking about another debate question, I wonder about this one:
“We should pour vastly more money and resources into fixing [eta: solving] the alignment problem”
I think(??) that Altman and Yudkowsky would both argue YES, and that Andreessen and LeCun would (I think?) argue NO.
Any post along the lines of yours needs a ‘political compass’ diagram lol.
I mean, it’s hard to say what Altman would think in your hypothetical debate: assuming he has reasonable freedom of action at OpenAI, his revealed preference seems to be to devote ≤ 20% of the resources available to his org to ‘the alignment problem’. If he wanted to assign more resources to ‘solving alignment’, he could probably do so. I think Altman thinks he’s basically doing the right thing in terms of risk levels. Maybe that’s a naive analysis, but I think it’s probably reasonable to take him more or less at face value.
I also think that it’s worth saying that easily the most confusing argument for the general public is exactly the Anthropic/OpenAI argument that ‘AI is really risky but also we should build it really fast’.
I think you can steelman this argument more than I’ve done here, and many smart people do, but there’s no denying it sounds pretty weird, and I think it’s why many people struggle to take it at face value when people like Altman talk about x-risk—it just sounds really insane!
In contrast, while people often think it’s really difficult and technical, I think Yudkowsky’s basic argument (building stuff smarter than you seems dangerous) is pretty easy for normal people to get, and many people agree with the general ‘big tech bad’ takes that the ‘realists’ like to make.
I think a lot of boosters who are skeptical of AI risk basically think ‘AI risk is a load of horseshit’ for various, not always very consistent, reasons. It’s hard to overstate how much ‘don’t anthropomorphise’ and ‘thinking about AGI is distracting silliness by people who just want to sit around and talk all day’ are frequently baked deep into the souls of ML veterans like LeCun. But I think people who would argue no to your proposed alignment debate would, for example, probably strongly disagree that ‘the alignment problem’ is a coherent thing to be solved.