Any post along the lines of yours needs a ‘political compass’ diagram lol.
I mean it’s hard to say what Altman would think in your hypothetical debate: assuming he has reasonable freedom of action at OpenAI, his revealed preference seems to be to devote ≤ 20% of the resources available to his org to ‘the alignment problem’. If he wanted to assign more resources to ‘solving alignment’ he could probably do so. I think Altman thinks he’s basically doing the right thing in terms of risk levels. Maybe that’s a naive analysis, but I think it’s probably reasonable to take him more or less at face value.
I also think that it’s worth saying that easily the most confusing argument for the general public is exactly the Anthropic/OpenAI argument that ‘AI is really risky but also we should build it really fast’.
I think you can steelman this argument more than I’ve done here, and many smart people do, but there’s no denying it sounds pretty weird, and I think it’s why many people struggle to take it at face value when people like Altman talk about x-risk—it just sounds really insane!
In contrast, while people often think it’s really difficult and technical, I think Yudkowsky’s basic argument (building stuff smarter than you seems dangerous) is pretty easy for normal people to get, and many people agree with the general ‘big tech bad’ takes that the ‘realists’ like to make.
I think a lot of boosters who are skeptical of AI risk basically think ‘AI risk is a load of horseshit’ for various not always very consistent reasons. It’s hard to overstate how much ‘don’t anthropomorphise’ and ‘thinking about AGI is distracting silliness by people who just want to sit around and talk all day’ are baked deep into the souls of ML veterans like LeCun. But I think people who would argue no to your proposed alignment debate would, for example, probably strongly disagree that ‘the alignment problem’ is even a coherent thing to be solved.