@L Rudolf L can talk on his own, but for me, a crux is probably that I don’t expect either an unaligned superintelligence singleton or a value-aligned superintelligence creating utopia to be among the likely outcomes within the next few decades.
For the unaligned superintelligence point, my basic reasons are that I now believe the alignment problem has gotten significantly easier compared to 15 years ago, that I’ve become more bullish on AI control working out since o3, and that I’ve come to think instrumental convergence is probably correct for some AIs we build in practice, but that instrumental drives are more constrainable on the likely paths to AGI and ASI.
For the alignment point, a big reason is that I now think what makes an AI aligned is primarily the data it is trained on rather than its inductive biases, and one of my biggest divergences with the LW community comes down to me thinking that inductive bias is far less necessary for alignment than people usually assume, especially compared to 15 years ago.
For AI control, one update I’ve made from o3 is that I believe OpenAI managed to get the RL loop working in domains where outcomes are easily verifiable, but not in domains where verification is hard; programming and mathematics are domains where verification is easy. The tie-in is that capabilities will be spikier and narrower than you might think, and this matters because I believe narrow/tool AI has a relevant role to play in an intelligence explosion, so you can actually affect the outcome by building narrow-capabilities AI for a few years. The fact that AI capabilities are spiky in domains where we can easily verify outcomes is also good for eliciting AI capabilities, which is part of AI control.
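To gesture at what I mean by “easily verifiable”: here is a minimal, hypothetical sketch (not OpenAI’s actual pipeline, and the names are mine for illustration) of an outcome-based reward for code, where the reward signal is just whether generated code passes programmatic tests, so no human judgment is needed inside the RL loop.

```python
# A minimal sketch of a verifiable-outcome reward for code generation.
# This is illustrative only; it is not any lab's actual RL setup.
import subprocess
import tempfile


def verifiable_reward(candidate_code: str, test_code: str) -> float:
    """Reward 1.0 if the candidate program passes the unit tests, else 0.0."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code + "\n" + test_code)
        path = f.name
    # Run the candidate plus its tests; the exit code is the whole verdict.
    result = subprocess.run(["python", path], capture_output=True, timeout=30)
    return 1.0 if result.returncode == 0 else 0.0


# In a hard-to-verify domain (e.g. "write a persuasive essay") there is no
# analogous cheap, objective check, which is why I expect the same recipe
# to transfer less well and capabilities to stay spiky.
```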
For the singleton point, it’s probably because I believe takeoff is both slow enough and distributed enough that multiple superintelligent AIs can arise.
For the value-aligned superintelligence creating a utopia for everyone, my basic reason for not really believing in this is that I think value conflicts are effectively irresolvable due to moral subjectivism, which forces any utopia to be a utopia only for some people, and I expect the set of people included in any individual utopia to be small in practice (because value conflicts become more relevant once AIs can create nation-states all by themselves).
As for why humans remain the decision makers, this is probably because AI is either controlled, or because certain companies have chosen to give AIs instruction-following drives and that has actually succeeded.