Very well put, and I couldn’t agree more with this. I’ve been reading and thinking more and more about the AI situation over the past year or so, starting when that AI researcher at Google became convinced that he had created a conscious being. Things are now accelerating at a shocking pace, and what once seemed like speculation that wasn’t immediately relevant is now crucially urgent. Time is of the essence. Moreover, I’m becoming increasingly convinced that AI containment, if it is achieved, will be done through political solutions rather than technological solutions. Things are just moving way too fast, and I don’t see how technical alignment will keep up when the pool of alignment researchers is so tiny compared to the enormous number of AI capabilities researchers.
For those of us deeply worried about AI risk, we’re going to have to prepare for a rapid change in the discourse. Public persuasion will be crucial, because if we win, it will be through a combination of public persuasion and effective wielding of the levers of power. This means a paradigm shift in how capital-R Rationalists talk about this issue. Rationalists have a very distinctive mode of discourse which, despite its undeniable benefits, is fundamentally incongruent with more typical modes of thinking. We need to be willing to meet people where they are, empathize with their concerns (including people worried about AI taking their jobs or making life meaningless—this seems to be quite common), and adopt non-Rationalist methods of persuasion and uses of power that are known to work. Memetic warfare, one could call it. This will probably feel very dirty to some, and understandably so, but the paradigm has shifted and now is the time.
The methods of Rationality can still be very useful here—they’re an effective way to interrogate one’s own assumptions and preexisting biases. But people have to be willing and able to use these methods in service of effective persuasion. Keeping our eyes on the prize will also be crucial—if this new limelight ends up getting used to advance other popular Rationalist causes and viewpoints such as atheism and wild animal suffering, I do not see how this could possibly go well.