I think one of the major failure modes here was politicizing the climate change movement, which led to 40 years of blocked climate solutions.

Climate change is now being solved, albeit more slowly than people would like. But conditional on a world where AI alignment is as easy as the post suggests, serious changes would be in order for LW, and the Alignment Forum should close up shop.
Yes, I agree that the politicisation is the central issue. But this is exactly why I wrote the first part—I feel that this section is true despite it (I didn’t claim that most people agree with the solution, only that the elites, experts, and the reader’s social bubble does!).
So one question I’m trying to understand is: given that politicisation happened to climate change, why do we think it won’t happen to AI governance? That is, pursuing goals by political means might just usually end up like that, because of the basic structure of political discourse (you get points for opposing the other side, etc.).