Writing this post as if it’s about AI risk specifically seems weirdly narrow.
I disagree. Parts 2-5 wouldn’t make sense as arguments for a random other cause area that people go to college hoping to revolutionize. Parts 2-5 are about how AI is changing rapidly, and will continue changing rapidly, and how those changes feed into changes in discourse, such that it’s more of a mistake here than for other areas to treat humanity as a purely static entity that either does or doesn’t take AI x-risk seriously enough.
By contrast, animal welfare is another really important area that kids go to college hoping to revolutionize, and they end up getting disillusioned exactly as you describe. But the facts-on-the-ground and the facts-being-discussed about animal welfare are not going to change as drastically over the next 10 years as the facts about AI. Generalizing from other cause areas to AI the way you’re doing is not valid, because AI is in fact going to be more impactful than most other things that ambitious young people try to revolutionize. Even arguments of the form “But gain-of-function research still hasn’t been banned” aren’t fully applicable, because AI is (I claim, and I suspect you believe) going to be more impactful than synthetic biology over the next ~10 years, and that impact creates opportunities for discourse that could be even more impactful than COVID was.
To be clear, I’m not trying to argue “everything is going to be okay because discourse will catch up”. I’m just saying that discourse around AI specifically is not as static as the FAE might lead one to feel/assume, and that I think the level of faith in changing discourse among the ~30 people I’m thinking of when writing this post seems miscalibratedly low.
I agree parts 2-5 wouldn’t make sense for all the random cause areas, but they would for a decent chunk of them. CO2-driven climate change, for example, would have been an excellent fit for those sections about 10 years ago.
That said, insofar as we’re mainly talking about the level of discourse, I at least partially buy your argument. On the other hand, the OP makes it sound like you’re arguing against pessimism about shifting institutions in general, which is a much harder problem than discourse alone (as evidenced by the climate change movement, for instance).
the level of faith in changing discourse among the ~30 people I’m thinking of when writing this post seems miscalibratedly low.
(Agree again)
To add:
The discourse that you’re referring to seems likely to be Goodharted, so it’s not a good proxy for whether institutions will make sane decisions about world-ending AI technology. A test that would distinguish these variables would be to make logical arguments on a point that’s not widely accepted. If the response is updating or logical counterargument, that’s promising; if the response is some form of dismissal, that’s evidence that the underlying generators of non-logic-processing are still there.