I found this thread interesting and useful, but I feel a key point has been omitted thus far (from what I’ve read):
Public, elite, and policymaker beliefs and attitudes related to AI risk aren’t just a variable we (members of the EA/longtermist/AI safety communities) have to bear in mind and operate in light of, but instead also a variable we can intervene on.
And so far I’d say we have (often for very good reasons) done significantly less to intervene on that variable than we could’ve or than we could going forward.
So the apparent skepticism of these audiences may partly reflect how little they've actually been exposed to the arguments, and it seems plausible that they're fairly convincible if exposed to better efforts to really explain the arguments in a compelling way.
We’ve definitely done a significant amount of this kind of work, but I think we’ve often (a) deliberately held back on doing so or on conveying key parts of the arguments, due to reasonable downside risk concerns, and (b) not prioritized this. And I think there’s significantly more we could do if we wanted to, especially after a period of actively building capacity for this.
Important caveats / wet blankets:
I think there are indeed strong arguments against trying to shift relevant beliefs and attitudes in a more favorable direction, including not just costs and plausibly low upside but also multiple major plausible downside risks.[1]
So I wouldn’t want anyone to take major steps in this direction without checking in with multiple people working on AI safety/governance first.
And it’s not at all obvious to me we should be doing more of that sort of work. (Though I think whether, how, & when we should is an important question and I’m aware of and excited about a couple small research projects that are happening on that.)
All I really want to convey in this comment is what I said in my first paragraph: we may be able to significantly push beliefs and opinions in favorable directions relative to where they are now or would be in the future by default.
[1] Due to time constraints, I’ll just point to this vague overview.