I think the case for AI being an x-risk is highly disjunctive (see below), so someone who engages with the arguments in detail is fairly likely to find at least one line of argument convincing. (One of these lines of argument, namely local FOOM of a utility maximizer, may have been emphasized a bit too much, leading some outsiders to dismiss the field under the impression that it's the only argument.)
We need to clarify and strengthen the case for AI x-risk.
Holden Karnofsky used to be a critic, but later changed his mind.
Have you seen this and this (including my comment)?