My feeling is that the current ways the most prominent AI risk people make their cases don’t emphasize the disjunctive nature of AI risk enough, and tend to focus too much on one particular line of argument that they’re especially confident in (e.g., intelligence explosion / fast takeoff). As you say, “If they decide to hear out a first round of arguments but don’t find them compelling enough, they drop out of the process.” Well, that doesn’t tell me much if they only heard about one line of argument in that first round.
To be clear, the author is Philip Trammell, not me. Added quotes to make it clearer.