You write
> If you’re worried about an oncoming problem and discussing it with others to plan, your ideal interlocutor, generally, is someone who agrees with you about the danger.

and I’d like to add the disclaimer ‘...if you want to focus on the problem’. Which you might want to, as in your main example of AI risk. It might not be the best way in general (and you do explicitly say “generally” there). It might not be the best way if the pro and con positions are more widely known, or more equally distributed in the general population (or at least in the part of the population that is educated about such things).