I think I didn’t articulate my argument clearly, so I tried to clarify it in my reply to Jessica.
I think my argument might be especially relevant to the effort of persuading AI researchers not to build goal-directed systems.
If a result of this effort is convincing more AI researchers of the general premise that x-risk from AI is worth worrying about, then that’s a very strong argument in favor of carrying out the effort (and I agree this result should correlate with convincing AI researchers not to build goal-directed systems—if that’s what you argued in your comment).
Yeah, I was imagining that we would convince AI researchers that goal-directed systems are dangerous, and that we should build the non-goal-directed versions instead.