I’m all for improving the details. Which part of the framing seems focused on persuasion vs. “fast, effective communication”? How would you formalize “fast, effective communication” in a gradeable sense? (Persuasion seems gradeable via “we used this argument on X people; how seriously they took AI risk increased from A to B on a 5-point scale”.)
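As a toy illustration of that kind of grading (purely a sketch; the argument, sample size, and 5-point responses below are all made up), the persuasion metric could just be the mean pre/post shift in self-reported concern:

```python
# Hypothetical sketch: grade an argument's persuasiveness as the mean
# shift in self-reported AI-risk concern on a 5-point scale.
from statistics import mean

def persuasion_shift(pre_scores, post_scores):
    """Mean change in concern (1-5 scale) after hearing the argument."""
    assert len(pre_scores) == len(post_scores)
    return mean(post - pre for pre, post in zip(pre_scores, post_scores))

# Example with invented responses from five participants:
pre = [2, 3, 1, 2, 4]
post = [3, 3, 2, 4, 4]
print(persuasion_shift(pre, post))  # 0.8
```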
Maybe you could measure how effectively people pass e.g. a multiple-choice version of an Intellectual Turing Test (on how well they can emulate the viewpoint of people concerned about AI safety) after hearing the proposed explanations.
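A minimal sketch of how such a multiple-choice ITT could be scored (the question IDs, answer key, and passing threshold are invented for illustration): each item asks which answer a safety-concerned person would give, and the grade is the fraction a participant gets right after hearing the explanation.

```python
# Hypothetical sketch: score a multiple-choice Intellectual Turing Test.
# Each item asks "which answer would someone concerned about AI safety give?";
# the answer key and pass threshold here are made up for illustration.

def itt_score(responses, answer_key):
    """Fraction of items where the participant picked the 'concerned' answer."""
    correct = sum(1 for q, ans in answer_key.items() if responses.get(q) == ans)
    return correct / len(answer_key)

answer_key = {"q1": "b", "q2": "d", "q3": "a"}          # made-up key
responses = {"q1": "b", "q2": "d", "q3": "c"}           # made-up participant
score = itt_score(responses, answer_key)
print(f"ITT score: {score:.2f}, pass: {score >= 0.8}")  # 0.67, pass: False
```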
[Edit: To be explicit, this would help further John’s goals (as I understand them) because it ideally tests whether the AI safety viewpoint is being communicated in such a way that people can understand and operate the underlying mental models. This is better than testing how persuasive the arguments are because it’s a) more in line with general principles of epistemic virtue, and b) more likely to persuade people iff the specific mental models underlying AI safety concern are correct.
One potential issue is people bouncing off the arguments early and never getting around to building their own mental models. So maybe you could also test for succinct, high-level arguments that persuade target audiences to take a deeper dive into the specifics? That seems like a much less concerning persuasion target to optimize, since the worst case is people being wrongly persuaded to “waste” time thinking about the same stuff the LW community has already spent ~20 years thinking about.]