If the world is literally ending, and political persuasion seems on the critical path to preventing that, and rationality-based political persuasion has thus far failed while the empirical track record of persuasion for its own sake is far superior, and most of the people most familiar with articulating AI risk arguments are on LW/AF, is it not the rational thing to do to post this here?
I understand wanting to uphold community norms, but this strikes me as in a separate category from “posts on the details of AI risk”. I don’t see why this can’t also be permitted.
TBC, I’m not saying the contest shouldn’t be posted here. When something with downsides is nonetheless worthwhile, complaining about it but then going ahead with it is often the right response—we want there to be enough mild stigma against this sort of thing that people don’t do it lightly, but we still want people to do it if it’s really clearly worthwhile. Thus my kvetching.
(In this case, I’m not sure it is worthwhile, compared to some not-too-much-harder alternative. Specifically, it’s plausible to me that the framing of this contest could be changed to not have such terrible epistemics while still preserving the core value—i.e. make it about fast, memorable communication rather than persuasion. But I’m definitely not close to 100% sure that would capture most of the value.
Fortunately, the general policy of imposing a complaint-tax on really bad epistemics does not require me to accurately judge the overall value of the proposal.)
I’m all for improving the details. Which part of the framing seems focused on persuasion vs. “fast, effective communication”? How would you formalize “fast, effective communication” in a gradeable sense? (Persuasion seems gradeable via “we used this argument on X people; how seriously they took AI risk increased from A to B on a 5-point scale”.)
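For concreteness, a minimal sketch of how that parenthetical grading scheme could work, assuming each participant rates how seriously they take AI risk on a 1–5 scale before and after hearing an argument (the function name and data below are purely illustrative, not anything from the contest):

```python
# Toy sketch (not from the contest) of the pre/post grading idea:
# each participant rates how seriously they take AI risk on a 1-5 scale
# before and after hearing an argument; the score is the mean shift.
from statistics import mean

def persuasion_shift(pre_ratings, post_ratings):
    """Mean change on the 1-5 scale; positive means the argument moved people."""
    assert len(pre_ratings) == len(post_ratings)
    return mean(post - pre for pre, post in zip(pre_ratings, post_ratings))

# Made-up ratings for five participants:
pre = [2, 3, 1, 2, 4]
post = [3, 3, 2, 4, 4]
print(persuasion_shift(pre, post))  # 0.8
```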
Maybe you could measure how effectively people pass e.g. a multiple choice version of an Intellectual Turing Test (on how well they can emulate the viewpoint of people concerned by AI safety) after hearing the proposed explanations.
[Edit: To be explicit, this would help further John’s goals (as I understand them) because it ideally tests whether the AI safety viewpoint is being communicated in such a way that people can understand and operate the underlying mental models. This is better than testing how persuasive the arguments are because it is (a) more in line with general principles of epistemic virtue and (b) more likely to persuade people iff the specific mental models underlying AI safety concern are correct.
One potential issue would be people bouncing off the arguments early and never getting around to building their own mental models, so maybe you could test for succinct/high-level arguments that successfully persuade target audiences to take a deeper dive into the specifics? That seems like a much less concerning persuasion target to optimize, since the worst case is people being wrongly persuaded to “waste” time thinking about the same stuff the LW community has been spending a ton of time thinking about for the last ~20 years]
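To make the multiple-choice ITT idea above concrete, here is a toy scoring sketch, assuming each question asks which answer a person concerned about AI safety would give and we have a reference answer key (the questions, key, and responses below are placeholders I made up):

```python
# Toy sketch of scoring a multiple-choice Intellectual Turing Test:
# the score is the fraction of questions on which the respondent picks the
# same answer a person concerned about AI safety would give (the answer key).
def itt_score(responses, answer_key):
    """Fraction of questions where the respondent matches the reference key."""
    matched = sum(1 for q, ans in answer_key.items() if responses.get(q) == ans)
    return matched / len(answer_key)

# Placeholder data: three questions, respondent matches two of three.
answer_key = {"q1": "b", "q2": "d", "q3": "a"}
responses = {"q1": "b", "q2": "c", "q3": "a"}
print(itt_score(responses, answer_key))  # ~0.67
```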
This comment thread did convince me to put it on personal blog (previously we’ve frontpaged writing contests, and we went ahead and unreflectively did the same for this post).
I don’t understand the logic here? Do you see it as bad for the contest to get more attention and submissions?
No, it’s just the standard frontpage policy:

Frontpage posts must meet the criteria of being broadly relevant to LessWrong’s main interests; timeless, i.e. not about recent events; and are attempts to explain not persuade.
Technically the contest is asking for attempts to persuade not explain, rather than itself attempting to persuade not explain, but the principle obviously applies.
As with my own comment, I don’t think keeping the post off the frontpage is meant to be a judgement that the contest is net-negative in value; it may still be very net positive. It makes sense to have standard rules which create downsides for bad epistemics, and if some bad epistemics are worthwhile anyway, then people can pay the price of those downsides and move forward.
Raemon and I discussed whether it should be frontpage this morning. Prizes are kind of an edge case in my mind. They don’t properly fulfill the frontpage criteria but also it feels like they deserve visibility in a way that posts on niche topics don’t, so we’ve more than once made an exception for them.
I didn’t think too hard about the epistemics of the post when I made the decision to frontpage it, but after John pointed out the suspect epistemics, I’m inclined to agree, and I concurred with Raemon in moving it back to Personal.
----
I think the prize could be improved simply by rewarding the best arguments both for and against AI risk. This might actually be more convincing to skeptics: we paid people to argue against this position, and now you can see the best they came up with.
Ah, instrumental and epistemic rationality clash again