Hey John, thank you for your feedback. As per the post, we’re not accepting misleading arguments. We’re looking for the subset of sound arguments that are also effective.
We’re happy to consider concrete suggestions which would help this competition reduce x-risk.
Thanks for the idea, Jacob. Not speaking on behalf of the group here, but my first thought is that enforcing symmetry on the discussion probably isn't necessary for good epistemics, especially since the distribution of this community's opinions is skewed. I'd be more worried if particular misleading arguments went unchallenged, but we'll be vetting submissions as they come in, and I'd also encourage anyone who has concerns about a given submission to talk with the author and/or us. My second thought is that we're planning a number of practical outreach projects that will make use of the arguments generated here (we're not trying to host an intra-community debate about the legitimacy of AI risk), so we'd ideally have the prize structure reflect the outreach value each argument produces.
I'm potentially up for opening the contest to arguments for or against AI risk, and letting the distribution of responses reflect the distribution of opinions in the community. I'll discuss this with the rest of the group.