That idea seems reasonable at first glance, but upon reflection, I think it’s a really bad idea. It’s one thing to run a red-teaming competition; it’s another to spend money building rhetorically optimised tools for the other side. If we do that, the two efforts might simply cancel out, in which case there was little point running the competition in the first place.
This makes sense if you assume things are symmetric. Hopefully there’s enough interest in truth and valid reasoning that if the “AI is dangerous” conclusion is correct, it’ll have better arguments on its side.