This makes sense if you assume things are symmetric. Hopefully there’s enough interest in truth and valid reasoning that if the “AI is dangerous” conclusion is correct, it’ll have better arguments on its side.