I agree with HiddenTruth and prase. The original post is flawed: it starts with a perfectly good idea ("if there were a group that 'did science' but was always wrong, it would make a good control group to compare against 'real science'"), but then blows it by simply assuming that parapsychologists are indeed always wrong.
FWIW, I too believe parapsychologists are probably almost always wrong, but so what? Who cares what I believe? No one does, and no one should (without evidence), and that's the point: the post's assumption is no better grounded than my hunch.
Sorry, I'm not too familiar with the community, so I'm not sure whether this question is about AI alignment in particular or about risks more broadly. Assuming the latter: I think the most overlooked problem is politics. I worry about rich and powerful sociopaths being able to do evil without consequences, or even without being detected (except by the victims, of course). We probably can't do much about the existence of sociopaths themselves, but I think we can and should think about the best ways to increase transparency and reduce inequality. For what it's worth, I'm a negative utilitarian.