Yeah, this is basically the thing I’m terrified about. If someone has been convinced of AI risk by arguments that don’t track truth, I find it incredibly hard to believe they’d ever contribute useful alignment research. And more generally: if you recruit using techniques that select for people with bad epistemics, you end up with a community with shitty epistemics and then wonder what went wrong.