but I think many people can contribute to reducing AI x-risk without reading them.
I think the tough thing here is that it’s very hard to evaluate who, if anyone, is making useful contributions. After all, no one has successfully aligned a superintelligence to date. Maybe the whole field is way off track. All else equal, I trust people who’ve read the Sequences to be better judges of whether we’re making progress in the absence of proper end-to-end feedback than those who haven’t.
Caveat: I am not someone who could plausibly claim to have made any such contribution myself. :P