Alright, I’ve written a comment on why I think AI risk from misalignment is very unlikely, and also given an example of an epistemic error @Eliezer Yudkowsky made in that post.
This also implicitly means that delaying AI is not nearly as valuable as LWers like Nate Soares and Eliezer Yudkowsky thought in the past.
It’s a long comment, so do try to read it in full:
https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/#Gcigdmuje4EacwirD