Hey amsgrober! Welcome to LessWrong.
This comment is a bit off topic for this post – LessWrong is built around a lot of shared background discussion on AI as well as rationality. Part of the goal is to be able to talk about specific subproblems within AI Safety without having to rehash all the arguments every single time.
If you want to talk about this specific issue, you can write a separate blogpost about it, although you’ll probably get a better response if you’ve read up a bit on the past discussion of this topic. This post on the Orthogonality thesis is the easiest link I have handy, and there’s a lot of expanded discussion in the Rationality: A-Z sequences.