You’re being downvoted and nobody’s telling you why :-(, so I thought I’d give some notes.
You’re not talking to the right audience. Few groups are more emotionally in favor of a glorious transhumanist future than people on LessWrong. These are not technophobes who are afraid of change. They’re technophiles who have realized, in harsh contrast to the conclusion they emotionally want, that making a powerful AI would likely be bad for humanity.
Also, it’s important not to overly anthropomorphize AIs, and you are doing that all over the place in your argument.
These arguments have been rehashed a lot. It’s fine to argue that the LessWrong consensus opinion is wrong, but you should indicate you’re familiar with why the LessWrong consensus opinion is what it is.
(To think about why an AI might not settle on a cooperative post-enlightenment philosophy, read, I don’t know, “Sorting Pebbles Into Correct Heaps”?)