Getting it to understand psychology may be difficult, but since understanding psychology isn't a goal, just something it learns as it grows, there is no risk in that part. The risk comes from the goal itself. I'm trying to reduce the risk of giving the AI a flawed goal by drawing on its knowledge of psychology.
From what I understand, that’s actually the hard part of the Friendliness problem.
It’s a hard problem, but a riskless one.