What are the values you judge those as “wrong” by, if not human ones? Yes, it’s a terrible idea to build an AI that’s just a really intelligent/fast human, because humans have all sorts of biases, and bugs that are activated by having lots of power, that would prevent them from optimizing for the values we actually care about. But finding out what values we actually care about, so as to implement them (directly, or indirectly through CEV-like programs), is definitely a task that’s going to involve looking at human brains.