There’s a certain probability that it would do the right thing anyway, a certain probability that it wouldn’t, and so on. The probability of an AGI turning unfriendly depends on those other probabilities, though MIRI has given very little attention to moral realism/objectivism/convergence.