OK. That’s much better. Current AI research is anthropomorphic, because AI researchers only have the human mind as a model of intelligence. MIRI considers anthropomorphic assumptions a mistake, a view which is itself mistaken.
A MIRI-type AI won’t have the problem you indicated, because it is not anthropomorphic and only has the values that are explicitly programmed into it, so there will be no conflict.
But adding constraints to an anthropomorphic AI, if anyone wants to do that, could be a problem.
But I don’t think that MIRI will succeed at building an FAI by non-anthropomorphic means in time.
I still don’t see why you are considering a combination of a non-MIRI AI and a MIRI friendliness solution.