As far as I understand, MIRI did not assume that we’re just able to give the AI a utility function directly.
There’s lots of material that does assume that, even if there is some that doesn’t.
The Risks from Learned Optimization paper was written mainly by people from MIRI.