All the questions about how future AIs will actually perform in the real world depend on how far they diverge from utility maximizers.
That seems highly inaccurate to me. AIs will approximate rational expected-utility maximisers more closely than current organisms do, so the expected utility maximisation framework will become a better predictor of their behaviour as time passes.
Obviously, the utility function of AIs will not be to produce paper clips.
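To make the prediction concrete, here is a minimal sketch of what the expected utility maximisation framework says an agent does: pick the action whose probability-weighted utility over possible outcomes is highest. The action names, probabilities, and utility values below are invented for illustration only, not taken from anything above.

    def expected_utility(outcomes):
        """outcomes: list of (probability, utility) pairs for one action."""
        return sum(p * u for p, u in outcomes)

    def choose_action(actions):
        """actions: dict mapping action name -> list of (probability, utility) pairs.
        Returns the action with the highest expected utility."""
        return max(actions, key=lambda a: expected_utility(actions[a]))

    if __name__ == "__main__":
        # Hypothetical decision problem: a risky option versus a safe one.
        actions = {
            "explore": [(0.5, 10.0), (0.5, -2.0)],  # expected utility 4.0
            "exploit": [(1.0, 3.0)],                # expected utility 3.0
        }
        print(choose_action(actions))  # prints "explore"

The point of the framework as a predictor is that, given an agent's utility function and beliefs, this one rule pins down its choice; the claim above is that future AIs' behaviour will fit this rule more tightly than that of present-day organisms.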