That looks very similar to what I was writing about, though I’ve tried to be rather more formal/mathematical about it instead of coming up with ad-hoc notions of “human”, “behavior”, “perception”, “belief”, etc. I would want the learning algorithm to have uncertain/probabilistic beliefs about the learned utility function, and if I was going to reason about individual human minds I would rather just model those minds directly (as done in Indirect Normativity).
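A minimal sketch of the "uncertain/probabilistic beliefs about the learned utility function" idea, assuming a small discrete hypothesis space of candidate utility functions and a Boltzmann-rational model of observed choices (the hypothesis space, choice model, and all numbers are illustrative assumptions, not anything from the comment itself):

```python
import numpy as np

# Hypothetical hypothesis space: each row is one candidate utility function
# u(outcome) over three outcomes. The learner keeps a probability distribution
# over these candidates rather than committing to a single utility function.
candidate_utilities = np.array([
    [1.0, 0.0, 0.0],   # hypothesis A: only outcome 0 is valued
    [0.0, 1.0, 0.0],   # hypothesis B: only outcome 1 is valued
    [0.5, 0.5, 0.0],   # hypothesis C: outcomes 0 and 1 valued equally
])
prior = np.full(len(candidate_utilities), 1.0 / len(candidate_utilities))

def choice_likelihood(u, chosen, rejected, beta=3.0):
    """P(human picks `chosen` over `rejected` | utility u), Boltzmann-rational."""
    logits = beta * np.array([u[chosen], u[rejected]])
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return probs[0]

def update(posterior, chosen, rejected):
    """Bayesian update of the belief over utility functions from one observed choice."""
    likelihoods = np.array([
        choice_likelihood(u, chosen, rejected) for u in candidate_utilities
    ])
    posterior = posterior * likelihoods
    return posterior / posterior.sum()

# Observe the human choosing outcome 1 over outcome 2, then outcome 1 over 0;
# the posterior shifts toward hypotheses under which outcome 1 is valued.
posterior = update(prior, chosen=1, rejected=2)
posterior = update(posterior, chosen=1, rejected=0)
print(posterior)
```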