Describing such a prior abstractly is easy (just take the Solomonoff prior over programs).
Could you say more? I’m skeptical. You’re saying, the utility function is a function from what inputs to what outputs? How do you update your beliefs in the true utility function based on experience; i.e., given a set of programs that each putatively output the true utility given some input, and given some experience (camera inputs, introspective memories, memories of actions taken) how do you compute the likelihood ratios to apply to those programs?
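To make the question concrete, here is a toy sketch of what the updating step would have to look like. All names and numbers are hypothetical: it assumes a finite set of candidate programs, a Solomonoff-style prior of 2^(-description length), and, crucially, that each candidate already comes with a likelihood P(experience | program). Supplying that likelihood for a program that merely outputs *utilities* is exactly the difficulty being raised.

```python
import math

# Toy Bayesian update over candidate programs (illustrative only).
# Each hypothetical entry pairs a description length in bits with an
# assumed likelihood of the observed experience under that program --
# where such a likelihood comes from is the open question above.
candidates = {
    "prog_a": (10, 0.8),   # (description_length_bits, P(experience | program))
    "prog_b": (12, 0.4),
    "prog_c": (15, 0.9),
}

def posterior(cands):
    """Multiply the 2^(-length) prior by each likelihood, then normalize."""
    unnorm = {name: 2.0 ** -length * lik
              for name, (length, lik) in cands.items()}
    z = sum(unnorm.values())
    return {name: w / z for name, w in unnorm.items()}

post = posterior(candidates)
```

The arithmetic of the update is trivial once the likelihoods exist; the sketch just makes visible that the entire problem is hidden in the second element of each tuple.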