For example, it could have more computational power than you with which to search for plans that maximize expected utility according to your probability and utility functions. Then it could tell you the answer.
Would it, though? It's not evaluating actions on my future probutility; otherwise it would wirehead me. It's evaluating actions on my present probutility. So the answer seems to depend on whether we allow "tell me the right answer" as a primitive action, or whether it is evaluated as "tell me [String]," which has low probutility.
But of course, if "tell me the right answer" is primitive, how do we stop "do the right thing" from being primitive, which lands us right back in the hot water of the strong optimization of 'utility' that this proposal was supposed to prevent? So I think it should evaluate the specific output, which has low probability(human), and therefore not tell you.
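To make the present-vs-future distinction concrete, here is a minimal toy sketch of the two evaluation rules. The action names, credences, and payoffs (p_right, wirehead_me, and so on) are my own made-up illustrations, not anything specified by the proposal:

```python
# Toy contrast between "score actions by my PRESENT probability/utility" and
# "score them by my FUTURE, post-action probability/utility".
# All numbers are arbitrary and only meant to illustrate the argument above.

ACTIONS = ["do_nothing", "tell_me_string_X", "wirehead_me"]

def present_probutility(action):
    # Evaluated with my current beliefs and values: I can't verify X, so I give
    # it low probability of being the right answer; I currently disvalue wireheading.
    if action == "do_nothing":
        return 0.0
    if action == "tell_me_string_X":
        p_right = 0.01                               # my present credence that X is right
        return p_right * 10 + (1 - p_right) * (-1)   # ~ -0.89: low probutility
    if action == "wirehead_me":
        return -100.0

def future_probutility(action):
    # Evaluated with the beliefs/values I'd have AFTER the action: a convinced
    # future me rates X highly, and a wireheaded future me reports bliss.
    if action == "do_nothing":
        return 0.0
    if action == "tell_me_string_X":
        return 10.0
    if action == "wirehead_me":
        return 1000.0

def best(score):
    return max(ACTIONS, key=score)

print("present-probutility choice:", best(present_probutility))  # do_nothing
print("future-probutility choice:", best(future_probutility))    # wirehead_me
```

Under the present-probutility rule the unverifiable string scores below doing nothing, so the agent stays silent, matching the argument above; only the future-probutility rule recommends wireheading.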
I'll try to write up a proof that it can do what I think it can.