The more obvious problem for utility maximisers is fake utility.
...but your characterisation of the behaviour of reward maximisers and utility maximisers seems rather like a projection to me. IMO, actual behaviour will depend on what the systems believe their purpose is when they come to adjusting their brains. Since they both lack knowledge of the design purpose of their own goal systems, ISTM that the outcome could potentially vary. Maybe they will wirehead, maybe they won't.
Ah, I see. Thanks for taking the time to discuss this—you’ve raised some helpful points about how my argument will need to be strengthened (“universal action” is good food for thought) and clarified (clearly, my account of wireheading is unconvincing).
The paper’s been accepted, and I have a ton of editing to do (need to cut four pages!), so I may not be very quick to respond for the time being. I didn’t want to disappear without warning, and without saying thanks for your time!
OK. I am skeptical that the wirehead problem can be solved simply by invoking expected utility maximisation. IMO, there are at least two problems that go beyond that:
How do you tell the system to maximise (say) temperature, and not some kind of proxy or perception of temperature? (See the toy sketch after these questions.)
How do you construct a practical inductive inference engine without using reinforcement learning?
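To make the first question concrete, here is a minimal Python sketch of my own (not from the thread; the action names and numbers are entirely hypothetical). It shows why the two scoring rules come apart: an agent scored on its sensor's reading of temperature prefers to tamper with the sensor, while an agent scored on the true temperature, supposing that could be specified at all, prefers to actually heat the room.

```python
# Toy illustration of proxy vs. target. Everything here is hypothetical,
# chosen only to make the distinction concrete.

ACTIONS = {
    # action -> (effect on true temperature, effect on sensor reading)
    "run_heater":    (+10.0, +10.0),   # actually warms the room; sensor tracks it
    "tamper_sensor": ( +0.0, +50.0),   # leaves the room cold; sensor reads high
}

def reward_maximiser_score(action):
    """Score by the perceived temperature (the proxy)."""
    _, sensed = ACTIONS[action]
    return sensed

def utility_maximiser_score(action):
    """Score by the true temperature (the intended target),
    assuming it could somehow be specified directly."""
    true_temp, _ = ACTIONS[action]
    return true_temp

if __name__ == "__main__":
    print("reward maximiser picks:", max(ACTIONS, key=reward_maximiser_score))    # tamper_sensor
    print("utility maximiser picks:", max(ACTIONS, key=utility_maximiser_score))  # run_heater
```

Of course, the "true temperature" used in the second scorer is exactly the thing the first question says we do not know how to specify; the sketch only illustrates why the proxy and the target diverge, not how to close the gap.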
FWIW, my current position is that this probably isn't our problem. The wirehead problem doesn't become serious until relatively late on, leaving plenty of scope for transforming the world into a smarter place in the meantime.