Where I’ve seen people use PDUs (perception-determined utility functions) in AI or philosophy, they weren’t confused; rather, they deliberately assumed perception-determined utility (or something even more restrictive) in order to prove theorems.
Well, here’s a recent SIAI paper that uses perception-determined utility functions, but apparently not in order to prove theorems (since the paper contains no theorems). The author was advised by Peter de Blanc, who two years ago wrote the OP arguing against PDUs. Which makes me confused: does the author (Daniel Dewey) really think that PDUs are a good idea, and does Peter now agree?
I don’t think that human values are well described by a PDU. I remember Daniel talking about a hidden reward tape at one point, but I guess that didn’t make it into this paper.
An adult agent has access to its internal state and its perceptions. If we model its access to its internal state as being via internal sensors, then sense data are all it has access to: they are its only way of knowing about anything beyond its genetic heritage.
In that case, utility functions can only accept sense data as inputs—since that is the only thing that any agent ever has access to.
If you have a world-determined utility function, then at some stage the state of the world must first be reconstructed from perceptions before the function can be applied. That makes the world-determined utility functions an agent can actually compute a subset of the perception-determined ones.
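Here’s a minimal sketch of that composition in Python. All of the names and types below (`as_pdu`, `reconstruct`, the toy `WorldState`) are hypothetical illustrations of the argument, not anything from Dewey’s paper or de Blanc’s post:

```python
# Sketch: a perception-determined utility function (PDU) maps the agent's
# perception history to a real number; a world-determined utility function
# (WDU) maps world states to a real number. A computable WDU becomes a PDU
# once you compose it with a world-model that reconstructs state from
# perceptions.

from typing import Callable, Sequence

Percept = str      # stand-in for raw sense data
WorldState = dict  # stand-in for a full description of the world

PDU = Callable[[Sequence[Percept]], float]
WDU = Callable[[WorldState], float]

def as_pdu(wdu: WDU,
           reconstruct: Callable[[Sequence[Percept]], WorldState]) -> PDU:
    """Turn a world-determined utility function into a perception-determined
    one by first reconstructing the world state from the perception history."""
    def pdu(percepts: Sequence[Percept]) -> float:
        return wdu(reconstruct(percepts))
    return pdu

# Toy example: a WDU that rewards worlds containing a flag, plus a crude
# reconstruction that infers the flag from whether it was ever perceived.
toy_wdu: WDU = lambda world: 1.0 if world.get("flag") else 0.0
toy_reconstruct = lambda percepts: {"flag": "flag_seen" in percepts}

toy_pdu = as_pdu(toy_wdu, toy_reconstruct)
print(toy_pdu(["noise", "flag_seen"]))  # 1.0
print(toy_pdu(["noise"]))               # 0.0
```

The point is the direction of the factoring: any world-determined utility the agent can actually evaluate has to pass through some `reconstruct` step, and the composed function is perception-determined; the converse doesn’t hold.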