It doesn’t work against Clippy’s example of an exponential discounter who doesn’t want to work today, knows that tomorrow he still won’t want to work, and yet claims to want to work someday, even though he can’t say when.
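To make the structure of that example concrete, here is a toy calculation (the discount factor, cost, and reward below are illustrative assumptions, not values from Clippy's example): whenever the net discounted payoff of working is negative, deferring one more day always looks strictly better, so the promised "someday" never arrives.

```python
# Toy model of the exponential discounter: working on day t costs C
# immediately and pays reward R after a delay of D days. All numbers
# here are assumptions for illustration.
DELTA = 0.9             # per-day exponential discount factor (assumed)
C, R, D = 10.0, 9.0, 1  # cost of work, later reward, delay (assumed)

def present_value_of_working_on(t: int) -> float:
    """Present value, judged from day 0, of doing the work on day t."""
    return DELTA ** t * (-C + DELTA ** D * R)

# The net payoff of working, -C + DELTA**D * R, is -1.9 here, so
# discounting shrinks the loss: each extra day of delay looks better.
for t in range(5):
    print(t, present_value_of_working_on(t))
# The printed values are strictly increasing in t. On every day the
# agent prefers "work tomorrow" to "work today", so no particular day
# ever arrives on which working is the preferred option.
```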
Almost. It depends on the agent’s computational abilities. From the criteria I specified, it is unclear whether the agent realizes that its decision theory will output the same action every day (i.e. whether it recognizes the symmetry between today and tomorrow under its current decision theory).
If you assume the agent correctly infers that its current decision theory will lead it to perpetually defer work, then it will recognize that the outcome is suboptimal and search for a better decision theory. However, if the agent is unable to reach sufficient (correct) logical certainty about tomorrow’s action, then it is vulnerable to the money pump that User:Will_Sawin described.
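To illustrate the kind of exploit that becomes available, here is a minimal sketch under assumed numbers (not necessarily the exact pump User:Will_Sawin described): a bookie who knows the decision theory deterministically outputs "defer" can sell the agent, every day, a bet that the agent's miscalibrated credence makes look favorable.

```python
# Sketch of exploiting an agent that cannot reach logical certainty
# about its own action tomorrow. Its decision theory in fact always
# outputs "defer", but the agent only assigns credence CREDENCE_WORK
# to "I will work tomorrow". All numbers are illustrative assumptions.
CREDENCE_WORK = 0.5   # agent's (miscalibrated) credence that it works
TICKET_PRICE = 0.4    # bookie's price for a ticket paying 1 if it works

def agent_buys_ticket() -> bool:
    # Expected value under the agent's credence: 0.5 * 1 - 0.4 > 0,
    # so the agent accepts the bet every single day.
    return CREDENCE_WORK * 1.0 - TICKET_PRICE > 0

bookie_profit = 0.0
for day in range(365):
    if agent_buys_ticket():
        worked = False  # the decision theory in fact always defers
        bookie_profit += TICKET_PRICE - (1.0 if worked else 0.0)
print(f"bookie profit after a year: {bookie_profit:.2f}")  # 146.00
```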
I was working from the assumption that the agent is able to recognize the symmetry with future actions, and so did not consider the money pump that User:Will_Sawin described. Such an agent is still, in theory, exploitable: under my assumptions about how such an agent could fail, it will sometimes conclude that it ought to work and sometimes that it ought not, and the money-pumper profits from the statistically predictable shifts.
Even so, that would require the agent I specified to use one more predicate in its decision theory: some source of randomness.
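As a sketch of why statistically predictable shifts are enough for the pumper (the persistence probability and stakes here are assumptions for illustration): if the random predicate produces autocorrelated verdicts, an exploiter who simply bets that tomorrow's verdict matches today's wins most of the time.

```python
import random

# Sketch of exploiting "statistically predictable shifts" even when the
# agent's verdict involves a random predicate. The daily conclusion
# ("work" / "don't work") is modeled as a sticky Markov chain; the
# persistence probability and even-odds stakes are assumed.
random.seed(0)
P_STAY = 0.9       # probability today's verdict repeats tomorrow (assumed)
verdict = True     # True = "I ought to work"

pumper_profit = 0.0
for day in range(10_000):
    prediction = verdict  # the pumper exploits the autocorrelation
    verdict = verdict if random.random() < P_STAY else not verdict
    # Even-odds bet on tomorrow's verdict: the pumper wins 1 when right.
    pumper_profit += 1.0 if prediction == verdict else -1.0
print(f"pumper's average edge per bet: {pumper_profit / 10_000:.2f}")
# Roughly 0.8 per bet: the pumper is right about 90% of the time.
```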