Nick Beckstead has this to say in his dissertation on decision theory and x-risk:
we never hear about people who get money pumped. Why? One possibility is that people never get offered these trades that would trigger money pumps. A more plausible answer is that people do not act on their preferences in the inflexible way this argument assumes. When they get into a situation where they see that their preferences would lead them to get money pumped, they either change their preferences or refuse to continue to act on some of those preferences. Because of this, money pump arguments do not illustrate a practical danger for humans. It is plausible that having preferences which would be theoretically susceptible to a money pump displays a failure of perfect rationality, but, once again, that a certain approach is imperfect does not imply that an improved approach is meaningfully available.
Very interesting! I actually started having similar thoughts about money pumps and utility functions after learning Haskell. Specifically, that you can avoid the intransitivity → money-pumpable implication if you just assume (quite reasonably) that humans’ utility functions are lazily evaluated and have side effects (i.e. are impure functions).
In other words, humans don’t instantly know the implications of their utility function for every possible decision (which would imply logical omniscience), but rather evaluate it only as the need arises; and once they evaluate it for a given input, that very evaluation can change the function, so that it has a different I/O mapping on future evaluations (the impure part).
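A minimal sketch of what I have in mind, in Haskell. Everything here is made up for illustration: the oracle, its cache, and the length-based stand-in rule are not anyone’s actual theory, just a toy showing a preference query that is only computed when asked for, and whose being asked changes how future queries are answered.

```haskell
import Data.IORef
import qualified Data.Map as Map

-- A preference "oracle": pairwise judgments are only computed when asked
-- for (the lazy part), and answering a query records it, which changes how
-- later queries are answered (the impure part).
type Choice = String

newtype Oracle = Oracle (IORef (Map.Map (Choice, Choice) Bool))

newOracle :: IO Oracle
newOracle = Oracle <$> newIORef Map.empty

prefers :: Oracle -> Choice -> Choice -> IO Bool
prefers (Oracle ref) a b = do
  cache <- readIORef ref
  case Map.lookup (a, b) cache of
    Just ans -> return ans            -- settled by an earlier evaluation
    Nothing  -> do
      let ans = length a <= length b  -- stand-in for however the judgment is actually made
      -- The side effect: record the answer and its converse, so future
      -- evaluations (including of the reversed pair) may differ from what
      -- the raw rule above would have said.
      writeIORef ref (Map.insert (b, a) (not ans) (Map.insert (a, b) ans cache))
      return ans

main :: IO ()
main = do
  o <- newOracle
  prefers o "tea" "gin" >>= print   -- True: computed from the stand-in rule
  prefers o "gin" "tea" >>= print   -- False: pinned down by the first query,
                                    -- even though the raw rule would say True
```

Nothing in this picture ever tabulates the whole preference ordering in advance, which seems to be exactly the property the money-pump argument relies on.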
EY has actually said as much about morality and human values, but used the term “abstract idealized dynamic”.
Anyone know how badly (or if at all) the standard implications of the VNM utility axioms break down if you take away the requirement that the utility function must be strictly evaluated and pure?
Edit: Do you have a cite for that quote? I googled it and only got your post.
Beckstead’s dissertation isn’t online yet, and he asked me not to upload it.
Thanks for sharing the connections between human utility functions and programming functions.
Other works on that subject are Muehlhauser (2012) and Nielsen & Jensen (2004), both of which I cited in IEME, and also Srivastava & Schrater (2012), which was recently brought to my attention by Jacob Steinhardt.
Economists have pointed out that technical functions (i.e. the functions which yield the “outputs” for any given resource inputs and production techniques) are also explored lazily, as it were. It’s quite likely that the existing literature on machine learning and search theory has extensively considered the implications of such exploration on the resulting behavior.
It’s possible that many people can detect many forms of money pump and wish to avoid them more than they wish to engage in the individual choices which form the pump.
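A toy way to make that concrete, in Haskell, with everything (the cyclic preferences, the history check) purely illustrative: an agent whose pairwise preferences are cyclic, but which refuses any trade that would hand back something it already paid to give up.

```haskell
-- A toy agent with cyclic pairwise preferences (B over A, C over B, A over C)
-- that nevertheless cannot be pumped, because it remembers what it has
-- already traded away and refuses to buy any of it back.
data Item = A | B | C deriving (Eq, Show)

pairwise :: Item -> Item -> Bool
pairwise B A = True   -- would take B over A
pairwise C B = True   -- would take C over B
pairwise A C = True   -- would take A over C (the intransitive step)
pairwise _ _ = False

-- Accept an offer only if it is pairwise preferred AND not an item we have
-- previously held; 'past' is the list of items already given up.
acceptTrade :: [Item] -> Item -> Item -> Bool
acceptTrade past current offered =
  pairwise offered current && offered `notElem` past

-- Run the classic pump A -> B -> C -> A: the first two offers are accepted,
-- the third (buying A back) is refused, so the cycle never closes.
main :: IO ()
main = mapM_ print (scanl step ([], A) [B, C, A])
  where
    step (past, cur) offered
      | acceptTrade past cur offered = (cur : past, offered)
      | otherwise                    = (past, cur)
```

The pairwise preferences stay intransitive, but the history check means no sequence of offers can cycle the agent back to where it started minus the fees, which is roughly the behavior described above: valuing “not being pumped” over the individual trades that would make up the pump.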