I wonder if, and when, we should behave as if we were VNM-rational. It seems vital to act VNM-rationally if we’re interacting with Omega, or for that matter with anyone else who understands VNM-rationality and is capable of constructing money pumps. But as you point out, we don’t have VNM-utility functions. Therefore, there exist some VNM-rational decisions that would make us unhappy. The big question is whether we can be happy about a plan to change all of our actual preferences so that we become VNM-rational, and if not, whether there is a way to remain happy while strategically avoiding money pumps and the other pitfalls of not being entirely VNM-rational.
I expect problems in my VNM-rational future for any preference that is not maximal. If I VNM-prefer A over B, then at some point I will have to answer the question “Why should I ever use any resources to increase the probability of B in a lottery between A and B?” At some point I probably won’t do B anymore, although I’ll VNM-rationally have more A to make up for it. Except that I like variety and probably have the circular preference of sometimes liking B more than A, which leads directly to money pumping if some other agent knows exactly how my preferences work. I also occasionally enjoy surprises, but I am pretty sure that is equivalent to having a “specific utility of gambling”.
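To make the money-pump worry concrete, here is a minimal sketch (my own illustration, not anything from the original discussion): an adversary who knows exactly when my preference flips between A and B can sell me whichever swap I currently want, over and over, for a small fee each time. Every name and number in it is an illustrative assumption.

```python
def prefers_B_today(day: int) -> bool:
    """Stand-in for a variety-seeking preference that flips over time."""
    return day % 2 == 1


def run_money_pump(days: int, swap_fee: float = 1.0) -> float:
    """Total amount paid to an adversary who knows the agent's preference schedule."""
    holding = "A"
    money_paid = 0.0
    for day in range(days):
        wanted = "B" if prefers_B_today(day) else "A"
        if wanted != holding:
            # The adversary offers exactly the swap the agent prefers right now,
            # for a small fee; the agent accepts because, today, it values the
            # swap at more than the fee.
            holding = wanted
            money_paid += swap_fee
    return money_paid


if __name__ == "__main__":
    print(run_money_pump(days=10))  # 9.0: nine paid swaps, yet still only one good in hand
```

At every step the agent gets what it currently wants, so no single trade looks like a mistake; the loss only shows up in aggregate, which is exactly what makes circular preferences exploitable by anyone who can see the pattern.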
Perhaps I can use VNM-rationality to achieve the maximal happiness of this non-VNM-rational me, even if that involves satisfying a lot of circular preferences in a sufficiently pseudo-random order to trigger my desire for variety. But that implies that ultimately I do have a utility function, or at least that there exists a utility function that I wish to see maximized. Its preferences are not my actual preferences; I think it’s a type error to even try to compare my preferences with its preferences. It is not my utility that is being maximized, but rather the utility of a process that makes the world as awesome and happy for me as possible by causing my VNM-irrational happiness. If I am not careful in designing this process, it will wirehead me or simply tile the universe with my smiling face. But those things would not make me happy. Is there a way to make a VNM-rational utility function care about a VNM-irrational being in such a way that the irrational preferences are satisfied to the greatest extent possible without violating the wireheading-unhappiness or pointless-tiling-of-the-universe-unhappiness preferences? In my opinion, that is the practical question to answer when investigating Moral Philosophy in terms of FAI.