I like this idea, but I would also, it seems, need to consider the (probabilistic) length of time each utility function would last.
That doesn’t change your basic point, though, which seems reasonable.
The one question I have is this: In cases where I can choose whether or not to change my utility function—cases where I can choose to an extent the probability of a configuration appearing—couldn’t I maximize expected utility by arranging for my most-likely utility function at any given time to match the most-likely universe at that time? It seems that would make life utterly pointless, but I don’t have a rational basis for that—it’s just a reflexive emotional response to the suggestion.
Yeah, I agree that you would have to consider time. However, my feeling is that for the utility calculation to be performed at all (that is, even with a fixed utility function), you already have to consider time via the subsequent states you might end up in; so now you just attach an expected utility calculation to each of those subsequent states (and thereby implicitly capture how long each function lasts) instead of using the fixed utility. It is possible, I suppose, that the transition probability could be conditional on the previous state’s utility function too. That is, if you’re really into math one day, a switch to statistics is probably more likely than a switch to history, but conditioned on having already switched to literature, maybe history becomes the more likely next step. That makes for a more complex analysis, but again, approximations and all would help :p
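To make that concrete, here is a rough sketch of the kind of calculation I have in mind. Everything here is invented for illustration (the utility functions, the payoffs, the transition probabilities); the point is just that the function you hold is part of the state, the next function is conditioned on the current one, and how long a function "lasts" falls out of the path probabilities rather than needing to be tracked separately:

```python
from itertools import product

# Toy model: the "state" at each step includes which utility function you hold.
# Transition probabilities are conditional on the previous step's function
# (e.g. if you're into math today, a switch to stats is likelier than history).
# All numbers are made up purely for illustration.
transition = {
    "math":    {"math": 0.7, "stats": 0.2, "history": 0.1},
    "stats":   {"math": 0.2, "stats": 0.6, "history": 0.2},
    "history": {"math": 0.1, "stats": 0.2, "history": 0.7},
}

# Per-step payoff of the world you expect to be in, as scored by each function.
payoff = {"math": 1.0, "stats": 0.8, "history": 0.5}

def expected_utility(start, horizon):
    """Enumerate every path of utility functions over `horizon` steps,
    weight each path by its probability, and sum the payoffs along it."""
    total = 0.0
    for path in product(transition, repeat=horizon):
        prob, prev = 1.0, start
        for f in path:
            prob *= transition[prev][f]
            prev = f
        total += prob * sum(payoff[f] for f in path)
    return total

print(expected_utility("math", horizon=3))
```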
Regarding your second question, let me make sure I’ve understood it correctly. You’re basically asking: couldn’t you change your utility function, what you value, based on whatever is most probable? For instance, if you were likely to wind up stuck in a log cabin whose only entertainment is books on the civil war, you would change your utility to valuing civil war books? Assuming I understood that correctly, then yes, if you could do that, I suppose changing your utility to reflect your world would be the best choice. Personally, I don’t think humans are quite that malleable, so to an extent you’re stuck with who you are. Ultimately, you might also find that some things are objectively better or worse than others; that regardless of the utility function, some things are worse. Things that are damaging to society, for instance, might be objectively worse than the alternatives because the repercussions for you will almost always be bad (jail, a society that doesn’t function as well because you just screwed it up, etc.). If that’s true, you would still have some constant guiding principles; it would just mean that there is a set of other paths that are, in a sense, equally good.
I’m not saying I can change to liking civil war books. I’m saying if I could choose between
A) continuing to like scifi and having fantasy books, or
B) liking civil war books and having civil war books,
I should choose B, even though I currently value scifi > fantasy > civil war. By extension, if I could choose
A) continuing to value specific complex interactions and having different complex interactions, or
B) liking smiley faces and building a smiley-face maximizer,
I should choose B even though it’s counterintuitive. This one is somewhat more plausible, as it seems it’d be easier to build an AI that could change my values to smiley faces and make smiley faces than it would be to build one that works toward my current complicated (and apparently inconsistent) utility function.
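Written out as a toy calculation (with invented scores), the crux is that each bundle gets evaluated by the utility function you would hold in it, not by the one you hold now:

```python
# Hypothetical scores, purely for illustration: how each set of values
# rates each world it might find itself in.
scores = {
    "scifi_fan":     {"fantasy_books": 0.6, "civil_war_books": 0.1},
    "civil_war_fan": {"fantasy_books": 0.2, "civil_war_books": 0.9},
}

option_a = scores["scifi_fan"]["fantasy_books"]        # keep current values, mismatched world
option_b = scores["civil_war_fan"]["civil_war_books"]  # switch values, matched world

print("choose B" if option_b > option_a else "choose A")  # -> choose B
```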
I don’t think society-damaging actions are “objectively” bad in the way you say. Stealing something might be worse than just having it, due to the negative repercussions, but that only changes the relative ordering. Depending on the value of the thing, it might still rank higher than buying it.
Right, so if you can choose your utility function, then it’s better to choose one that can be better maximized. Interestingly, though, if we ever had this capability, I think we could just reduce the problem by using an unbiased utility function. That is, explicit preferences (such as liking math versus history) would be removed and instead we’d work with a more fundamental utility function. For instance, death is pretty much a universal stopping point, since you cannot gain any utility if you’re dead, regardless of your function. This would, in a sense, be the basis of your utility function. We also find that death is better avoided when society works together and develops new technology. Your actions then might be dictated by what you are best at doing to facilitate the functioning and growth of society. This is why I brought up society-damaging actions as being potentially objectively worse. You might be able to come up with specific instances of actions we think of as society-damaging that seem okay, such as specific instances of stealing, but then they aren’t really society-damaging in the grand scheme of things. That said, I think that as a rule of thumb stealing is bad in most cases due to the ripple effects of living in a society in which people do that, but that’s another discussion. The point is there may be objectively better choices even if you have no explicit preferences for things (or you can choose your preferences).
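A toy version of the “death as universal stopping point” idea, again with invented numbers: whatever your per-step payoff is, an absorbing death state scales it by the probability of still being around to collect it, so survival gets weighted heavily under pretty much any function.

```python
# Sketch of death as an absorbing, zero-utility state. The numbers are
# invented; the per-step payoff could come from any utility function.

def expected_lifetime_utility(p_survive_per_step, per_step_payoff, horizon):
    """Sum of per-step payoffs, each weighted by the probability
    of still being alive to collect it."""
    alive, total = 1.0, 0.0
    for _ in range(horizon):
        alive *= p_survive_per_step
        total += alive * per_step_payoff
    return total

# Same per-step payoff, different riskiness: the cautious policy dominates.
print(expected_lifetime_utility(0.99, 1.0, 50))  # roughly 39
print(expected_lifetime_utility(0.90, 1.0, 50))  # roughly 9
```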
Of course, that’s all conditioned on whether you can choose your utility function. For our purposes, and for the foreseeable future, that is not the case, so you should stick with maximizing expected utility under your current function.
Hm. If people have approximately equivalent utility functions, does that help them all satisfy those functions better? If so, it makes sense for none of them to value stealing (since having them all value stealing could be a problem). In a large enough society, though, the ripple effect of my theft is negligible.
That’s beside the point, though.
“Avoid death” seems like a pretty good basis for a utility function. I like that.
Yeah, I agree that the ripple effect of your personal theft would be negligible. I see it as similar to littering: do it in a vacuum and it’s no big deal, but when many people have that mentality, it causes problems. Sounds like you agree too :-)