I’ll begin at the end: What is “the expected value of utility” if it isn’t an average of utilities?
You originally wrote:
suppose you had no idea which agent in the universe it would be, what circumstances you would be in, or what your values would be, but you still knew you would be born into this universe. Consider having a bounded quantitative measure of your general satisfaction with life, for example, a utility function. Then try to make the universe such that the expected value of your life satisfaction is as high as possible if you conditioned on you being an agent in this universe, but didn’t condition on anything else.
What is “the expected value of your life satisfaction [] conditioned on you being an agent in this universe but [not] on anything else” if it is not the average of the life satisfactions (utilities) over the agents in this universe?
(The slightly complicated business with conditional probabilities, which apparently wasn’t what you had in mind, was my attempt at figuring out what else you might mean. Rather than keep trying to figure it out, I’m just asking you.)
I’ll begin at the end: What is “the expected value of utility” if it isn’t an average of utilities?
I’m just using the regular notion of expected value. That is, let P(u) be the probability density of getting utility u. Then the expected value of utility is ∫_[a,b] u·P(u) du, where ∫ uses Lebesgue integration for greater generality. Above, I take utility to be in [a, b].
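For concreteness, here is a small numerical sketch of that integral. The bounds [a, b] = [0, 1] and the Beta(2, 5) shape for P(u) are invented purely so the example has concrete numbers; any density on a bounded interval would do.

```python
from scipy import integrate, stats

# Hypothetical bounded utility range [a, b] and a stand-in density P(u);
# the Beta(2, 5) shape is chosen only so the example has concrete numbers.
a, b = 0.0, 1.0
density = stats.beta(2, 5).pdf

# E[u] = ∫_[a,b] u · P(u) du
expected_utility, _ = integrate.quad(lambda u: u * density(u), a, b)
print(expected_utility)  # ≈ 2/7 ≈ 0.286 for a Beta(2, 5) density
```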
Also note that my system cares about a measure of satisfaction rather than specifically utility. In that case, just take P(u) to be the probability density of that measure of life satisfaction instead of utility.
Also, of course, P(u) is calculated by conditioning on being an agent in this universe, and on nothing else.
And how do you calculate P(u) given the above? Well, one way is to first start with a prior probability distribution over disjoint hypotheses about which universe and which situation you could be in, where the situations are concrete enough to determine your eventual life satisfaction. Then just do a Bayes update on “is an agent in this universe” by setting to zero the probabilities of the hypotheses in which the agent isn’t in this universe or doesn’t have preferences. Then just renormalize the remaining probabilities so they sum to 1. After that, you can use this probability distribution over possible worlds W to calculate P(u) in a straightforward manner, e.g. P(u) = ∫_W P(utility = u | W) dP(W).
(I know I pretty much mentioned the above calculation before, but I thought rephrasing it might help.)
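To make the calculation above concrete, here is a small discrete sketch of it. Every world, prior weight, and conditional utility distribution below is invented purely for illustration; the point is only the shape of the computation: zero out the excluded hypotheses, renormalize, mix the per-world utility distributions to get P(u), then take the expectation.

```python
import numpy as np

# Invented prior over disjoint hypotheses W1..W4 about which world/situation
# you are in.
prior = np.array([0.4, 0.3, 0.2, 0.1])

# Which hypotheses survive the update "is an agent in this universe (and has
# preferences)"?  These flags are made up for the example.
compatible = np.array([True, True, False, True])

# Bayes update: zero out the excluded hypotheses, then renormalize.
posterior = prior * compatible
posterior = posterior / posterior.sum()

# Invented per-world distributions P(utility = u | W) over a small utility grid.
utilities = np.array([0.0, 0.5, 1.0])
p_u_given_w = np.array([
    [0.2, 0.5, 0.3],   # W1
    [0.1, 0.6, 0.3],   # W2
    [0.3, 0.3, 0.4],   # W3 (gets zero weight after the update anyway)
    [0.5, 0.4, 0.1],   # W4
])

# P(u) = sum_W P(u | W) P(W): the discrete analogue of ∫_W P(utility = u | W) dP(W).
p_u = posterior @ p_u_given_w

# Expected utility: sum_u u · P(u).
expected_utility = utilities @ p_u
print(p_u, expected_utility)  # ≈ 0.54 with these made-up numbers
```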
If you are just using the regular notion of expected value, then it is an average of utilities. (Weighted by probabilities.)
I understand that your measure of satisfaction need not be a utility as such, but “utility” is shorter than “measure of satisfaction which may or may not strictly speaking be utility”.
Oh, I’m sorry; I misunderstood you. When you said the average of utilities, I thought you meant the utility averaged over all the different agents in the world. Instead, it’s just, roughly, an average of utility weighted by its probability density. I say roughly because I guess integration isn’t exactly an average.
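As a tiny illustration of that distinction (all numbers invented): a probability-weighted average of utilities generally differs from the unweighted average over the same set of utility values.

```python
# Invented numbers: three utility levels and the probability (conditioned only
# on being an agent in this universe) of ending up at each of them.
utilities = [0.2, 0.5, 0.9]
probabilities = [0.7, 0.2, 0.1]

# Expected utility: an average of utilities weighted by probabilities.
expected = sum(u * p for u, p in zip(utilities, probabilities))

# Unweighted average of the same utility values, for contrast.
unweighted = sum(utilities) / len(utilities)

print(expected, unweighted)  # 0.33 vs. ≈0.53
```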