Of course you can make moral decisions without going through such calculations. We all do that all the time. But the whole issue with infinite ethics—the thing that a purported system for handling infinite ethics needs to deal with—is that the usual ways of formalizing moral decision processes produce ill-defined results in many imaginable infinite universes. So when you propose a system of infinite ethics and I say “look, it produces ill-defined results in many imaginable infinite universes”, you don’t get to just say “bah, who cares about the details?” If you don’t deal with the details you aren’t addressing the problems of infinite ethics at all!
Well, I can’t say I exactly disagree with you here.
However, I want to note that this isn’t a problem specific to my ethical system. It’s true that in order to use my ethical system to make precise moral verdicts, you need a fuller formalization of probability theory. But the same is true of effectively every other ethical theory.
For example, consider someone learning about classical utilitarianism and its applications in a finite world. Then they could argue:
Okay, I see your ethical system says to make the balance of happiness over unhappiness as high as possible. But how am I supposed to know what the world is actually like and what the effects of my actions are? Do other animals feel happiness and unhappiness? Is there actually a heaven and a hell that would influence moral choices? This ethical system doesn’t answer any of this. You can’t just handwave this away! If you don’t deal with the details you aren’t addressing the problems of ethics at all!
Also, I just want to note that my system as described seems to be unique among the infinite ethical systems I’ve seen in that it doesn’t make obviously ridiculous moral verdicts. Every other one I know of makes some recommendations that seem really silly. So, despite not providing a rigorous formalization of probability theory, I think my ethical system has value.
But what you actually want (I think) isn’t quite a probability distribution over universes; you want a distribution over experiences-in-universes, and not your experiences but those of hypothetical other beings in the same universe as you. So now think of the programs you’re working with as describing not your experiences necessarily but those of some being in the universe, so that each update is weighted not by Pr(I have experience X | my experiences are generated by program P) but by Pr(some subject-of-experience has experience X | my experiences are generated by program P), with the constraint that it’s meant to be the same subject-of-experience for each update. Or maybe by Pr(a randomly chosen subject-of-experience has experience X | my experiences are generated by program P) with the same constraint.
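To make the difference concrete, here is a toy sketch with entirely made-up numbers of how those two weightings can come apart for a single hypothetical program P whose universe contains three subjects-of-experience:

```python
# Toy sketch, purely illustrative: one hypothetical program P whose universe contains
# three subjects-of-experience; which of them have experience X is made up.
subjects_have_X = [True, False, False]

# Pr(some subject-of-experience has experience X | my experiences are generated by P)
pr_some = 1.0 if any(subjects_have_X) else 0.0

# Pr(a randomly chosen subject-of-experience has experience X | my experiences are generated by P)
pr_random = sum(subjects_have_X) / len(subjects_have_X)

print(pr_some, pr_random)  # 1.0 vs. 0.333...: the two weightings can differ sharply
```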
Actually, no, I really do want a probability distribution over what I would experience, or more generally, the situations I’d end up being in. The alternatives you mentioned, Pr(some subject-of-experience has experience X | my experiences are generated by program P) and Pr(a randomly chosen subject-of-experience has experience X | my experiences are generated by program P), both lead to problems for the reasons you’ve already described.
I’m not sure what made you think I didn’t mean P(I have experience X | …). Could you explain?
We’re concerned about infinitarian paralysis, where we somehow fail to deliver a definite answer because we’re trying to balance an infinite amount of good against an infinite amount of bad. So far as I can see, your system still has this problem. E.g., if I know there are infinitely many people with various degrees of (un)happiness, and I am wondering whether to torture 1000 of them, your system is trying to calculate the average utility in an infinite population, and that simply isn’t defined.
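To see why that average is ill-defined, here is a quick sketch with a made-up population, half of whose members have utility +1 and half −1: the running average you get depends entirely on the order in which you enumerate the agents.

```python
# Quick illustrative sketch (made-up population): infinitely many agents with utility +1
# and infinitely many with -1. The limit of the running average depends on enumeration order.

def running_average(utilities):
    total = 0.0
    for n, u in enumerate(utilities, start=1):
        total += u
    return total / n

# Enumeration A: +1, -1, +1, -1, ...         -> running average tends to 0
enum_a = [1 if i % 2 == 0 else -1 for i in range(90000)]

# Enumeration B: +1, +1, -1, +1, +1, -1, ... -> running average tends to 1/3
enum_b = [1, 1, -1] * 30000

print(running_average(enum_a))  # ~0.0
print(running_average(enum_b))  # ~0.333
```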
My system doesn’t compute the average utility of anything. Instead, it tries to compute the expected value of utility (or life satisfaction). I’m sorry if this was somehow unclear. I didn’t think I ever mentioned I was dealing with averages anywhere, though. I’m trying to get better at writing clearly, so if you remember what made you think this, I’d appreciate hearing.
I’ll begin at the end: What is “the expected value of utility” if it isn’t an average of utilities?
You originally wrote:
suppose you had no idea which agent in the universe it would be, what circumstances you would be in, or what your values would be, but you still knew you would be born into this universe. Consider having a bounded quantitative measure of your general satisfaction with life, for example, a utility function. Then try to make the universe such that the expected value of your life satisfaction is as high as possible if you conditioned on you being an agent in this universe, but didn’t condition on anything else.
What is “the expected value of your life satisfaction […] conditioned on you being an agent in this universe but [not] on anything else” if it is not the average of the life satisfactions (utilities) over the agents in this universe?
(The slightly complicated business with conditional probabilities that apparently weren’t what you had in mind was my attempt at figuring out what else you might mean. Rather than trying to figure it out, I’m just asking you.)
I’ll begin at the end: What is “the expected value of utility” if it isn’t an average of utilities?
I’m just using the regular notion of expected value. That is, let P(u) be the probability density of getting utility u. Then the expected value of utility is ∫_[a,b] u P(u) du, where the integral is a Lebesgue integral for greater generality and utility is taken to be bounded in [a, b].
Also note that my system cares about a measure of life satisfaction rather than utility specifically; in that case, just take P(u) to be the density of that measure of life satisfaction instead.
Also, of course, P(u) is calculated conditional on being an agent in this universe, and on nothing else.
And how do you calculate P(u), given the above? Well, one way is to start with a prior probability distribution over mutually exclusive hypotheses about which universe and situation you could be in, where the situations are concrete enough to determine your eventual life satisfaction. Then do a Bayesian update on “is an agent in this universe” by setting to zero the probability of every hypothesis in which the agent isn’t in this universe or doesn’t have preferences, and renormalize the remaining probabilities so they sum to 1. After that, you can use the resulting distribution over possible worlds W to calculate P(u) in a straightforward manner, e.g. P(utility = u) = ∫_W P(utility = u | W) dP(W).
(I know I pretty much mentioned the above calculation before, but I thought rephrasing it might help.)
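Here is a minimal sketch of that calculation, using a toy prior over a few hypothetical world-and-situation pairs; the probabilities and satisfaction values are made up purely for illustration.

```python
# Minimal sketch of the calculation above; the prior and the satisfaction values are made
# up purely for illustration. Each hypothesis is a concrete "world + situation" pair:
# (prior probability, is an agent in this universe?, life satisfaction in [0, 1]).
hypotheses = [
    (0.4, True,  0.9),   # e.g. you end up as a well-off agent in this universe
    (0.3, True,  0.2),   # e.g. you end up as a badly-off agent in this universe
    (0.3, False, None),  # you aren't an agent in this universe at all
]

# Bayesian update on "is an agent in this universe": zero out the hypotheses in which
# that's false, then renormalize so the remaining probabilities sum to 1.
kept = [(p, s) for (p, is_agent, s) in hypotheses if is_agent]
total = sum(p for p, _ in kept)
posterior = [(p / total, s) for (p, s) in kept]

# Expected life satisfaction, conditional on being an agent in this universe
# (the discrete analogue of the integral over possible worlds W).
expected_satisfaction = sum(p * s for p, s in posterior)
print(expected_satisfaction)  # (0.4/0.7)*0.9 + (0.3/0.7)*0.2 ≈ 0.6
```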
If you are just using the regular notion of expected value then it is an average of utilities. (Weighted by probabilities.)
I understand that your measure of satisfaction need not be a utility as such, but “utility” is shorter than “measure of satisfaction which may or may not strictly speaking be utility”.
Oh, I’m sorry; I misunderstood you. When you said the average of utilities, I thought you meant utility averaged across all the different agents in the world. Instead, it’s just, roughly, an average over the probability density of utility. I say roughly because I guess integration isn’t exactly an average.