Under my error model, you run into trouble when you treat every transfinite amount the same. From that perspective, recognising two transfinite amounts that could be different is progress.
I guess this is the part I don’t really understand. My infinite ethical system doesn’t even think about transfinite quantities. It only considers the prior probability over ending up in situations, which is always real-valued. I’m not saying you’re wrong, of course, but I still can’t see any clear problem.
Another attempt to throw a situation you might not be able to handle. Instead of having two infinite groups of unknown relative size all receiving the same bad thing, suppose that as compensation for the abuse the first group gets one slice of cake and the second group gets two slices. Could there be a difference in group size that perfectly balances the cake-slice difference, so as to keep the cake expectation constant?
Are you asking if there is a way to simultaneously change the group size as well as change the relative amount of cake for each group so the expected number of cakes received is constant?
If this is what you mean, then my system can deal with this. First off, remember that my system doesn’t worry about the number of agents in a group, but instead merely cares about the probability of an agent ending up in that group, conditioning only on being in this universe.
By changing the group size, however you define it, you can affect the probability of ending up in that group. To see why, suppose you can do something that adds any agent matching a certain situation-description to the group. As long as that situation has a finite description length, the probability of ending up in it is non-zero, so stopping those agents from being in that situation would decrease the probability of you ending up in that group.
So, currently, the expected value of cake received from these situations is P(in first group) * 1 + P(in second group) * 2. (For simplicity, I’m assuming no one else in the universe gets cake.) So, if you increase the number of cakes received by the second group by u, you just need to decrease P(in first group) by u * P(in second group) to keep the expectation constant.
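A quick numeric check of this constancy condition (the probabilities and the value of u here are made up purely for illustration):

```python
# Expected cake before: E = p1 * 1 + p2 * 2, where p1 and p2 are the
# probabilities of ending up in the first and second group.
# These numbers are invented for illustration.
p1, p2 = 0.3, 0.2
e_before = p1 * 1 + p2 * 2

# Give the second group u extra slices each; to compensate, shift
# probability mass u * p2 out of the first group.
u = 0.5
p1_new = p1 - u * p2
e_after = p1_new * 1 + p2 * (2 + u)

# The expectation is unchanged.
assert abs(e_before - e_after) < 1e-12
```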
An additional challenging situation. Instead of giving one or two slices of cake, say that each slice is 3 cm wide, so the original choice is between 3 cm of cake and 6 cm of cake. Now take some custom slice width (say 2.7 cm) and determine the group size that would keep the world’s cake expectation the same. Then add one person to that group. Then convert that back to a slice width that keeps the cake expectation the same. How wide is the slice?
If literally only one more person gets cake, even considering acausal effects, then this would in general not affect the expected value of cake. So the slice would still be 2.7 cm.
Now, perhaps you meant that you directly cause one more person to get cake, resulting acausally in infinitely-many others getting cake. If so, then here’s my reasoning:
Previously, the expected value of cake received from these situations was P(in first group) * 1 + P(in second group) * 2. Since the cake size is non-constant, let’s add a variable: P(in first group) * u + P(in second group) * 2. I’m assuming only the 1-slice group gets its cake amount adjusted; you can generalize beyond this. u represents the amount of cake the first group gets, with one 3 cm slice represented as 1.
Suppose adding the extra person acausally increases the probability of ending up in the first group by $\epsilon$. Then, to avoid changing the expected value of cake, we need $p \cdot 1 = (p + \epsilon) \cdot u$, where $p$ is the old probability of being in the first group.
Solve that, and you get $u = p / (p + \epsilon)$. Just plug in the exact numbers for how much adding the person changes the probability of ending up in the group, and you get an exact slice width.
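To make the computation concrete, here is a sketch with invented numbers for the old probability p and the acausal bump ε (both are assumptions; the formula and the 3 cm slice size come from above):

```python
# Old probability of being in the first (adjusted) group, and the acausal
# increase epsilon from directly causing one more person to get cake.
# Both values are made up for illustration.
p_old = 0.25
eps = 0.01

# Constancy condition: p_old * 1 == (p_old + eps) * u
u = p_old / (p_old + eps)

# u is in units of one 3 cm slice, so the adjusted slice width is:
width_cm = 3 * u

# The new slice is slightly narrower than the original 3 cm slice.
assert 0 < width_cm < 3
```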
Another formulation of the same challenge: Define a real number r for which converting that to a group size would get you a group of 5 people.
I’m not sure what you mean here. What does it mean to convert a real number to a group size? One trivial way to interpret this is that the answer is 5: if you convert 5 to a group size, I guess(?) that means a group of five people. So, there you go, the answer would be 5. I take it this isn’t what you meant, though.
Did you get on board about the difference between “help all the stars” and “all the stars as they could have been”?
No, I’m still not sure what you mean by this.
Thanks for the response.
In an infinite universe, there’s already infinitely-many people, so I don’t think this applies to my infinite ethical system.
In a finite universe, I can see why those verdicts would be undesirable. But in an infinite universe, there’s already infinitely-many people at all levels of suffering. So, according to my own moral intuition at least, it doesn’t seem that these are bad verdicts.
You might have differing moral intuitions, and that’s fine. If you do have an issue with this, you could potentially modify my ethical system to make it an analogue of total utilitarianism. Specifically, consider the probability distribution something would have if it conditioned on ending up somewhere in this universe, but didn’t even know whether it would be an actual agent with preferences. That is, it uses some prior that allows for the possibility of ending up as a preference-free rock or something. Also, make sure the measure of life satisfaction treats existences with neutral welfare, and the existences of things without preferences, as zero. Now, simply modify my system to maximize the expected value of life satisfaction given this prior. That’s my total-utilitarianism-infinite-analogue ethical system.
So, to give an example of how this works, consider the situation in which you can torture one person to avoid creating a large number of people with pretty decent lives. The large number of people with pretty decent lives would increase the moral value of the world, because creating those people makes it more likely, under that prior, that something would end up as an agent with positive life satisfaction rather than as some inanimate object, conditioning only on being something in this universe. But adding a tortured creature would only decrease the moral value of the universe. Thus, this total-utilitarian-infinite-analogue ethical system would prefer creating the large number of people with decent lives to torturing the one creature.
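The torture-versus-decent-lives comparison can be sketched numerically. All of the probabilities and satisfaction values below are invented for illustration; the only stipulations from above are that the prior ranges over "things" (including preference-free objects) and that non-agents count as satisfaction zero:

```python
# Toy sketch of the total-utilitarianism analogue. The prior is over
# "things" in the universe, including preference-free objects (rocks),
# which count as satisfaction 0. All numbers are invented.
satisfaction = {"rock": 0.0, "decent_life": 0.5, "tortured": -10.0}

def moral_value(prior):
    # Expected life satisfaction under the given prior.
    return sum(p * satisfaction[kind] for kind, p in prior.items())

base = moral_value({"rock": 0.90, "decent_life": 0.08, "tortured": 0.02})

# Creating many decent lives shifts probability mass from "rock" to
# "decent_life", raising the expectation...
more_people = moral_value({"rock": 0.85, "decent_life": 0.13, "tortured": 0.02})

# ...while adding a tortured creature shifts mass to "tortured", lowering it.
more_torture = moral_value({"rock": 0.89, "decent_life": 0.08, "tortured": 0.03})

assert more_people > base > more_torture
```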
Of course, if you accept this system, then you have to find a way to deal with the repugnant conclusion, just as you need to deal with it under regular total utilitarianism in a finite universe. I’ve yet to see any satisfactory solution to the repugnant conclusion. But if there is one, I bet you could extend it to this total-utilitarian-infinite-analogue ethical system. This is because this ethical system is a lot like regular total utilitarianism, except it replaces “total number of creatures with satisfaction x” with “total probability mass of ending up as a creature with satisfaction x”.
Given the lack of a satisfactory solution to the repugnant conclusion, I prefer the idea of just sticking with my average-utilitarianism-like infinite ethical system. But I can see why you might have different preferences.