How is that the repugnant conclusion? It seems like the exact opposite of the repugnant conclusion to me. (That is, it is a strong argument against creating a very large number of people with very little [resources/utility/etc.].)
Maybe I misunderstood. Your statement that shorter-lived individuals get less quality out of each minute of experience implied to me that there would have to be more individuals to reach the same total happiness. And if that extends, it leads to maximizing the number of individuals with minimally-positive experience value.
My best guess is that there’s declining marginal value to spending resources on happiness or on quantity at either extreme (that is, making a small number of very happy entities slightly happier rather than slightly more numerous is suboptimal, _AND_ making a large number of barely-happy entities slightly more numerous rather than slightly happier is also suboptimal). Finding the crossover point will be the hard problem to solve.
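A minimal sketch of that crossover, under stand-in assumptions (fixed resources R split evenly among n entities, happiness concave in each entity’s share via log, and a per-entity subsistence cost; none of these numbers come from the discussion):

```python
import math

R = 1000.0         # total resources to divide (arbitrary stand-in value)
SUBSISTENCE = 1.0  # resources an entity consumes just to exist (assumption)

def total_welfare(n: int) -> float:
    """Total welfare when n entities split R evenly.

    Happiness is concave (log) in the share above subsistence, so both
    extremes -- one very happy entity, or many barely-happy ones -- score
    worse than some interior population size.
    """
    surplus = R / n - SUBSISTENCE
    if surplus <= 0:
        return float("-inf")  # below subsistence: not a viable population
    return n * math.log(1 + surplus)

best_n = max(range(1, 1001), key=total_welfare)
print(best_n, total_welfare(best_n))  # ~368 entities: neither 1 nor 1000
```

With these stand-in numbers the optimum lands near n = R/e ≈ 368, far from both the single utility monster and the maximally numerous barely-happy population. Any concave happiness function plus a per-entity overhead gives the same qualitative interior crossover; where exactly it falls is the hard part.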
First, the grandfather was my first comment in this tree. Check the usernames.
Second, the repugnant conclusion can indeed be applied here, but the idea itself isn’t the repugnant conclusion. In fact, if the number of people-minutes is limited, and the value of a person-minute is proportional to the length of the life that contains that minute, shouldn’t that lead to the Antirepugnant Conclusion (there should only be one person)?
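Spelling out that derivation (just the stated assumptions made explicit): with a fixed budget of T person-minutes split evenly among N people, each life lasts T/N minutes, and each minute is worth something proportional to T/N, so total value ∝ N · (T/N) · (T/N) = T²/N. That is strictly decreasing in N, so it is maximized at N = 1.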
…wait, I just rederived utility monsters, didn’t I.
Looks like. Which implies the optimum is somewhere between one immortal super-entity using all the resources of the universe and 10^55 three-gram distinct entities, each barely appreciating its existence before being replaced by another.
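(A back-of-the-envelope check on that figure, assuming the commonly cited ~10^53 kg of ordinary matter in the observable universe: that’s 10^56 grams, and 10^56 / 3 ≈ 3 × 10^55 three-gram entities, so the order of magnitude works out.)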
Whether it’s beneficial to increase or decrease from the current size/duration of entities, I don’t know. My intuition is that I would prefer to live longer and be smarter, even at the cost of others, especially others who never come into existence. I have the opposite reaction when asked whether I’d give up my organs today (killing me) to extend others’ lives by more, in aggregate, than mine is cut short.
Calling it trivial or saying “sometimes the obvious answer is right” is simply a mistake. The obvious answer is highly suspect.