Any moral framework that doesn’t acknowledge tradeoffs is broken. The interesting questions aren’t “should we save this person if it doesn’t cost anything?”—truly, that’s trivial, as you say. The interesting ones, which authors and ethicists try to address even if they’re not all that clear about it, are: “who should suffer, and by how much, in order to let this person live better/longer?”
How much environmental damage and sweatshop labor goes into “extreme” medical interventions? How much systemic economic oppression is required to have enough low-paid nurses to work the night shift? How many 6-year-olds could have better nutrition and a significantly better life for the cost of extending a 95-year-old’s life to 98 years?
I’m delighted that humanity is getting more efficient, able to support more people more fully than was even imaginable in the recent past. It’s good to be rich. But there’s still a limit, and there will always be a frontier where tradeoffs have to be made.
I currently hold the opinion (weakly, but it’s been growing on me) that resource allocation is the only thing that has any moral weight.
Any moral framework that doesn’t acknowledge tradeoffs is broken. The interesting questions aren’t “should we save this person if it doesn’t cost anything?”—truly, that’s trivial, as you say. The interesting ones, which authors and ethicists try to address even if they’re not all that clear about it, are: “who should suffer, and by how much, in order to let this person live better/longer?”
The problem comes in when people start inventing imaginary tradeoffs, purely out of a sense that there must be tradeoffs—and, critically, then use the existence of those (alleged) tradeoffs as a reason to simply reject the proposal.
And then you have the flaw in reasoning that I described in this old comment:
I think that this post conflates two issues, and is an example of a flaw of reasoning that goes like this:
Alice: It would be good if we could change [thing X].
Bob: Ah, but if we changed X, then problems A, B, and C would ensue! Therefore, it would not be good if we could change X.
Bob is confusing the desirability of the change with the prudence of the change. Alice isn’t necessarily saying that we should make the change she’s proposing. She’s saying it would be good if we could do so. But Bob immediately jumps to examining what problems would ensue if we changed X, decides that changing X would be imprudent, and concludes from this that it would also be undesirable.
[…]
I think that Bob’s mistake is rooted in the fact that he is treating Alice’s proposal as, essentially, a wish made to a genie. “Oh great genie,” says Alice, “please make it so that death is no more!” Bob, horrified, stops Alice before she can finish speaking, and shouts “No! Think of all the ways the words of your wish can be twisted! Think of the unintended consequences! You haven’t considered the implications! No, Alice, you must not make such grand wishes of a genie, for they will inevitably go awry.”
I fully agree with both of your points—people can mis-estimate the tradeoffs in either direction (assuming there are none, as EY does in this post, and assuming they’re much larger than they are, as you say). And people confuse desirability of an outcome with desirability of the overall effect of a policy/behavior/allocation change.
Neither of these changes my main point: the hard part is figuring out and acknowledging the actual tradeoffs and the paths from the current state to a preferred possible state, not just identifying imaginary-but-impossible worlds we’d prefer.
I do not read Eliezer as claiming that there are no tradeoffs. Rather, his aim is to establish the desirability of indefinite life extension in the first place! Once we’re all agreed on that, then we can talk tradeoffs.
And, hey, maybe we look at the tradeoffs and decide that nah, we’re not going to do this. Yet. For now. With sadness and regret, we shelve the idea, being always ready to come back to it, as soon as our technology advances, as soon as we have a surplus of resources, as soon as anything else changes…
Whereas if we just shake our heads and dismissively say “don’t you know that tradeoffs exist”, and end the discussion there, then we’re never going to live for a million years.
But on the other hand, maybe we look at the tradeoffs and decide that, actually, life extension is worth doing, right now! How will we know, unless we actually try and figure it out? And why would we do that, unless we first agree that it’s desirable? That is what Eliezer is trying to convince readers of, in this essay.
For example, you say:
How much environmental damage and sweatshop labor goes into “extreme” medical interventions? How much systemic economic oppression is required to have enough low-paid nurses to work the night shift? How many 6-year-olds could have better nutrition and a significantly better life for the cost of extending a 95-year-old’s life to 98 years?
Well? How many? These are fine questions. What are the answers?
Quoting myself once again:
The view here on LessWrong, on the other hand, treats Alice’s proposal as an engineering challenge. … Once you properly distinguish the concepts of desirability and prudence, you can treat problems with your proposal as obstacles to overcome, not reasons not to do it.
(One important effect of actually trying to answer specific questions about tradeoffs, like the ones you list, is that once you know exactly what the tradeoffs are, you can also figure out what needs to change in order to shift the tradeoffs in the right direction and by the right amount, to alter the decision. And then you can start doing what needs to be done, to change those things!)
I don’t claim to actually know the answers, or even how I’d figure out the answers. I merely want to point out that it’s not simple, and saying “sometimes it’s easy” without acknowledging that “sometimes it’s hard” and “knowing the difference is hard” is misleading and unhelpful.
Interestingly, fighting aging wins out in this logic against many other causes. For example, if we assume that giving Aubrey de Grey 1 trillion dollars is enough to solve aging (ok, I know, but), then this implies that 10 billion people will be saved from death, and the cost per saved life is around 100 USD. There are some obvious caveats, but life extension research still seems to be one of the most cost-effective interventions.
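A quick back-of-the-envelope check of that arithmetic (both inputs are the assumptions stated above, not established figures):

```python
# Sanity check of the cost-per-life arithmetic above; both numbers are
# the comment's assumptions, not established estimates.
total_cost_usd = 1e12          # assumed: $1 trillion to "solve aging"
lives_saved = 10e9             # assumed: ~10 billion people saved from death by aging
cost_per_life = total_cost_usd / lives_saved
print(f"${cost_per_life:.0f} per life saved")   # -> $100 per life saved
```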
I don’t have a strong opinion on whether fewer longer-lived individuals or more shorter-lived individuals are preferable, in the case where total desirable-experience-minutes before the heat death of the universe are conserved. I honestly can’t tell you whether spending the trillion on convincing the expensive elderly to die more gracefully would be a better improvement. To the extent that it’s not my $trillion in the first place, I fall back to “simple is easier to justify, so prefer the obvious” and voice my support for life extension.
However, I don’t actually spend the bulk of my money on life extension. I spend it primarily on personal and family/close-friend comfort and happiness, and secondarily on more immediate palliative (helping existing persons in the short/medium term) and threat-reduction (environmental and long-term species capability) charities. Basically, r-strategy: we have a lot of humans, so it’s OK if many suffer and die, as long as the quantity and median quality are improving.
The interesting objection is that any short-lived person will be unhappy because she will fear death, thus the minutes are inferior.
Anyway, I think that using “happy minutes” as a measure of good social policy suffers from Goodhart-like problems in extreme cases like life extension.
The interesting objection is that any short-lived person will be unhappy because she will fear death, thus the minutes are inferior.
Ok, so it requires MORE minutes to be net equal. This is exactly the repugnant conclusion, and I don’t know how to resolve whether my intuition about desirability is wrong, or whether every believable aggregation of value is wrong. I lean toward the former, and that leans me toward accepting that the repugnancy is an error, and the conclusion is actually better.
using “happy minutes” as a measure of good social policy suffers from Goodhart-like problems in extreme cases like life extension.
Perhaps. To the extent that “happy experience minutes” is a proxy for what we really want, it will diverge at scale. If it _really is_ what we want, Goodhart doesn’t apply. Figuring out the points where it starts to diverge is one good way of understanding the thing we’re actually trying to optimize in the universe.
How is that the repugnant conclusion? It seems like the exact opposite of the repugnant conclusion to me. (That is, it is a strong argument against creating a very large number of people with very little [resources/utility/etc.].)
Maybe I misunderstood. Your statement that shorter-lived individuals get lower-quality minutes of experience implied to me that there would have to be more individuals to reach equal total happiness. And if this can extend, it leads to maximizing the number of individuals with minimally-positive experience value.
My best guess is there’s a declining marginal value to spending resources on happiness or quantity at either extreme (that is, making a small number of very happy entities slightly happier rather than slightly more numerous will be suboptimal, _AND_ making a large number of barely-happy entities slightly more numerous as opposed to slightly happier will be suboptimal). Finding the crossover point will be the hard problem to solve.
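As a toy illustration of that crossover (the shape of the happiness function here is entirely my own assumption, purely for illustration): split a fixed resource budget evenly among n entities and make per-entity happiness concave in its share; total value then peaks at an interior population size rather than at either extreme.

```python
import numpy as np

# Toy model, not anyone's actual values: budget R split evenly among n entities,
# per-entity happiness h(share) = log(share) (concave), total value = n * h(R/n).
# The total peaks at an interior n = R / e: neither a single utility monster
# (n = 1) nor the maximum possible headcount is optimal.
R = 1000.0
n = np.arange(1, 2001)
total_value = n * np.log(R / n)      # shares below 1 give lives "not worth living"
best_n = n[np.argmax(total_value)]
print(best_n, round(R / np.e))       # both ~368: the crossover point
```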
First, the grandfather was my first comment in this tree. Check the usernames.
Second, the repugnant conclusion can indeed be applied here, but the idea itself isn’t the repugnant conclusion. In fact, if the number of people-minutes is limited, and the value of a person-minute is proportional to the length of the life that contains that minute, shouldn’t that lead to the Antirepugnant Conclusion (there should only be one person)?
...wait, I just rederived utility monsters, didn’t I.
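A tiny numerical check of that reasoning (the quadratic valuation is just my formalization of “a person-minute is worth the length of the life that contains it”, assumed for illustration): with total person-minutes fixed, the summed value is maximized by giving all the minutes to one person.

```python
# With total person-minutes M fixed and each minute valued in proportion to the
# length of the life containing it, total value is sum(L_i ** 2), and convexity
# pushes the optimum toward a single person. Illustrative only.
M = 1_000_000

def total_value(lifespans):          # lifespans are person-minutes summing to M
    return sum(L * L for L in lifespans)

print(total_value([M]))              # one person:   M^2   = 1,000,000,000,000
print(total_value([M // 2] * 2))     # two people:   M^2/2 =   500,000,000,000
print(total_value([1] * M))          # M one-minute people:         1,000,000
```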
…wait, I just rederived utility monsters, didn’t I.
Looks like. Which implies the optimum is somewhere between one immortal super-entity using all the resources of the universe and 10^55 3-gram distinct entities who barely appreciate their existence before being replaced by another.
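For what it’s worth, the 10^55 figure is about right as a ballpark if we take a commonly cited rough estimate of the mass of ordinary matter in the observable universe (that estimate is my assumption here, not something claimed above):

```python
# Order-of-magnitude check of the 10^55 figure. The mass value is a commonly
# cited rough estimate for ordinary (baryonic) matter in the observable
# universe; treat it as an assumption for illustration only.
ordinary_matter_kg = 1.5e53
entity_mass_kg = 0.003               # one 3-gram entity
n_entities = ordinary_matter_kg / entity_mass_kg
print(f"{n_entities:.0e}")           # ~5e+55, consistent with the 10^55 ballpark
```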
Whether it’s beneficial to increase or decrease from the current size/duration of entities, I don’t know. My intuition is that I would prefer to live longer and be smarter, even at the cost of others, especially others not coming into existence. I have the opposite reaction when asked if I’d give up my organs today (killing me) to extend others’ lives by more, in aggregate, than mine is cut short.
Calling it trivial or saying “sometimes the obvious answer is right” is simply a mistake. The obvious answer is highly suspect.
Could you explain what you mean by resource allocation? Certainly there’s a lot of political and public-opinion resistance to any new technology that would help the rich and not the poor. I think that stems from the thought that it would give the rich even more incentive to increase inequality (a view to which I’m sympathetic), but I don’t see how it would imply that only the distribution of wealth is important...
(Sorry, we are still working on calibrating the spam system, and this somehow got marked as spam. I fixed it, and we have a larger fix coming later today that should stop the spam problems overall.)
I do not mean “wealth” when I talk about resource allocation. I mean actual real stuff: how much heat is generated on one’s behalf, how much O2 is transformed to CO2 per unit time for whom, who benefits from a given square-meter-second of sunlight energy, etc. Just as importantly: how much attention and motivated work does one get from other people, and how much of their consumed resources benefits whom?
Money is a very noisy measure of this, and is deeply misleading when applied to any long-term goals.