Any moral framework that doesn’t acknowledge tradeoffs is broken. The interesting questions aren’t of the form “should we save this person if it doesn’t cost anything?”; as you say, that’s trivial. The interesting ones, which authors and ethicists try to address (even if not all that clearly), are of the form “who should suffer, and by how much, in order to let this person live better/longer?”
The problem comes in when people start inventing imaginary tradeoffs, purely out of a sense that there must be tradeoffs—and, critically, then use the existence of those (alleged) tradeoffs as a reason to simply reject the proposal.
And then you have the flaw in reasoning that I described in this old comment:
I think that this post conflates two issues, and is an example of a flaw of reasoning that goes like this:
Alice: It would be good if we could change [thing X].

Bob: Ah, but if we changed X, then problems A, B, and C would ensue! Therefore, it would not be good if we could change X.
Bob is confusing the desirability of the change with the prudence of the change. Alice isn’t necessarily saying that we should make the change she’s proposing. She’s saying it would be good if we could do so. But Bob immediately jumps to examining what problems would ensue if we changed X, decides that changing X would be imprudent, and concludes from this that it would also be undesirable.
[…]
I think that Bob’s mistake is rooted in the fact that he is treating Alice’s proposal as, essentially, a wish made to a genie. “Oh great genie,” says Alice, “please make it so that death is no more!” Bob, horrified, stops Alice before she can finish speaking, and shouts “No! Think of all the ways the words of your wish can be twisted! Think of the unintended consequences! You haven’t considered the implications! No, Alice, you must not make such grand wishes of a genie, for they will inevitably go awry.”
I fully agree with both of your points: people can misestimate the tradeoffs in either direction (assuming there are none, as EY does in this post, or assuming they’re much larger than they really are, as you say). And people confuse the desirability of an outcome with the desirability of the overall effect of a policy/behavior/allocation change.
Neither of these changes my main point: the hard part is figuring out and acknowledging the actual tradeoffs, and the paths from the current state to a preferred possible state, rather than just identifying imaginary-but-impossible worlds we’d prefer.
I do not read Eliezer as claiming that there are no tradeoffs. Rather, his aim is to establish the desirability of indefinite life extension in the first place! Once we’re all agreed on that, then we can talk tradeoffs.
And, hey, maybe we look at the tradeoffs and decide that nah, we’re not going to do this. Yet. For now. With sadness and regret, we shelve the idea, being always ready to come back to it, as soon as our technology advances, as soon as we have a surplus of resources, as soon as anything else changes…
Whereas if we just shake our heads and dismissively say “don’t you know that tradeoffs exist”, and end the discussion there, then we’re never going to live for a million years.
But on the other hand, maybe we look at the tradeoffs and decide that, actually, life extension is worth doing, right now! How will we know, unless we actually try and figure it out? And why would we do that, unless we first agree that it’s desirable? That is what Eliezer is trying to convince readers of, in this essay.
For example, you say:
How much environmental damage and sweatshop labor goes into “extreme” medical interventions? How much systemic economic oppression is required to have enough low-paid nurses to work the night shift? How many 6-year-olds could have better nutrition and a significantly better life for the cost of extending a 95-year-old’s life to 98 years?
Well? How many? These are fine questions. What are the answers?
Quoting myself once again:

The view here on LessWrong, on the other hand, treats Alice’s proposal as an engineering challenge. … Once you properly distinguish the concepts of desirability and prudence, you can treat problems with your proposal as obstacles to overcome, not reasons not to do it.
(One important effect of actually trying to answer specific questions about tradeoffs, like the ones you list, is that once you know exactly what the tradeoffs are, you can also figure out what needs to change in order to shift the tradeoffs in the right direction and by the right amount, to alter the decision. And then you can start doing what needs to be done, to change those things!)
I don’t claim to actually know the answers, or even how I’d figure them out. I merely want to point out that it’s not simple, and that saying “sometimes it’s easy” without acknowledging that “sometimes it’s hard” and “knowing the difference is hard” is misleading and unhelpful.