Yeah, “Life is good” doesn’t validly imply “Living forever is good”. There can obviously be offsetting costs; I think it’s good to point this out, so we don’t confuse “there’s a presumption of evidence for (transhumanist intervention blah)” with “there’s an ironclad argument against any possible offsetting risks/costs turning up in the future”.
Like Said, I took Eliezer to just be saying “there’s no currently obvious reason to think that the optimal healthy lifespan for most people is <200 (or <1000, etc.).” My read is that 2007-Eliezer is trying to explain why bioconservatives need to point to some concrete cost at all (rather than taking it for granted that sci-fi-ish outcomes are weird and alien and therefore bad), and not trying to systematically respond to every particular scenario one might come up with where the utilities do flip at a certain age.
The goal is to provide an intuition pump: “Wanting people to live radically longer, be radically smarter, be radically happier, etc. is totally mundane and doesn’t require any exotic assumptions or bizarre preferences.” Pretty similar to another Eliezer intuition pump:
In addition to standard biases, I have personally observed what look like harmful modes of thinking specific to existential risks. The Spanish flu of 1918 killed 25-50 million people. World War II killed 60 million people. 10^8 is the order of the largest catastrophes in humanity’s written history. Substantially larger numbers, such as 500 million deaths, and especially qualitatively different scenarios such as the extinction of the entire human species, seem to trigger a different mode of thinking—enter into a “separate magisterium.” People who would never dream of hurting a child hear of an existential risk, and say, “Well, maybe the human species doesn’t really deserve to survive.”
There is a saying in heuristics and biases that people do not evaluate events, but descriptions of events—what is called non-extensional reasoning. The extension of humanity’s extinction includes the death of yourself, of your friends, of your family, of your loved ones, of your city, of your country, of your political fellows. Yet people who would take great offense at a proposal to wipe the country of Britain from the map, to kill every member of the Democratic Party in the U.S., to turn the city of Paris to glass—who would feel still greater horror on hearing the doctor say that their child had cancer—these people will discuss the extinction of humanity with perfect calm. “Extinction of humanity,” as words on paper, appears in fictional novels, or is discussed in philosophy books—it belongs to a different context than the Spanish flu. We evaluate descriptions of events, not extensions of events. The cliché phrase “end of the world” invokes the magisterium of myth and dream, of prophecy and apocalypse, of novels and movies. The challenge of existential risks to rationality is that, the catastrophes being so huge, people snap into a different mode of thinking.
People tend to think about the long-term future in far mode, which makes near-mode good things like “watching a really good movie” or “helping a sick child” feel less cognitively available/relevant/salient. The point of Eliezer’s “transhumanist proof by induction” isn’t to establish that there can never be offsetting costs (or diminishing returns, etc.) to having more of a good thing. It’s just to remind us that small concrete near-mode good things don’t stop being good when we talk about far-mode topics. (Indeed, they’re often the dominant consideration, because they can end up adding up to so much value when we talk about large-scale things.)
I like this reading and don’t have much of an objection to it.
K, cool. :)