This is a bad argument for transhumanism; it proves way too much. I’m a little surprised that this needs to be said.
Consider: “having food is good. Having more and tastier food is better. This is common sense. Transfoodism is the philosophy that we should take this common sense seriously, and have as much food as possible, as tasty as we can make it, even if doing so involves strange new technology.” But we tried that, and what happened was obesity, addiction, terrible things happening to our gut flora, etc. It is just blatantly false in general that having more of a good thing is better.
As for “common sense”: in many human societies it was “common sense” to own slaves, to beat your children, again etc. Today it’s “common sense” to circumcise male babies, to eat meat, to send people who commit petty crimes to jail, etc., to pick some examples of things that might be considered morally repugnant by future human societies. Common sense is mostly moral fashion, or if you prefer it’s mostly the memes that were most virulent when you were growing up, and it’s clearly unreliable as a guide to moral behavior in general.
Figuring out the right thing to do is hard, and it’s hard for comprehensible reasons. Value is complex and fragile; you were the one who told us that!
---
In the direction of what I actually believe: I think that there’s a huge difference between preventing a bad thing happening and making a good thing happen, e.g. I don’t consider preventing an IQ drop equivalent to raising IQ. The boy has had an IQ of 120 his entire life and we want to preserve that, but the girl has had an IQ of 110 her entire life and we want to change that. Preserving and changing are different, and preserving vs. changing people in particular is morally complicated. Again, the argument Eliezer uses here is bad and proves too much:
Either it’s better to have an IQ of 110 than 120, in which case we should strive to decrease IQs of 120 to 110. Or it’s better to have an IQ of 120 than 110, in which case we should raise the sister’s IQ if possible. As far as I can see, the obvious answer is the correct one.
Consider: “either it’s better to be male than female, in which case we should transition all women to men. Or it’s better to be female than male, in which case we should transition all men to women.”
---
What I can appreciate about this post is that it’s an attempt to puncture bad arguments against transhumanism, and if it had been written more explicitly to do that as opposed to presenting an argument for transhumanism, I wouldn’t have a problem with it.
Consider: “having food is good. Having more and tastier food is better. This is common sense. Transfoodism is the philosophy that we should take this common sense seriously, and have as much food as possible, as tasty as we can make it, even if doing so involves strange new technology.” But we tried that, and what happened was obesity, addiction, terrible things happening to our gut flora, etc. It is just blatantly false in general that having more of a good thing is better.
Conclusion does not follow from example.
You are making exactly the mistake which I described in detail (and again in the comments to this post). You’re conflating desirability with prudence.
It is desirable to have as much food as possible, as tasty as we can make it. It may, however, not be prudent, because the costs make it a net loss. But if we could solve the problems you list—if we could cure and prevent obesity and addiction, if we could reverse and prevent damage to our gut flora—then of course having lots of tasty food would be great! (Or would it? Would other problems crop up? Perhaps they might! And what we would want to do then, is to solve those problems—because having lots of tasty food is still desirable.)
So, in fact, your example shows nothing like what you say it shows. Your example is precisely a case where more of a good thing is better… though the costs, given current technology and scientific understanding, are too high to make it prudent to have as much of that good thing as we’d like.
As for “common sense”: in many human societies it was “common sense” to own slaves, to beat your children, again etc. Today it’s “common sense” to circumcise male babies, to eat meat, to send people who commit petty crimes to jail, etc., to pick some examples of things that might be considered morally repugnant by future human societies. Common sense is mostly moral fashion, or if you prefer it’s mostly the memes that were most virulent when you were growing up, and it’s clearly unreliable as a guide to moral behavior in general.
Now this does prove too much. Ok, so “common sense” can’t be trusted. Now what? Do we just discard everything it tells us? Reject all our moral intuitions?
Yes, by all means let’s examine our intuitions, let us interrogate the output of our common sense. This is good!
But sometimes, when we examine our intuitions and interrogate our common sense, we come up with the same answer that we got at first. We examine our intuitions, and find that actually, yeah, they’re exactly correct. We interrogate our common sense, and find that it passes muster.
And that’s fine. Answers don’t have to be complex, surprising, or unintuitive. Sometimes, the obvious answer is the right one.
That is Eliezer’s point.
Yeah, “Life is good” doesn’t validly imply “Living forever is good”. There can obviously be offsetting costs; I think it’s good to point this out, so we don’t confuse “there’s a presumption of evidence for (transhumanist intervention blah)” with “there’s an ironclad argument against any possible offsetting risks/costs turning up in the future”.
Like Said, I took Eliezer to just be saying “there’s no currently obvious reason to think that the optimal healthy lifespan for most people is <200 (or <1000, etc.).” My read is that 2007-Eliezer is trying to explain why bioconservatives need to point to some concrete cost at all (rather than taking it for granted that sci-fi-ish outcomes are weird and alien and therefore bad), and not trying to systematically respond to every particular scenario one might come up with where the utilities do flip at a certain age.
The goal is to provide an intuition pump: “Wanting people to live radically longer, be radically smarter, be radically happier, etc. is totally mundane and doesn’t require any exotic assumptions or bizarre preferences.” Pretty similar to another Eliezer intuition pump:
In addition to standard biases, I have personally observed what look like harmful modes of thinking specific to existential risks. The Spanish flu of 1918 killed 25–50 million people. World War II killed 60 million people. 10^8 is the order of the largest catastrophes in humanity’s written history. Substantially larger numbers, such as 500 million deaths, and especially qualitatively different scenarios such as the extinction of the entire human species, seem to trigger a different mode of thinking—enter into a “separate magisterium.” People who would never dream of hurting a child hear of an existential risk, and say, “Well, maybe the human species doesn’t really deserve to survive.”
There is a saying in heuristics and biases that people do not evaluate events, but descriptions of events—what is called non-extensional reasoning. The extension of humanity’s extinction includes the death of yourself, of your friends, of your family, of your loved ones, of your city, of your country, of your political fellows. Yet people who would take great offense at a proposal to wipe the country of Britain from the map, to kill every member of the Democratic Party in the U.S., to turn the city of Paris to glass—who would feel still greater horror on hearing the doctor say that their child had cancer—these people will discuss the extinction of humanity with perfect calm. “Extinction of humanity,” as words on paper, appears in fictional novels, or is discussed in philosophy books—it belongs to a different context than the Spanish flu. We evaluate descriptions of events, not extensions of events. The cliché phrase “end of the world” invokes the magisterium of myth and dream, of prophecy and apocalypse, of novels and movies. The challenge of existential risks to rationality is that, the catastrophes being so huge, people snap into a different mode of thinking.
People tend to think about the long-term future in Far Mode, which makes near-mode good things like “watching a really good movie” or “helping a sick child” feel less cognitively available/relevant/salient. The point of Eliezer’s “transhumanist proof by induction” isn’t to establish that there can never be offsetting costs (or diminishing returns, etc.) to having more of a good thing. It’s just to remind us that small concrete near-mode good things don’t stop being good when we talk about far-mode topics. (Indeed, they’re often the dominant consideration, because they can end up adding up to so much value when we talk about large-scale things.)
I like this reading and don’t have much of an objection to it.
K, cool. :)