Transhumanists Don’t Need Special Dispositions
This essay was originally posted in 2007.
I have claimed that transhumanism arises strictly from love of life. A bioconservative humanist says that it is good to save someone’s life or cure them of debilitating syndromes if they are young, but once they are “too old” (the exact threshold is rarely specified) we should stop trying to keep them alive and healthy. A transhumanist says unconditionally: “Life is good, death is bad; health is good, sickness is bad.” Whether you’re 5, 50, or 500, life is good; why die? Nothing more is required.
Then why is there a widespread misunderstanding that transhumanism involves a special fetish for technology, or an unusually strong fear of death, or some other abnormal personal disposition?
I offer an analogy: Rationality is often thought to be about cynicism. The one comes to us and says, “Fairies make the rainbow; I believe this because it makes me feel warm and fuzzy inside.” And you say, “No.” And the one reasons, “I believe in fairies because I enjoy feeling warm and fuzzy. If I imagine that there are no fairies, I feel a sensation of deadly existential emptiness. Rationalists say there are no fairies. So they must enjoy sensations of deadly existential emptiness.” Actually, rationality follows a completely different rule—examine the rainbow very closely and see how it actually works. If we find fairies, we accept that, and if we don’t find fairies, we accept that too. The look-and-see rule makes no mention of our personal feelings about fairies, and it fully determines the rational answer. So you cannot infer that a competent rationalist hates fairies, or likes feelings of deadly existential emptiness, by looking at what they believe about rainbows.
But this rule—the notion of actually looking at things—is not widely understood. The more common belief is that rationalists make up stories about boring old math equations, instead of pretty little fairies, because rationalists have a math fetish instead of a fairy fetish. A personal taste, and an odd one at that, but how else would you explain rationalists’ strange and unusual beliefs?
Similarly, love of life is not commonly understood as a motive for saying that, if someone is sick, and we can cure them using medical nanotech, we really ought to do that. Instead people suppose that transhumanists have a taste for technology, a futurism fetish, that we just love those pictures of little roving nanobots. A personal taste, and an odd one at that, but how else would you explain transhumanists’ strange and unusual moral judgments?
Of course I’m not claiming that transhumanists take no joy in technology. That would be like saying a rationalist should take no joy in math. Homo sapiens is the tool-making species; a complete human being should take joy in a contrivance of special cleverness, just as we take joy in music or storytelling. It is likewise incorrect to say that the aesthetic beauty of a technology is a distinct good from its beneficial use—their sum is not merely additive; there is a harmonious combination. The equations underlying a rainbow are all the more beautiful for being true, rather than just made up. But the aesthetic of transhumanism is very strict about positive outcomes taking precedence over how cool the technology looks. If the choice is between using an elegant technology to save a million lives and using an ugly technology to save a million and one lives, you choose the latter. Otherwise the harmonious combination vanishes like a soap bubble popping. It would be like preferring a more elegant theory of rainbows that was not actually true.
In social psychology, the “correspondence bias” is the tendency to see far too direct a correspondence between others’ actions and their personalities. As Gilbert and Malone put it, we “draw inferences about a person’s unique and enduring dispositions from behaviors that can be entirely explained by the situations in which they occur.” For example, subjects listen to speakers giving speeches for and against abortion. The subjects are explicitly told that the speakers are reading prepared speeches assigned by coin toss—and yet the subjects still believe the pro-abortion speakers are personally in favor of abortion.
When we see someone else kick a vending machine for no visible reason, we assume he is “an angry person”. But if you yourself kick the vending machine, you will tend to see your actions as caused by your situation, not your disposition. The bus was late, the train was early, your report is overdue, and now the damned vending machine has eaten your money twice in a row. But others will not see this; they cannot see your situation trailing behind you in the air, and so they will attribute your behavior to your disposition.
But, really, most of the people in the world are not mutants—are probably not exceptional in any given facet of their emotional makeup. A key to understanding human nature is to realize that the vast majority of people see themselves as behaving normally, given their situations. If you wish to understand people’s behaviors, then don’t ask after mutant dispositions; rather, ask what situation they might believe themselves to be in.
Suppose I gave you a control with two buttons, a red button and a green button. The red button destroys the world, and the green button stops the red button from being pressed. Which button would you press? The green one. This response is perfectly normal. No special world-saving disposition is required, let alone a special preference for the color green. Most people would choose to press the green button and save the world, if they saw their situation in those terms.
And yet people sometimes ask me why I want to save the world. Why? They actually want to know why someone would want to save the world. Like you have to be traumatized in childhood or something? Give me a break.
We all seem normal to ourselves. One must understand this to understand all those strange other people.
Correspondence bias can also be seen as essentialist reasoning, like explaining rain by water spirits, or explaining fire by phlogiston. If you kick a vending machine, why, it must be because you have a vending-machine-kicking disposition.
So the transhumanist says, “Let us use this technology to cure aging.” And the reporter thinks, How strange! He must have been born with an unusual technology-loving disposition. Or, How strange! He must have an unusual horror of aging!
Technology means many things to many people. So too do death, aging, and sickness carry different implications within different personal philosophies. Thus different people incorrectly attribute transhumanism to different mutant dispositions.
If someone prides themselves on being cynical of all Madison Avenue marketing, and the meaning of technology unto them is Madison Avenue marketing, they will see transhumanists as shills for The Man, trying to get us to spend money on expensive but ultimately meaningless toys.
If someone has been fed Deep Wisdom about how death is part of the Natural Cycle of Life ordained by heaven as a transition to beyond the flesh, etc., then they will see transhumanists as Minions of Industry, Agents of the Anti-Life Death Force that is Science.
If someone has a postmodern ironic attitude toward technology, then they’ll see transhumanists as being on a mission to make the world even stranger, more impersonal, than it already is—with the word “Singularity” obviously referring to complete disconnection and incomprehensibility.
If someone sees computers and virtual reality as an escape from the real world, opposed to sweat under the warm sun and the scent of real flowers, they will think that transhumanists must surely hate the body; that they want to escape the scent of flowers into a grayscale virtual world.
If someone associates technology with Gosh-Wow-Gee-Whiz-So-Cool flying cars and jetpacks, they’ll think transhumanists have gone overboard on youthful enthusiasm for toys.
If someone associates the future with scary hyperbole from Wired magazine—humans will merge with their machines and become indistinguishable from them—they’ll think that transhumanists yearn for the cold embrace of metal tentacles, that we want to lose our identity and be eaten by the Machine or some other dystopian nightmare of the month.
In all cases they make the same mistake—drawing a one-to-one correspondence between the way in which the behavior strikes them as strange, and a mutant mental essence that exactly fits the behavior. This is an unnecessarily burdensome explanation for why someone would advocate healing the sick by any means available, including advanced technology.
There’s another hypothesis worth considering: that while a few vanguard transhumanist-identified intellectual leaders think this way, the majority of people associated with the trend are in fact just looking for things mood-affiliated with their aesthetic. Aesthetic modernism is a thing, after all, even though art deco skyscrapers aren’t more convenient to build than blocky ones, any more than Mussolini actually made the trains run on time. This recent news article seems relevant, especially the final paragraph.
It’s not obvious that people are making a mistake by assuming you’re just a nerd who likes futuristic stuff, if you’re doing a lot of things that are characteristic of that group and principles meant literally are rare. Most people repeating clever arguments that favor their political affiliation don’t actually believe what they’re saying in the sense of having constrained anticipations, and after a certain point you simply shouldn’t expect people to parse the arguments.
I basically agree with this. I once went to a transhumanist conference and attendees I talked to seemed at least a bit like this.
My impression from a few arguments I’ve been in is that there are people who simply don’t/can’t believe that health extension is possible, so they can’t assimilate arguments based on the idea of health extension. You say “life extension” and they hear “miserable old-age extension.”
Where was this originally posted?
Has someone been making bad criticisms of transhumanism lately?
In 2007, when this was first published, I think I understood which bravery debate this essay might apply to (/me throws some side-eye in the direction of Leon Kass et al), but in 2018 this sort of feels like something that (at least for an LW audience, I would think?) has to be read backwards to really understand its valuable place in a larger global discourse.
If I’m trying to connect this to something in the news literally in the last week, it occurs to me to think about He Jiankui’s recent attempt to use CRISPR technology to give HIV-immunity to two girls in China, which I think is very laudable in the abstract but also highly questionable as actually implemented based on current (murky and confused) reporting.
Basically, December of 2018 seems like a bad time to “go abstract” in favor of transhumanism, when the implementation details of transhumanism are finally being seriously discussed, and the real and specific challenges of getting the technical and ethical details right are the central issue.
The motivation for the current set of repostings is simply that the posts had never been on LessWrong (and this post wasn’t even on Yudkowsky.net) and it seemed nice to get them in a more accessible place. (I hadn’t actually read this one)
There is one more essay from the mini-sequence, coming Monday.
While I think this is mostly old hat to longtime LW readers, I do think it’s still relevant outside of our bubble.
Wouldn’t this then be the best time to go abstract, since it would necessarily distinguish bad things done in the name of transhumanism from the actual values of the philosophy?
I can see two senses for what you might be saying…
I agree with one of them (see the end of my response), but I suspect you intend the other:
First, it seems clear to me that the value of a philosophy early on is a speculative thing, highly abstract, oriented towards the future, and latent in the literal expected value of the actions and results the philosophy suggests and envisions.
However, eventually, the actual results of actual people whose hands were moved by brains that contain the philosophy can be valued directly.
Basically, the value of the results of a plan or philosophy screens off the early expected value of the plan or philosophy… not entirely (because it might have been “the right play, given the visible cards,” with the deal revealing low-probability outcomes). However, bad results provide at least some Bayesian evidence of bad ideas without bringing more of a model into play.
So when you say that “the actual values of transhumanism” might be distinguished from less abstract “things done in the name of transhumanism” that feels to me like it could be a sort of category error related to expected value? If the abstraction doesn’t address and prevent highly plausible failure modes of someone who might attempt to implement the abstract ideas, then the abstraction was bad.
(Worth pointing out: The LW/OB subculture has plenty to say here, though mostly by Hanson, who has been pointing out for over a decade that much of medicine is actively harmful and exists as a costly signal of fitness as an alliance partner aimed at non-perspicacious third parties through ostensible proofs of “caring” that have low actual utility with respect to desirable health outcomes. Like… it is arguably PART OF OUR CULTURE that “standard non-efficacious bullshit medicine” isn’t “real transhumanism”. However, that part of our culture maybe deserves to be pushed forward a bit more right now?)
A second argument that could be unpacked from your statement, and that I would agree with, is that well-formulated abstractions might contain within them a lot of valuable latent potential, and in the press of action it could be useful to refer back to these abstractions as a sort of True North that might otherwise fall from the mind and leave one’s hands doing confused things.
When the fog of war descends, and a given plan seemed good before the fog descended, and no new evidence has arisen to the contrary, and the fog itself was expected, then sticking to the plan (however abstract or philosophical it may be) has much to commend it :-)
If this latter thing is all you meant, then… cool? :-)
Do you like the body?
I like sex.
As it has always been a part of my experience, I don’t know what its absence from my experience would be like, or whether I would prefer that state. I don’t like headaches, and I enjoy in-person communication. I also view Irreversible Changes as risky.
Well, if you think they think you want to “lose your identity,” why do you think they will be persuaded by a “no, I don’t”? “Identity” is a big thing with many parts; you will have to show it doesn’t get lost.
There is a thing called ОБВМ in Russian; it stands for “[but has a] Very Rich Internal World,” and AFAIK it was initially said of RPG players who avoid action play or are sidelined by the organizers into being “the third elf in the fifth row.” And maybe they do have rich internal worlds that could be made richer by cold tentacles, but for the rest of the players they don’t have an observable identity.
There is a tale in Greek myth of a woman who tricked Apollo into granting her a wish: she grabbed a pile of sand and asked to live as many years as there were grains in the pile—yet she forgot to ask for eternal youth, so a thousand years later she was a dribbling and unhappy old hag desperately wishing she had died ages ago. I suppose something like this is what is pictured by non-transhumanists when they speak of “too old”—and there is no guarantee that your methods of preserving life will also preserve good bodily condition (not to mention Alzheimer-like thingies—what if the body does live but the mind deteriorates?).
I don’t know, it seems as though Wired magazine understands my hopes for the future pretty well. Where is the scary part?
Transhumanism encroaches on territory that has traditionally been metaphysical or philosophical. The assumption is that it does so because of, or in accompaniment with, metaphysical or philosophical reasoning. Part of the reason a special disposition is assumed is that the alternative (that you don’t think about what other people are thinking about at all) is probably distressing to them. This is also one of the reasons people don’t like atheists. Yes, there are those who think atheists are actually all Satan worshippers, but mostly they are just creeped out that atheists seem not to be thinking the kinds of thoughts that religious or spiritual people think at all. And there’s plenty of neuroscience showing that the brains of atheists and religious people function differently, so it is literally a matter of being confronted with an alien intelligence, a mind that cannot think in the same way, rather than merely a mind that happens to be thinking other thoughts.
I am suspicious of claims that ideological differences arise from fundamental neurological differences—that seems to uphold the bias toward a homogeneous enemy. (That doesn’t mean it’s impossible, but that it’s more likely to be falsely asserted than claims that we’re not biased toward.) Could you link to the studies that you say support your statement?
I was remembering an article in The Atlantic from a while ago, but I can’t seem to find it now. All I can find now is this, which doesn’t have the same power because it’s the result of an after-the-fact search: https://www.liebertpub.com/doi/abs/10.1089/brain.2013.0172
I see how it follows that it will be “attacked” on such grounds.
I don’t follow why “thinks differently” implies “neurological differences”. Why should we suppose it is hardware rather than software? I would be interested in seeing those studies, as well.