Humans in Funny Suits
Many times the human species has travelled into space, only to find the stars inhabited by aliens who look remarkably like humans in funny suits—or even humans with a touch of makeup and latex—or just beige Caucasians in fee simple.
It’s remarkable how the human form is the natural baseline of the universe, from which all other alien species are derived via a few modifications.
What could possibly explain this fascinating phenomenon? Convergent evolution, of course! Even though these alien lifeforms evolved on a thousand alien planets, completely independently from Earthly life, they all turned out the same.
Don’t be fooled by the fact that a kangaroo (a mammal) resembles us rather less than does a chimp (a primate), nor by the fact that a frog (amphibians, like us, are tetrapods) resembles us less than the kangaroo. Don’t be fooled by the bewildering variety of the insects, who split off from us even longer ago than the frogs; don’t be fooled that insects have six legs, and their skeletons on the outside, and a different system of optics, and rather different sexual practices.
You might think that a truly alien species would be more different from us than we are from insects—that the aliens wouldn’t run on DNA, and might not be made of folded-up hydrocarbon chains internally bound by van der Waals forces (aka proteins).
As I said, don’t be fooled. For an alien species to evolve intelligence, it must have two legs with one knee each attached to an upright torso, and must walk in a way similar to us. You see, any intelligence needs hands, so you’ve got to repurpose a pair of legs for that—and if you don’t start with a four-legged being, it can’t develop a running gait and walk upright, freeing the hands.
For an alien species to evolve intelligence it needs binocular vision for precise manipulation, which means exactly two eyes. These eyes must be located in a head atop a torso. The alien must communicate by transcoding their thoughts into acoustic vibrations, so they need ears and lips and a throat. And think of how out-of-place ears and eyes and lips would look, without a nose! Sexual selection will result in the creation of noses—you wouldn’t want to mate with something without a face, would you? A similar logic explains why the female of the species is invariably attractive—ugly aliens would enjoy less reproductive success. And as for why the aliens speak English, well, if they spoke some kind of gibberish, they’d find it difficult to create a working civilization.
...or perhaps we should consider, as an alternative theory, that it’s the easy way out to use humans in funny suits.
But the real problem is not shape, it is mind. “Humans in funny suits” is a well-known term in literary science-fiction fandom, and it does not refer to something with four limbs that walks upright. An angular creature of pure crystal is a “human in a funny suit” if she thinks remarkably like a human—especially a human of an English-speaking culture of the late-20th/early-21st century.
I don’t watch a lot of ancient movies. When I was watching the movie Psycho (1960) a few years back, I was taken aback by the cultural gap between the Americans on the screen and my America. The buttoned-shirted characters of Psycho are considerably more alien than the vast majority of so-called “aliens” I encounter on TV or the silver screen.
To write a culture that isn’t just like your own culture, you have to be able to see your own culture as a special case - not as a norm which all other cultures must take as their point of departure. Studying history may help—but then it is only little black letters on little white pages, not a living experience. I suspect that it would help more to live for a year in China or Dubai or among the !Kung… this I have never done, being busy. Occasionally I wonder what things I might not be seeing (not there, but here).
Seeing your humanity as a special case is very much harder than this.
In every known culture, humans seem to experience joy, sadness, fear, disgust, anger, and surprise. In every known culture, these emotions are indicated by the same facial expressions. Next time you see an “alien”—or an “AI”, for that matter—I bet that, when it gets angry (and it will get angry), it will show the human-universal facial expression for anger.
We humans are very much alike under our skulls—that goes with being a sexually reproducing species; you can’t have everyone using different complex adaptations, they wouldn’t assemble. (Do the aliens reproduce sexually, like humans and many insects? Do they share small bits of genetic material, like bacteria? Do they form colonies, like fungi? Does the rule of psychological unity apply among them?)
The only intelligences your ancestors had to manipulate—complexly so, and not just tame or catch in nets—the only minds your ancestors had to model in detail—were minds that worked more or less like their own. And so we evolved to predict Other Minds by putting ourselves in their shoes, asking what we would do in their situations; for that which was to be predicted, was similar to the predictor.
“What?” you say. “I don’t assume other people are just like me! Maybe I’m sad, and they happen to be angry! They believe other things than I do; their personalities are different from mine!” Look at it this way: a human brain is an extremely complicated physical system. You are not modeling it neuron-by-neuron or atom-by-atom. If you came across a physical system as complex as the human brain, which was not like you, it would take scientific lifetimes to unravel it. You do not understand how human brains work in an abstract, general sense; you can’t build one, and you can’t even build a computer model that predicts other brains as well as you predict them.
The only reason you can try at all to grasp anything as physically complex and poorly understood as the brain of another human being, is that you configure your own brain to imitate it. You empathize (though perhaps not sympathize). You impose on your own brain the shadow of the other mind’s anger and the shadow of its beliefs. You may never think the words, “What would I do in this situation?”, but that little shadow of the other mind that you hold within yourself, is something animated within your own brain, invoking the same complex machinery that exists in the other person, synchronizing gears you don’t understand. You may not be angry yourself, but you know that if you were angry at you, and you believed that you were godless scum, you would try to hurt you...
This “empathic inference” (as I shall call it) works for humans, more or less.
But minds with different emotions—minds that feel emotions you’ve never felt yourself, or that fail to feel emotions you would feel? That’s something you can’t grasp by putting your brain into the other brain’s shoes. I can tell you to imagine an alien that grew up in a universe with four spatial dimensions, instead of three spatial dimensions, but you won’t be able to reconfigure your visual cortex to see like that alien would see. I can try to write a story about aliens with different emotions, but you won’t be able to feel those emotions, and neither will I.
Imagine an alien watching a video of the Marx Brothers and having absolutely no idea what was going on, or why you would actively seek out such a sensory experience, because the alien has never conceived of anything remotely like a sense of humor. Don’t pity them for missing out; you’ve never antled.
At this point, I’m sure, several readers are imagining why evolution must, if it produces intelligence at all, inevitably produce intelligence with a sense of humor. Maybe the aliens do have a sense of humor, but you’re not telling funny enough jokes? This is roughly the equivalent of trying to speak English very loudly, and very slowly, in a foreign country; on the theory that those foreigners must have an inner ghost that can hear the meaning dripping from your words, inherent in your words, if only you can speak them loud enough to overcome whatever strange barrier stands in the way of your perfectly sensible English.
It is important to appreciate that laughter can be a beautiful and valuable thing, even if it is not universalizable, even if it is not possessed by all possible minds. It would be our own special part of the Gift We Give To Tomorrow. That can count for something too. It had better, because universalizability is one metaethical notion that I can’t salvage for you. Universalizability among humans, maybe; but not among all possible minds.
We do not think of ourselves as being human when we are being human. The artists who depicted alien invaders kidnapping girls in torn dresses and carrying them off for ravishing, did not make that error by reasoning about the probable evolutionary biology of alien minds. It just seemed to them that a girl in a torn dress was sexy, as a property of the girl and the dress, having nothing to do with the aliens. Your English words have meaning, your jokes are funny. What does that have to do with the aliens?
Our anthropomorphism runs very deep in us; it cannot be excised by a simple act of will, a determination to say, “Now I shall stop thinking like a human!” Humanity is the air we breathe; it is our generic, the white paper on which we begin our sketches. Even if one can imagine a slime monster that mates with other slime monsters, it is a bit more difficult to imagine that the slime monster might not envy a girl in a torn dress as a superior and more curvaceous prey—might not say: “Hey, I know I’ve been mating with other slime monsters until now, but screw that—or rather, don’t.”
And what about minds that don’t run on emotional architectures like your own—that don’t have things analogous to emotions? No, don’t bother explaining why any intelligent mind powerful enough to build complex machines must inevitably have states analogous to emotions. Go study evolutionary biology instead: natural selection builds complex machines without itself having emotions. Now there’s a Real Alien for you—an optimization process that really Does Not Work Like You Do.
Much of the progress in biology since the 1960s has consisted of trying to enforce a moratorium on anthropomorphizing evolution. That was a major academic slap-fight, and I’m not sure that sanity would have won the day if not for the availability of crushing experimental evidence backed up by clear math. Getting people to stop putting themselves in alien shoes is a long, hard, uphill slog. I’ve been fighting that battle on AI for years.
It is proverbial in literary science fiction that the true test of an author is their ability to write Real Aliens. (And not just conveniently incomprehensible aliens who, for their own mysterious reasons, do whatever the plot happens to require.) Jack Vance was one of the great masters of this art. Vance’s humans, if they come from a different culture, are more alien than most “aliens”. (Never read any Vance? I would recommend starting with City of the Chasch.) Niven and Pournelle’s The Mote in God’s Eye also gets a standard mention here.
And conversely—well, I once read a science fiction author (I think Orson Scott Card) say that the all-time low point of television SF was the Star Trek episode where parallel evolution has proceeded to the extent of producing aliens who not only look just like humans, who not only speak English, but have also independently rewritten, word for word, the preamble to the U.S. Constitution.
This is the Great Failure of Imagination. Don’t think that it’s just about SF, or even just about AI. The inability to imagine the alien is the inability to see yourself—the inability to understand your own specialness. Who can see a human camouflaged against a human background?
Telling people to “beware, you might be biased” is not useless, but is almost so—all you can do is become more uncertain. Telling people “beware your judgment when drunk” is a lot more useful, as you can then become more uncertain when drunk, and more certain when not drunk.
Telling people in general to “beware of assuming aliens are like you” is very weak advice. It would be much more helpful to tell them specifically what kinds of situations or features make them more likely to make this error.
Economists usually get the opposite complaint, that our math models are too much about a generic social intelligence, and too little about specific features of human society.
Can’t we imagine the SF writers reasoning that they’re never going to succeed anyway in creating “real aliens,” so they might as well abandon that goal from the outset and concentrate on telling a good story? Absent actual knowledge of alien intelligences, perhaps the best one can ever hope to do is to write “hypothetical humans”: beings that are postulated to differ from humans in just one or two important respects that the writer wants to explore. (A good example is the middle third of The Gods Themselves, which delves into the family dynamics of aliens with three sexes instead of two—one of the best pieces of SF I’ve read, not that I’ve read a huge amount.) Of course, most SF (like Star Wars) doesn’t even do that, and is just about humans with magic powers, terrible dialogue, and funny ears. I guess Star Trek deserves credit for at least occasionally challenging its audience, insofar as that’s possible with mass-market movies and TV.
Yes, of course there are many good reasons why writers do this. Reasons why, for a writer, it can be good to do this, in addition to just being difficult to avoid.
But I don’t think that’s really the point. We’re not here to critique science fiction; we’re not TV critics. We’re trying to learn rationality techniques to help us “win” whatever we’re trying to win. And this is a fairly good description of a certain kind of bias.
You’re right though. Sci-fi is a good example to demonstrate what the bias is, but not a great example to demonstrate why it’s important.
Budget constraints usually curtail attempts to bring ‘alien’ aliens to the big and small screens. Making plausible-looking aliens is quite expensive; even the prosthetics used to make quasi-human aliens are extensive, and the more complex they are, the harder it is for actors to be expressive.
In the few cases I know of where it was attempted anyway, people responded so poorly to the unfamiliar aspects of the extraterrestrials that the shows gave up. See especially: the early appearances of the Minbari from Babylon 5. Delenn was originally supposed to be male, played by an actress, then changed to female in the metamorphosis at the end of the first season. JMS thought the juxtaposition of human-female features in a male character would be interesting.
Everyone hated it, and Furlan wanted to ditch the facial prosthetics, so the plan was scrapped.
What about Vulcans? They have no emotions at all. Would that count as an escape from the funny suits? (Of course in practice the writers did not do a good job of depicting emotionless characters, but suppose we give them credit for the idea if not the execution.)
Humans without a known feature are easier to imagine than humans with an extra feature.
Exactly. Note that the writers intended to give them the extra feature “behaves logically”, and failed completely. They managed “behaves like a human, then complains that it’s not logical”, which is very far from being the same thing.
Trekkie nitpick: Vulcans do have emotions; they just repress them.
Come to think of it, that means they’re very much humans in funny suits. The alien isn’t different because he’s an alien, he’s exactly like a human but just represses it.
They did a somewhat better job with Commander Data, I think, although from what I recall he tended to act like they were taking human as a point of departure and adding or subtracting features. Also, interestingly, he was completely aware of every difference between himself and humans, although since he “grew up” completely immersed in human society that’s not entirely unreasonable.
Captain Kirk is one hot MoFo! Even an alien can see that!
I remember Star Trek TNG had an episode about a sort of progenitor humanoid race that had at some point in the past seeded parts of the galaxy with its DNA. So that was at least an attempt to explain why all the races were so similar. Even so I find it hard to get into any SF where alien races are obviously just subsets from human culture: the warrior race, the neutral race, the science race, the trader race, etc.
TV Tropes calls that the “Planet of Hats”. (Visit TV Tropes at your own peril; it’s a notorious time sink.)
I think it represents a different fallacy: to assume that an unfamiliar group of things (or people) is much more homogeneous than it really is. And more specifically: to assume that a culture or group of things is entirely defined by the things that make them different from us.
A radically different intelligence might not be graspable by us as an intelligence, or as an individual, or as anything at all. Perhaps termite mounds are intelligent, but in such a different dimension that we just can’t appreciate it.
What you seem to want is an intelligence that is non-human but still close enough to human that we can communicate with it. Although it’s not clear what we’d have to talk about, once we get past the Pythagorean theorem.
I always wonder why we don’t seem to be making more of an effort to communicate with the larger, intelligent animals? Like, we had Koko the gorilla and the chimp (whose name I forgot) that learned sign language, and since then I haven’t really heard of … similar projects? But if we’re going to work on trying to imagine intelligence that isn’t human, I feel like there are things to learn there. Also, if we can get the animals to learn … words (?), maybe we can get them to blog or vlog regularly, which is much easier and cheaper than in Koko’s time.
Something along the lines of Alex the parrot?
Yep, exactly! But also they don’t have to speak. They can just press buttons that mean stuff, which seems a lot easier to fund nowadays. We are all drowning in tablets and ipads!
You may be interested to hear that there are real pet owners doing this nowadays. https://www.lesswrong.com/posts/zbqLuTgTCu365MNu9/your-dog-is-even-smarter-than-you-think
Hmmm. You make an excellent point; it looks like an experiment worth trying.
The animal in question will, of course, have been raised by humans, raised—in effect—in a human mindset, which rather limits the difference from a baseline human mind that’s possible. Even so, though, one expects that there might be some differences...
I actually do agree that other animals, especially mammals, will probably turn out to be annoyingly similar to humans. So if we’re looking for a completely alien race we might be pretty disappointed. But they might have some interesting differences we don’t anticipate precisely because we haven’t really tried looking yet?
Also, I bet it’s pretty different to be an octopus. Even a human-raised octopus! (Although we would need textured buttons so ipads won’t help there. And they might turn out to not be smart/cooperative enough.)
http://en.wikipedia.org/wiki/Washoe_(chimpanzee) “Washoe was raised in an environment as close as possible to that of a human child, in an attempt to satisfy her psychological need for companionship.”
http://en.wikipedia.org/wiki/Nim_Chimpsky “Nim at 2 weeks old was raised by a family in a home environment by human surrogate parents”
The consensus on the results seems to be that they learned hundreds of words of ASL, but do not seem to be able to learn to combine them with any sort of grammar other than grouping situationally-related words close together in time. More a series of single-gesture associations and exclamations than sentences. Our language and communication faculties don’t have much of a counterpart in there.
As they matured they also appeared to become aggressive and uncontrollable. Though perhaps that has something to do with being fully physically mature by age 11 after entering puberty at age 7...
Nim wasn’t raised for very long in that environment, though—they transferred him to a laboratory at a young age and he was very undersocialized compared to Washoe.
I’m not sure to what degree you’d call Washoe aggressive and uncontrollable. I know a few people who met her (a journalist, a primatologist, a psychologist and sign language interpreter) and even interacted freely with her and all of them found her to be rather charming; all in circumstances where, while her surrogate parent was certainly present, he could hardly have stopped her if she’d decided to inflict harm or just felt threatened for some reason.
(Washoe is also said to have taught her son much of what she knew before her death—he was raised only by the sign-using chimps in that community, not humans, and the human handlers only ever used seven signs around him. Her vocabulary was also double-blind tested.)
Thanks for the more direct input—popular accounts no doubt follow a few scripts in their descriptions. I’d imagine socialization would make a huge impact indeed, and that a good deal of interpretation of ‘aggression’ could come from the fact that it’s much more disconcerting to us to have a nonhuman making various displays than a human, possibly combined with faster maturation.
The teaching of sign language is interesting… have adult chimps taught each other sign language?
I’m not sure, but I know Kanzi, a bonobo, is claimed to have picked it up from video of Koko the gorilla (he was never trained to sign, but began quoting some of her signs verbatim; he normally communicates with lexigrams, and it’s been discovered that whenever he selects a lexigram, he vocalizes the English word he hears, at much too high a pitch but with approximate articulation). Chantek, an orangutan (who has had several outside observers interview him, and was raised-as-human basically full time like Washoe), has not taught his current, non-signing female roommate what he knows, and it has been attested that he seems to consider his use of sign something unique; he refers to himself as an “orangutan-person”, while roomie is just an “orangutan” and his handler is “person.”
(Randomly, I’m also reminded—though I can’t track down which ape this was at the moment, will poke it later—of an experiment with one well-socialized chimp who, faced with a “pictures of humans, pictures of chimps, here’s your picture, where does it go?” puzzle insistently placed his picture with the humans, and seemed rather upset to be corrected. This may’ve been Kanzi, so substitute bonobo in that case...)
I imagine that that chimp (or bonobo), if presented with a copy of Tarzan, and if able to read, would immediately identify with the hero.
It may be that a particularly intelligent bonobo can transcend the expected limitations of his species; consider, for example, Kanzi (whose wikipedia article I have just now come across—I have no idea how much of that is exaggeration).
Thanks for hunting down the links! The Nim story sounds really sad. =/
I kinda feel like that experiment was trying too hard to teach Nim to think like a human, rather than find out what/how Nim thinks. I’d be pretty impressed with combinations of words without grammar from other animals, considering that’s less than we currently have.
Very probably. Take lions, for example; how does having a tail, no hands, and a pure-meat diet change one’s outlook on the world?
...now, octopi are interesting. According to the 2012 Cambridge Declaration on Consciousness, they’re very probably conscious—if a little alien in morphology. Wikipedia describes them as having keen eyesight but limited hearing, which may lead to a few practical problems with communication (nothing insoluble, though their aquatic environment might cause more problems).
And octopi apparently show great problem-solving ability; especially when the problem in question is how to get out of the secure tank that they are in and into the secure tank containing crabs (or other food) that is just over there.
So the octopus learning experiment (Olex?) looks like it might have a pretty good chance of success.
I was watching this video about octopi and the lady says they “taste” stuff with their suckers? I can’t tell if she means that literally or how she knows. (Button design idea?) But it definitely looks like octopi have a lot to ‘say’. =]
I did hear about octopi stealing crabs from neighboring tanks and then closing their own lids after themselves! The problem-solving skills might make it hard to design good experiments. The octopus might figure out how to maximize its food output without meaning anything it says. (It will start talking about consciousness?)
Can we get the Singularity Institute to fund Olex? I bet we can cap the cost pretty low and monetize the cute factor. Octopus friend t-shirts and autographs!
Also, this elephant randomly started saying Korean words.
There’s another elephant, Batyr, who did this and was famous for it:
http://en.wikipedia.org/wiki/Batyr
I’ve heard of this one too, but I believe there’s no records/footage?
Must … upvote … all … elephants.
There are records and footage, but considering he died in 1993 in Kazakhstan, you’d probably have to speak Kazakh or Russian to even be able to search them down effectively. It was long enough ago that they might never have been released to the public internet. According to the Wikipedia article they’re kept at Moscow State University. The listed publications might be worth pursuing if you want to investigate it further.
A bit of googling reveals several pages (including one from Scientific American) that repeat the claim that “octopi taste with their suckers”. As far as I can find, the claim seems to date back to a paper by MJ Wells, published in 1963.
I haven’t read that paper.
But that’s exactly what makes them such interesting experimental subjects!
Ah, the Chinese Room problem. Tricky. Though technically we could apply the same question to humans...
That would certainly be a good way to maximize food output, but I think that in order to successfully do that well enough to fool even researchers looking for it, the octopus would have to have at least enough complexity in its brain to actually be conscious. Which is, in fact, the same problem with the Chinese Room; the notecards need to be drawn up by an actual Chinese speaker.
And if we look at evolutionary history, it looks like in evolutionary terms, actually being conscious was a better strategy than pretending to be conscious… …or it could be that we’ve just retroactively defined “consciousness” as the thing humans do when they try to fake consciousness. :p
I suspect that octopi are more-or-less as conscious as dolphins are, as a rough approximation.
I’m not sure it’s possible to confirm or deny that question without being able to define, once and for all, exactly and precisely what consciousness is.
Okay, that sounds like a totally alien experience. Imagine tasting your floor! And like … doorknobs and things.
I’m imagining tasting where other people have been walking, and I can see a possible market for octopus shoes. Especially if other people haven’t been cleaning up after their dogs.
Of course, it might just be that I’m squeamish because I’m not used to it. (But an octopus civilisation might choose the material from which to make their paths based on the taste thereof...)
I think you’re right. That squeamishness is very much a product of you having grown up as not-an-octopus.
Most creatures taste with an organ at the top of their digestive tract, so it’s fairly sensible that they have an aversion to tasting anything that would be unhealthy for them to consume.
A species that had always had a chemical-composition-sense on all of its limbs would almost certainly have a very different relationship with that sense than we have with taste.
Hmmm. Fair enough. But even if they’re not squeamish about it, it would make sense for them to select the material from which they make their walkways according to flavour (among other factors, such as strength and durability).
Yup! I agree completely.
If you were modeling an octopus-based sentient species, for the purposes of writing some interesting fiction, then this would be a nice detail to add.
I think this might be the bias in action yet again.
Our idea of an alien experience is to taste with a different part of our bodies? That’s certainly more different-from-human than most rubber-forehead aliens, but “taste” is still a pretty human-familiar experience. There are species with senses that we don’t have at all, like a sensitivity to magnetism or electric fields.
You’ve never licked a doorknob just to see what it tastes like?
I guess I figured they’d taste like cheap spoons, except with more bacteria. Am I missing out?
Nope, that’s a pretty accurate description of my sensory memory of the experience. :p
I’ve read that people with autism anthropomorphize far less than other people, because their “model other people based on myself” module doesn’t seem to be working normally (or so my crude impression goes).
The gap between autistic humans and neurotypical humans may be bigger than the gap between male and female humans. I would list autism as an exception to the psychological unity of humankind.
Not really an exception, since autistic people can still sexually reproduce with neurotypical humans, and especially since there's a whole spectrum with varying degrees and even different symptoms not always showing up; autistic people don't have any unique complex machinery, nor any missing complex machinery, at least not in terms of entire mechanisms being absent. As a mildly autistic person I really wish I could make some comment from personal experience on anthropomorphism, but… I suppose the problem is that I only have the experiences of an autistic person to compare my experiences to, so I can't say much in relative terms, and I haven't strongly noticed neurotypical people's tendencies being any different.
Personally, I think I actually tend to anthropomorphize more, because my ability to guess what others are thinking is learned rather than instinctive. I really am using the same circuitry for comprehending people as I do for comprehending car engines and computers, and using it in essentially the same way.
But I may not be typical. Best guess is that my particular quirks are mostly the result of a childhood head injury rather than anything genetic.
On one hand I totally agree that assuming that aliens would necessarily have human emotions because they are intelligent is stupid. On the other hand, I think it would be possible to have some emotions in common with some species of alien. If the emotion operated in the brain the same way and arose in a similar way (e.g. anger at economic freeloaders), you might as well call it the same emotion, in the same way you could meaningfully translate colour words for aliens with a similar visual system.
I wonder how large the spectrum of emotions and modes of thought for intelligent entities (that might evolve or be designed) is? Does it dwarf the human experience? Are there elements that are nearly universal?
I think natural selection would also result in animals that could reason about the behavior of their predators and prey. That’s why we often imagine what other species of animals are thinking even as they do things a human being would not.
An alien being a human in a funny suit for budget reasons seems logical for television and film, but an SF novel has no costume budget restriction. How much weight do human traits in aliens carry in the reader's picture of the alien? Do human traits make the alien believable and enjoyable? What is the commercial value of human-like aliens vs. alien aliens?
What you seem to want is an intelligence that is non-human but still close enough to human that we can communicate with it. Although it’s not clear what we’d have to talk about, once we get past the Pythagorean theorem.
How about P vs. NP? :-)
I remember reading “The Curious Incident of the Dog in the Night-time” and thinking: “This guy is more alien than most aliens I saw in Sci-fi”.
Mtraven: “Although it’s not clear what we’d have to talk about, once we get past the Pythagorean theorem.” Scott: “How about P vs. NP? ”
Or Bayes, I guess.
The mind-projection fallacy is an old favourite on OB, and Eliezer always comes up with some colourful examples.
None are as good as this one, though:
http://www.overcomingbias.com/2008/06/why-do-psychopa.html
How alien would intelligences really be that “grew up” in our culture, under pressure to function well when interacting with us?
Hmm, thanks Eliezer, now I'm starting to see your point. It's amazing that you can imagine a mind that doesn't run on emotional architectures like our own. I honestly can't. No matter how hard I try, I keep being biased by my own humanity. And yet I've lived in very different cultures and seen various extremes of human nature.
Doug S.: I don't agree; I've found some autistic people to be far more 'human' (or should I say humane) than the average person. If you look for an example of a non-human human, how about Hitler? Serial killers? Rapists? They obviously lack some basic human(e) emotions.
Being human and being humane aren’t really in any way connected. Murdering people because they belong to a different group is a perfectly human thing to do.
I agree with AlexanderRM.
You stated that some of the autistic people you know are significantly different from most humans. That’s in line with the original content, not a counter-argument to it.
And with that said, I’m not sure I’m happy being in a conversation about how “different” a group of people is from normal people. It’s hard to know how that will be taken by the people involved, and it may not be a nice feeling to read it.
Surprised you didn’t mention DNA here, Eliezer. I imagine that if a truly alien species did a lifeform scan [cringe] of Earth, the first general comment they’d make would be along the lines of ‘hey, they’re all based on a double-helix self-replicator system’. Although not in English, of course.
So tell us—just how far back do we need to roll our intuitions here? If there’s no perfect, blank ghost-in-the-machine intelligence, what common factors would we expect the average evolved intelligence to have? Some sort of visual cortex? A ‘brain’ that began as an I/O hub but ‘evolved’ to be the seat of intelligence? A mix of ‘organic’ and ‘technological’ elements?
Scott Aaronson: Can’t we imagine the SF writers reasoning that they’re never going to succeed anyway in creating “real aliens,” so they might as well abandon that goal from the outset and concentrate on telling a good story?
Personally, I recall often hearing both writers and readers mention that you shouldn't try to make your characters genuinely alien—not because it's hard, but because readers will have a difficult time understanding and empathizing with totally alien characters, thus detracting from their enjoyment. I even read some reviews of Egan's Diaspora where people complained about the characters being too alien—considering that those were still pretty human in my eyes, it seems rather easy for people to get that feeling. From reading one of the Uplift novels, I can relate—there were aliens there who were sufficiently alien that I did find them rather interesting, but definitely couldn't relate to them on an emotional level.
Stanislaw Lem treated the theme of ungraspable aliens with some success; “Solaris” is better-known, but “Eden” is even more striking in its exploration of the failure to understand.
The remark about the “Star Trek” episode seems strangely inept; surely the writers weren’t concerned about the plausibility of the identical parallel evolution—it was just a literary device for them. Criticizing that as a failure to imagine divergent evolution is a bit like criticizing a soap opera for using the twin device to keep an actor after the character dies; after all, the writers could have refrained from doing that, and instead put in a new different character with a different actor...
Valentina Poletti: If you look for an example of a non-human human, how about Hitler? Serial killers? Rapists? They obviously lack some basic human(e) emotions.
I wouldn't call it obvious at all. Case in point: meat. The majority of people in Western countries regularly eat meat, despite the knowledge that by doing so, they are helping maintain a system where countless farm animals are kept in miserable conditions (myself included—though AFAIK the animals have it somewhat better in Finland than in most countries). They don't even do it because they believe themselves to be helping "real" people, like Hitler thought—they simply do it because they like the taste.
You don't need to be lacking basic emotions in order to do bad things—you just need to think of the other person as something other than human. And not necessarily even that explicitly—most people (whether they admit it or not) care more for those more similar to them (members of the same family, culture, whatever). I doubt people participating in tribal wars lack any human emotions—they just only express those towards their in-group, which doesn't happen to include the enemy tribe.
Response to old post:
The majority of people in Western countries do something that I believe we are morally obliged to avoid. They do this despite (insert the reasons I consider this morally obligatory). Since my belief that it is morally obligatory is correct, the fact that people do those things anyway demonstrates that they are willing to do immoral things for no good reason.
You replaced “something” with “meat-eating”, but supporters of some other idea could easily replace it with something else: The majority of people in Western society support abortion despite (insert reasons why abortion is murder). This shows that people in Western society are willing to ignore morality merely for personal benefits.
If you assume that your beliefs about doing bad things are correct and the world is full of people who disagree with you, it is easy to show that the world is full of lots of people who do bad things for trivial reasons.
Isn’t “people who do things that my morality says is bad” the very definition of “bad people”?
If we just taboo “bad things” in my original comment and replace it with “hurt others”, it doesn’t lose any of its original intention.
My response doesn't lose any of its original intention either. Abortion opponents believe that abortions hurt others, after all, since they count fetuses as others. A huge number of things people oppose, they oppose because they believe them to hurt others; for instance, consider that people oppose homosexuality because they think it destroys society, and destroying society hurts others.
Sure. Do we disagree over something?
Yes. It’s easy to take a controversial issue and say “The other side’s position is one which hurts others. Lots of people are on the other side. This shows that lots of people are willing to hurt others.” You can do that for any controversial issue.
The flaw is that this only shows that lots of people are willing to hurt others assuming that your side is correct. But you don’t just get to assume that your side of a controversial issue is correct and use that to make unconditional conclusions about the other side.
People’s failure to embrace vegetarianism shows that people are willing to hurt others in exactly the same way that people’s failure to oppose abortion, or oppose gay marriage, or support gay marriage, or support any policy of the week, shows that people are willing to hurt others.
Fair point.
OTOH, Eliezer held a poll on his Facebook wall on “meat-eaters, do you believe that animals are capable of suffering” and the results were something along the lines of a 4:1 ratio in favor of “yes”, so that would suggest that many (though not all) meat-eaters do at least believe in animals being capable of suffering.
That only means that “can animals suffer” isn’t very controversial. To actually show what you want it to show, animals have to be able to suffer significantly, not just by some non-zero amount. That’s a lot more controversial.
And even then, not only does someone have to believe that animals can suffer significantly, they have to believe in utilitarianism and a couple of other things that, cumulatively, are pretty controversial.
What? Why?
Even if they believe that animals can suffer, they also have to believe in utilitarianism in order for that belief to be reasonably described as “willingness to hurt others”, because “willingness to hurt others” also has an implied “significantly”, and that means making comparisons that say that the gain from harming animals is smaller than the loss to the animals.
Technically, there are beliefs other than utilitarianism which can lead to that but I suggest that they would be rare among meat eaters. For instance, “you should never eat things that suffer no matter what” is a deontological rule which would also lead to the conclusion that meat eaters are willing to hurt others significantly (since the rule implies that all suffering significantly hurts others). However, I doubt many meat-eaters have such rules.
I feel like we’re talking past each other somehow, but getting to the center of that and sorting it out doesn’t seem like a particularly high-value time investment. Tapping out.
I think this is true of only a broad class of moralities, rather than all moralities—consider, say, a Calvinist view where some people are good and others are bad and this is only weakly, if at all, correlated to the actions they take.
mtraven: any intelligence will be visible by the optimizations it produces, as will other sorts of optimizers besides intelligence. There might exist some X which is as alien to everything we know as intelligence is to evolution, but it should produce identifiable stigmata resulting from its process, just as evolution and design do.
Yes, science fiction writers don’t write truly alien characters because the market is too small.
Aliens who don't make sense are basically all the same alien.
Aliens who make sense according to some logic that doesn’t fit human feelings can be an interesting intellectual puzzle. But they aren’t real, they’re puzzles. You can figure out the logic.
Aliens who start out not making sense but then start to make more and more sense as it goes along, and you keep fitting things together to understand things you didn’t see before, and at the end of the story you still don’t get it all—I find that satisfying, but it’s hard to write like that. CJ Cherryh has made solid attempts at it. She has aliens that are very much like big smart cats, but the longer she wrote about them the more like humans they got. In 40,000 in Gehenna she had aliens that were a lot like alien ants. One of her less successful stories. She wrote one where the aliens were like truly giant earthworms who communicated by building structures. She wound up with humans who could communicate with them after a fashion, but who weren’t very good at translating. It was odd.
Larry Niven’s “puppeteers” were reasonably alien. Herbivores who had technology that let them live forever. If you were this kind of alien all you had to do was keep your place in the herd and you need never die. But ordinary ones were not allowed to reproduce at all, there was no place for more of them. And any individual who was willing to get in a spacecraft or go face-to-face with dangerous aliens like human beings was classified as insane and snubbed. Things that we would consider reasonable risks were not reasonable for them, when they might live forever. However, Niven wrote a short story, “Safe at any Speed”, in which immortal human beings behaved precisely the way his puppeteers did, and they seemed quite human. Niven at 40 years old had a quite limited concept of how 200-year-old humans would behave, much less immortal aliens. Still, his stories were interesting and they sold well.
Some people consider human beings inhuman when they do something that is considered socially unacceptable. Dictators who start purges that kill many of their own supporters, like Stalin, say. Or Mao. But people who get into high-stakes gambling will do all sorts of things for the chance to win, when the stakes are high. And dictators play for the highest stakes, it’s somewhat rare for them to lose and come out of it alive. The ones who survive and come to the USA etc tend to die in a few years of various things, typically cancer. They aren’t that different.
On the subject of Star Trek, the Klingon culture in ST:TNG is supposedly inspired by (Western stereotypes of) feudal Japan.
Japanese Klingons? Now there’s a thought.
I’m not plugged into the 4chan or Something Awful communities, but if anyone runs a photoshop contest on what a Japanese/Klingon culture would look like, do notify me.
Many of our emotions can be thought of as shortcuts for reasoning. Not so much simple states of happiness and sadness, which are more affective descriptions, but emotions like fear, anger, hope, love, envy, jealousy and so on. These emotions prompt actions. But in principle, such actions are in most cases the same ones that a fully rational and unemotional person would take. Fear makes you run from danger—exactly what a rational person would do. Love makes you protect your allies—again, a rational action. The value of the emotions is that they shortcut what might be a slow rational decision, and also that they are available to lesser animals who do not have our developed sense of rationality.
One place we might look for alien emotions, then, is to shortcut other aspects of rationality. Aliens might have a strategic-move emotion, that would activate in games like chess or in comparable strategic situations. This would manifest as an urgent subconscious drive to make a certain move in such situations.
A simpler example would be an emotional drive to eat. We have hunger, but that is just a sensation. And sometimes people do acquire emotional connotations for eating. But aliens could have a feeding emotion, as urgent in its way as fear or love, but as different as these two, and directed towards eating.
Any biological behavior could acquire a corresponding emotional drive. Aliens might even give themselves emotions. Imagine aliens who have emotions helping them to overcome bias. Perhaps they have an emotional abhorrence of disagreement, so that the idea of consummating a bet fills them with horror. They might look at our society with barely restrained disgust and disdain.
A majority of people in first-world countries not on a weight-loss diet or something seldom feel real hunger (as opposed to appetite), and the “I wouldn’t mind a snack right now” feeling (despite having had lunch a couple of hours before) feels much more like an emotion such as missing someone or being worried than like a sensation such as having to pee or feeling cold.
Commenting on the autism thing (as I’ve got an insider’s perspective there): one thing that strongly characterized my experience growing up was being consistently “mis-read” by those around me. While I (and, I’d wager, most others on the autistic spectrum) do have some “standard” reactions to things (like laughing when amused, smiling when happy, etc.), I don’t always emote in visibly standard ways. This led a lot of people, while I was growing up, to believe that I “didn’t care” in situations where I cared deeply, that I had intentions I didn’t have, that I was sad/lonely when in fact I was just neutrally preoccupied with something, etc.
I also tend(ed) to get read as “nervous” a lot because I can be fidgety and have difficulty speaking (or, in some cases, talk a mile a minute simply because I don’t have much vocal modulation) -- and while like everyone I get anxious occasionally, I am probably no more generally anxious than average, and despite being introverted, I am definitely not “shy”.
Anyway, even before I found out I was on the spectrum, I had figured out that I was (what I termed) “differently mapped”—as in, I’d realized that my outward signals didn’t mean the same things that people assumed them to. Earlier, in around fourth grade, I’d determined that I might actually be an alien because of how disconnected I felt from those around me and how often I was called “weirdo”. I soon decided that it was scientifically infeasible for me to actually have come from outer space, but still, in communicating with other autistics, I have been amazed at how common it is for us to wonder as children whether we’re “not entirely human”. There’s even some thought that “changeling” mythology (in which young children are said to have been “replaced” by elves or faerie babies, whose qualities perplex or annoy the parents) is based in early observations of autistics and other atypical children.
Also, regarding the "autistics anthropomorphize less" thing: my experience as a youngster was subjectively similar to what I've seen termed "panpsychism". That is, I didn't really distinguish between "live" and "non-live" things at all, or between humans and nonhuman animals—everything was "potentially alive" as far as I was concerned. I've since learned otherwise (due to learning about brains and nerves and such), and I no longer wonder if objects like pencils and Lego blocks feel pain, but I definitely still feel a kind of "psychological unity" with nonhuman animals, especially cats, as their actions make a lot of sense to me for some reason.
I’ve confirmed that I am not unique in this among the autistic population; several others have described similar experiences (I know one autistic kid who, upon determining that the electronic pokemon plush he’d just gotten at the store didn’t light up the way it was supposed to, decided to keep it anyway because he figured it still needed a home). Which is interesting to me, as the stereotype seems to be that autistics see the whole world (including the people in it) as “dead” and “empty”. My experience was the precise opposite of this; I perceived the whole world as vibrant and suffused with great depth and beauty and complexity, to the point where humans didn’t always stand out as the most interesting thing in my environment (which is probably why I was seen as “oblivious to other people” at times).
Nevertheless, I certainly wouldn’t describe autistics as “actually alien”, as we evolved here on Earth just like everyone else. We’re just a particular variation of human. And I do actually think that despite the lack of a simplistic “psychological unity” that can be fully detected on the basis of outward expressions, there is definitely a deeper psychological unity. Autistic humans and nonautistic humans alike can feel happy, sad, frustrated, angry, appreciative of beauty, disgusted, etc., even if we show these emotions in different ways and in response (sometimes) to different experiences.
One of the great challenges of what I’d call “social progress” is that of figuring out how humans with different cognitive styles can learn to communicate with one another and recognize that different but equally “valid” minds do in fact exist already within the human population. I also think it is probably relevant to AI research to look at how humans who are cognitively different in various ways end up coming to understand one another, because this does happen at least occasionally. I’ve noticed that in relating to other autistics I experience a lot more of what feels like the ability to take accurate “short cuts” to mutual understanding, and it occurred to me a while back that perhaps that “short cut” feeling is what many nonautistic people experience all the time with the majority of those around them.
Anne, feel free not to answer this one: What do you know about neurotypicals that neurotypicals don’t know about themselves?
Wow, that's an interesting one. I don't think I can make a valid general statement about some particular thing that's true of ALL nonautistic people but that none of them know about themselves, so I won't even attempt that.
However, the thing that does come to mind in response to your question (and I don’t know if this counts but I’ll put it forth anyway) is that I do find myself often aware when (nonautistic) people are making certain assumptions about reality that are transparent to them because they happen so automatically, but apparent to me because I don’t make those assumptions.
I’m sure I make other assumptions (as all humans, insofar as I know, use heuristics to some extent), but it’s pretty evident that my heuristic set is somewhat atypical, and judging from the cog-sci stuff I’ve read, some of this could probably relate to a difference in how low-level perceptual information is processed.
E.g., there have been times when people have commented on something I’ve done, “You must have spent a lot of time on that!” or even “Too much effort” (as one teacher wrote on a project I did in high school), when in fact I haven’t necessarily spent a lot of time on said thing, or put in what I’d consider to be heroic amounts of effort. I’ve also had the opposite experience, wherein I’ve tried very hard to do something for a long time, and still not been able to, and gotten numerous comments regarding how I could do it if only I “tried harder” or “relaxed”.
To me, this says that many (mostly non-autistic) people tend toward a particular way of perceiving and processing certain kinds of information, and are hence presuming that certain things are going to be relatively easier or more difficult based on the assumptions their processing style encourages. And it also tells me that in those cases, I am sometimes more aware of how their processing style might be working than they are—that is, what variables they might be ignoring without realizing it.
Hopefully this doesn’t come across as horribly presumptuous—I’m perfectly aware that this can go in the other direction. Where I see there being potential here (as far as helping further an understanding of cognition goes) is in the fact that minds with at least somewhat different basic assumption-sets can sometimes point out these assumption sets across cognitive style gaps, leading to a greater meta-awareness of the kinds of assumptions that tend to get made and what their consequences can be.
I have to agree completely with the contents of this post. I've spent years trying to explain to people how terribly unlikely DNA is to be the genetic material of an alien life form, but with little success. Heck, even carbon isn't essential (although I would expect it to be a common case).
Thank you for writing, Anne. Your comments here, as well as your recent ‘interview’ posts on your blog, have been most interesting.
Intelligence will be visible by the improbable situations it maintains. It’s a special case of the visible signs of life.
I have a dim memory of a short story (By Ursula K. LeGuin?) in which humans come in contact with another life form, but they can’t make any sense of the life form. They can’t communicate with it (them?) and, indeed, aren’t even sure the life form is aware of them. So, in frustration, the humans wipe it out. Does anyone else know this story?
It's the unrecognized bias I find frustrating in science fiction television and movies. For example, in space there's no up or down, yet every single spaceship is in the same orientation as every other spaceship it encounters. I mean, why couldn't the Enterprise's "up" be "down" on a ship that it's communicating with? It wouldn't even be a difficult special effect and could be quite funny. When Kirk would be talking to the Klingons "on screen," the Klingon captain would be "upside down." "Why should I negotiate with you? You don't even know up from down!" I have never seen an acknowledgement of this orientation problem in any sci-fi movie (or book, for that matter).
It adds an interesting complication to the whole transporter beam technology: not only would the beam have to transport all your atoms and metabolic functions, but it would have to put you back together in the proper orientation. Otherwise, you’d end up “standing” on the ceiling and crashing “Up” into the floor.
And I haven’t even gotten to the truly alien: what if “Up” and “Down,” “Ceiling” and “Floor” made no sense to them at all?
--Katherine
Katherine, the up/down thing would just work out for communication. If you turn your TV upside down the picture turns upside down with it. Or if the camera turns upside down your TV picture will turn upside down apart from your TV. It’s all in the signal.
The transporter problem would go the same way, if there’s a downside at the emitter then that information will get sent to the receiver which also has a downside.
Things like space battles usually don't show enough detail to see that they're thinking in 2D. The Star Trek movie The Wrath of Khan did a parody of that, though. Kirk realizes that Khan is thinking in terms of two-dimensional strategies, so when Khan is chasing Kirk's ship, with all the advantages because he's behind, Kirk moves his ship up and Khan is completely confused and goes straight ahead, so that Kirk can then go down and get behind him. It's possible it wasn't meant to be a parody, but it's so stupid I'd rather give them the benefit of the doubt.
I don't remember that LeGuin story, but somebody—LeGuin? Joanna Russ?—did a Clarion exercise where they had some of the writing students pretend to be aliens who communicated with stylized gestures, and the other students were supposed to react to that. The aliens did their thing and nobody could figure any of it out. The aliens started "dying" and they called it all off because it was so totally frustrating.
CJ Cherryh wrote a story where human soldiers were fighting on some dreary planet where the enemy never surrendered but sometimes would commit suicide rather than keep fighting. Their carrion birds acted kind of weird, everything was a little strange, and the humans got very tired of fighting people they didn't understand. The viewpoint character has gotten completely sick of it, and at the story's end, when they've negotiated a sort of peace, he realizes that to actually seal the peace a human soldier has to commit a mutual suicide with one of the local soldiers, and he pulls the grenade pin while the other guy holds it.
People who read science fiction like to get insights from it. But when it’s alien stories, it’s hard to provide the right level of hints. Too many and it’s trite, too few and it never makes sense. Hard to calibrate that for the average reader.
If I remember correctly, LeGuin wrote about a Clarion workshop where she asked her students to write stories with aliens, and all the stories were comic. So she asked them to write stories about dying aliens. This was an introduction to a story about an alien whose culture uses mazes as a basic tool of communication.
Human scientists capture it and put it in a maze. The alien is distressed and bewildered because the maze doesn’t make sense and the humans don’t respond to any of the alien’s efforts at communication. The alien eventually dies, though I don’t remember how much this is of misery and how much that the physical conditions are wrong for it.
C.J. Cherryh (again!) also wrote a series of books (the Foreigner series) about the interaction of humans with a race of reasonably human-like aliens. The basic driver of the first few books is the premise that the aliens were a bit too human-like, and their language fairly easily understood, such that the humans became overconfident of their “understanding” of the alien culture.
J. Thomas, I believe the Cherryh story you mention is the novella “The Scapegoat”.
Cherryh has clearly wrestled with these issues for a while...
Peter Watts’ “Blindsight” is one of the better attempts to describe a truly alien-alien I’ve read recently, and I think he still has it as a free download. Interestingly the human protagonists (and the vampire—don’t worry, it’s not what you’re thinking) are almost alien-alien as well. Although not quite.
This is from the Star Trek wiki, which gives an explanation as to why many of the aliens resemble each other—http://memory-alpha.org/en/wiki/Humanoid
Quote -
Despite the vast distances separating their homeworlds, many humanoid species have been found to share a remarkable commonality in form and genetic coding. These similarities were believed to be evidence of a common ancestry, an ancient humanoid species, who lived in our galaxy’s distant past some four billion years ago.
To preserve their heritage, this species apparently seeded the primordial oceans of many potentially hospitable planets with encoded DNA fragments. The genetic information was incorporated into the earliest lifeforms on those planets and, through preprogrammed mutations caused by a genetic template, directed evolution toward a physical development similar to their own. Because of this controlled mutation mechanism, most habitable planets in the galaxy evolved with many physically similar species (e.g. fish, trees, dogs, insects), and on many of those worlds with at least one sentient species with a humanoid configuration. Most of these humanoids are even interfertile with each other.
It would not be hard to read SF stories/movies as a reflection of US-Soviet relations—how tense the Cold War was. During détente, one gets cuddly aliens like ET. During more tense periods one gets The Thing, or Invasion of the Pod People.
I also wouldn’t read too much into why Star Trek aliens look like people in rubber suits. After all, the “transporter” was done as a cost savings as they could not afford the budget to use the “shuttlecraft” in every episode. While $300k/episode doesn’t get you much these days, back in the 1960s it was a budget buster. And after all, Kirk kissing Uhura was the first interracial kiss on US TV—and it almost got that show cancelled as well.
This post series was seriously undermining my enjoyment of Hellboy 2 last weekend.
We have plenty of aliens available on our planet. We already have citations of animals and termites in these comments. My most recent reading on that was Mary Roach’s Bonk, which enters via the topic of pig orgasms. We have trouble recognizing the emotional states of species not that different from us because we are looking for similar facial expressions; and where we see things that look like human facial expressions, we infer similar emotional states; and this is already assuming that other species have human emotional states.
AnneC, I find the same “effort” thing in dealing with my co-workers, with everyone more or less neurotypical. I do mostly quantitative work, and they are mostly innumerate. That seems to be a big enough distance. Most have no concept of whether a request will take five minutes or a week, or even what constitutes a well-formed request. I assume this is the case for most of us: those outside our specialties have trouble telling which are the hard cases.
I’m probably WAY too late to this thread to be asking this, but what exactly do you mean by “you’ve never antled”?
I’m thinking this may just be some reference that is lost on me.
Presumably some subjective experience that’s as foreign to us as humor is to the alien species in the analogy.
Miguel: it doesn’t seem to be a reference to something, but just a word for some experience an alien might have had that is incomprehensible to us humans, analogous to humour for the alien.
I think Star Trek TNG did a really good job at presenting alien protagonist culture. While most aliens were flanderized, Federation had a rich culture that was quite unlike modern human culture, with post-scarcity economy, Prime Directive and happy exploration as the main goal.
It probably didn’t make for a very good story, as DS9 made the Federation a lot more human. That improved the storytelling, but I still miss the TNG Federation and its alien ways.
On the subject of Star Trek, the Klingon culture in ST:TNG is supposedly inspired by (Western stereotypes of) feudal Japan.
The original Klingons were obviously supposed to be Russians, in ST:TNG and later they seemed to be some kind of absurd combination of Vikings and Samurai (both small warrior minorities in much larger cultures). Though I’d say Star Trek went to that well so many times that they dried up the water table.
Klingons: Samurai. Romulans: Imperial Japan. Cardassians: Fascist Japan. Talaxians: Occupied Japan.
Even the institutional culture of Starfleet could be considered something of a riff on Corporate Japan.
To be fair, Eliezer, if the aliens’ brains were as sloppily put together as ours (and that seems likely), it should be entirely possible for them to develop sexual fetishes by having the part of their brain concerning sexual activity and eroticism get miswired (and they almost certainly will have such emotions if they reproduce sexually, even if these express differently than in humanity; a sexual organism with a sex drive will out-reproduce those without one).
From there, it seems entirely possible for the alien to develop a human fetish the same way it’s possible for a human to develop an alien fetish, even if they do have a truly alien mind architecture.
Alternately, the stereotypical image of the “alien carrying off a helpless young woman” could just as easily be the result of the bias of the reporter who’s taking the image. Which would you rather take a photo of, if you were a reporter on the scene of an alien pirate raid: an alien carrying off a bag of grain, an alien carrying off a toolbox, an alien carrying off some livestock, or an alien carrying off a helpless young woman?
It seems unlikely they would consider the same humans attractive that other humans do, though. Do humans with animal fetishes consider attractive the same animals that other members of the appropriate species do?
I’m given to understand that bestiality is mostly a dominance/submission thing; the human is being degraded by having sex with an animal, and it’s that degradation that’s sexy, not the animal itself.
Things like the tentacle sex fetish among women are likely to be somewhat more accurate to how an actual alien fetish would play out in humans.
It would be amusing to have a story about an alien with a beautiful women fetish, and the other aliens saying, “What’s wrong with you?”
From Mass Effect: I’ve never considered cross-species intercourse / I’m not going to pretend I’ve got a fetish for humans, also, Mordin on the subject.
In general, though, it’s totally on-board with interspecies romance, and most of the aliens are humans in funny suits.
BTW, the body plans of aliens in fiction, including non-sentient ones, seem way less bizarre to me than those of certain terrestrial prehistoric animals such as Opabinia.
I strongly recommend Psychetypes, a book about people’s varied takes on time and space. We’re more alien to each other than I think the vast majority of us notice.
That’s one of the crazy things about racism: it focuses on such an unimportant difference.
If anything, racism focuses on an unimportant but obvious difference that is perceived as tied to non-obvious but important differences. Nothing crazy about that. Surely you’ve heard about the weak correlation between height and IQ?
Also, while I realize that white/black is the prevalent form of racism in America, it certainly isn’t the only one. For example, there’s a lot of racism involved between Koreans, Japanese, and Chinese. Slightly more difficult to find any difference there.
The non-obvious differences can still be ascertained—it is not as if one has to get there via the superficial differences.
It’s a weak correlation. If I wanted to ascertain someone’s intelligence formally, I’d use their academic record, or an IQ test. If I wanted to do so informally, I’d talk to them. If all I had to go on was their height or skin colour, I would suspend judgement, because the information is too weak to be worthwhile. I don’t want false confidence.
But racists do it the other way round. They argue that Bush is smarter than Obama just because he is white (someone actually said that to me). They don’t even bother paying attention to the way the two men speak. They discard higher-quality information in favour of lower.
If it’s not superficial, it’s not racism: it’s xenophobia. The various Balkan factions [hate each other just fine even though outsiders can’t detect the differences](http://en.wikipedia.org/wiki/Serbo-Croatian#Present_sociolinguistic_situation).
Equating racism with discrimination based on skin colour and calling all other forms xenophobia will only make you confused.
Many species are divided into taxonomical subspecies (races) which clearly differ in terms of intelligence, or aggression, or sociality, or whatever. The question isn’t whether races are different, but whether homo sapiens can reasonably be divided into such subspecies. That’s a scientific question, but several of your statements such as “racism is crazy” and “if it’s not superficial, it’s not racism” strike me as naïve and motivated by some weird sense of moral superiority instead.
Then quote the science.
I don’t understand what you want me to quote, or why. Can you be more specific?
“The question isn’t whether races are different, but whether homo sapiens can reasonably be divided into such subspecies. ”
Well, can it?
Current scientific consensus amongst those who seem to be the least driven by ideological agenda is “no”, except of course if you talk about extinct subspecies. Surely you know this?
Yes. I was assuming that since concerns about race are not driven by scientific consensus, they are driven by something else. I was speculating about what, and what the consequences might be for cognitive efficiency. Why did you bring in science, when there is no science to bring in?
You did not “speculate” about anything. You labelled a belief as crazy that seemed to be based on scientific facts, was supported by scientific authorities, and has only very recently fallen into disfavour; demonstrating that you’re at least as ideologically motivated as those who propagated the belief in the first place. If you’d lived about 50 years ago, you’d say the same about the belief that ethnicity has little or no impact on intelligence/aggression/sociality/whatever and that any such perceived differences must be the result of cultural differences, communication problems, and selective evidence.
This might be an interesting read for you.
I argued that it was cognitively inefficient.
Ermm… it’s not based on current science, though. Are you trying to say that the cognitive error of racism is not what I said it was, but rather that it was merely based on outdated science? I do wish you guys would drop the dog whistle and say what you mean.
Whereas it has a slight impact. Maybe. But that gets back to what I was saying originally. Racism treats unimportant factors as important. That’s the bias that’s going on here.
ETA:
And here’s someone (edit: someone ELSE) self-diagnosing what I was saying:
As someone from the southern US, I was asked (jokingly) about whether or not I was a racist when I went north for college. At first I was repulsed by the question, until I noticed that I automatically got more nervous when passing a black person on the street at night. I am going to college in Cleveland, and so I have some actual reason for this since every mugger I’ve seen for five years in incident reports has been black. My problem (though I only started defining it this way within the past few months of reading LW) is that I was weighting race far too strongly in my everyday interactions.
After I realized I was doing this, I decided to switch my threat assessment style to a more clothing-based approach, with some success. Everyday interactions with other races than my native white within the university also felt easier and less forced. Taking an implicit association test helped me to realize that I was racist to some degree despite my intense repulsion to the idea. I now encourage everyone to examine their thought process for racism, especially if they would feel dismay if someone accused them of racism.
I think this is the root of your problem. You didn’t make the above cognitive changes because they were rational, you made them because you didn’t want to be accused of “racism”, i.e., because you wanted to fit in. While this is itself a perfectly understandable and reasonable human motive, it causes problems if one wants to discover what is true.
This seems like an anti-updating sort of argument. It seems analogous to telling someone who has decided to be a nicer person that —
“You didn’t make the above cognitive changes because they achieved the good; you made them because you didn’t want to be accused of ‘being a jerk’, i.e., because you wanted to fit in.”
Quite. It’s perfectly possible for rationality and some other goal to coincide. And asserting that updates don’t count unless they are motivated by a pure desire for rationality for its own sake is setting the bar rather high.
Who is that comment addressed to?
More history (preferably contemporary writings) would probably give you at least a little more reach into the human range.
Margaret Ball’s Flameweaver duology is about a matriarchal magic-using culture which gets drawn into the Great Game between England and Russia. The Victorian(?) British seemed a lot more alien than the magic users.
Not that that is history. The Victorians were capable of expressing themselves in language we can still understand. How alien was Dickens, or his characters?
I suspect there are some pitfalls in treating people as popular as Dickens (or Kipling, etc.) as properly representative of their time, since people that widely read often have a significant role in shaping later culture. A Christmas Carol essentially created the Anglosphere’s modern celebration of Christmas as a family-centered, primarily secular gift-giving holiday, for example.
Conversely, many eras adopted certain conventions to regulate the content of movies (and other media), most of which no longer exist, and that change in production culture adds some inferential distance that wouldn’t necessarily exist in personal culture if communication were possible without the caveats of age, hindsight, and nostalgia. One might develop quite a different view of the late 1950s from listening to the satirical music of Tom Lehrer—or reading back issues of MAD Magazine.
BTW, I seem to recall being surprised by how non-alien characters by Dostoevsky were: it seemed as though differences between that culture and mine were mostly cosmetic or technical. But of course comparing people-novels-were-written-about then with people-novels-are-written-about today isn’t as significant as comparing the median person then with the median person today, and I’d guess the latter comparison would show much greater differences.
I suspect this may be because you read authors whose culture were significantly different from yours as part of your education, whereas Eliezer got his ideas for what an ‘alien’ culture would look like by looking at aliens as written by contemporary writers.
BTW, ‘Blindsight’ by Peter Watts is (not without flaws but) very good with respect to aliens’ otherness. The author is a marine biologist.
I think that the best works of fiction incorporate both Starfish aliens and Rubber-Forehead aliens. One mustn’t discard the possibility that other intelligences might evolve in a fashion analogous to us, but rather incorporate the knowledge that we cannot foresee every possible form thereof.
For logical consistency, if there are both rubber-forehead and starfish aliens, then the starfish aliens should be separable into groups, such that all species in any given group are rubber-forehead aliens relative to each other. Instead of (say) three sets of rubber-forehead aliens and three sets of starfish aliens that are starfish to each other as well, it seems more reasonable to have three sets of rubber-forehead aliens and a number of similar clusters of (approximately) four species each, consisting of remarkably similar starfish aliens. (If they’re starfish enough, then humans might be unable to differentiate between their species, and that’s fine too. They might have just as much difficulty telling humans and Ferengi apart, after all.)
Too specific, I think. Toy example: we have species labelled 1,2,3,4,5; species 1 apart are rubber-foreheads to one another, but species 2+ apart are starfish.
Okay, I see what you’re getting at, and it’s a good point; but as a minor quibble, “starfish aliens” are, to my reading, pretty completely alien, while rubber-foreheads have strong similarity. You could have species 1, 2, 3, 4, 5… with each neighbouring pair being rubber-foreheads relative to each other, and becoming less and less similar as you travel down the line, but given those constraints I don’t think you can have proper starfish until you’re a good distance along that line; say, 10+ spaces. (Starfish and rubber-foreheads are extremes of, respectively, “different” and “similar”—and there are a lot of gradations between those extremes).
Of course, in any realistic lineup, it won’t be a neatly spaced line; number 4 might be missing entirely, and numbers 5 and 6 surprisingly close, and so on.
Yes, the distinction between rubber-foreheads and starfish is a fuzzy one and the ratio between “clearly rubber-foreheads” and “clearly starfish” is probably bigger than 2 for most plausible ways of quantifying the differences. I was only trying to indicate the logical structure of my objection, not trying to make a plausible and quantitatively correct example.
Right. I apologise for over-nitpicking.