Transhumanism and the denotation-connotation gap
A word’s denotation is our conscious definition of it. You can think of this as the set of things in the world with membership in the category defined by that word; or as a set of rules defining such a set. (Logicians call the former the category’s extension into the world.)
A word’s connotation can mean the emotional coloring of the word. AI geeks may think of it as a set of pairs, of other concepts that get activated or inhibited by that word, and the changes to the odds of recalling each of those concepts.
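In that spirit, here is a toy sketch of the two notions side by side. The function name, the weights, and the dictionary representation are all my own invention for illustration, not a claim about how minds actually store concepts:

```python
# Toy model of the denotation/connotation distinction.
# All names and weights here are invented for illustration.

def denotes_human(entity):
    """Denotation: a rule picking out the category's extension."""
    return entity.get("species") == "Homo sapiens"

# Connotation: related concept -> change in activation / recall odds.
# Positive values are activated concepts; negative are inhibited.
connotation_of_human = {
    "humane": +0.8,
    "kindness": +0.6,
    "machine": -0.5,
}

print(denotes_human({"species": "Homo sapiens"}))  # True
```

On this sketch, analytic mode (writing legislation) consults only `denotes_human`, while values mode consults mostly the connotation map.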
When we think analytically about a word—for instance, when writing legislation—we use its denotation. But when we are in values/judgement mode—for instance, when deciding what to legislate about, or when voting—we use its denotation less and its connotation more.
This denotative-connotative gap can cause people to behave less rationally when they become more rational. People who think and act emotionally are at least consistent. Train them to think analytically, and they will choose goals using connotation but pursue them using denotation. That’s like hiring a Russian speaker to manage your affairs because he’s smarter than you, but you have to give him instructions via Google translate. Not always a win.
Consider the word “human”. It has wonderful connotations, to humans. Human nature, humane treatment, the human condition, what it means to be human. Often the connotations are normative rather than descriptive; behaviors we call “inhumane” are done only by humans. The denotation is bare by comparison: Featherless biped. Homo sapiens, as defined by 3 billion base pairs of DNA.
Some objections to transhumanism are genuine objections to what it actually proposes. But some are caused by the denotative-connotative gap. A person’s analytic reasoner says, “What about this transhumanism thing, then?”, and their connotative reasoner replies, “Human good! Ergo, not-human bad! QED.”
I don’t mean that we can get around this by renaming “transhumanism” as “humanism with sprinkles!” This confusion over denotation and connotation happens inside another person’s head, and you can’t control it with labels. If you propose making a germline genetic modification, this will trigger thoughts about the definition of “human” in someone else’s head. When that person asks how they feel about this modification, they take the phrase “not human” chosen for its denotation, go into values mode, access its full connotation, attach the label “bad” to “not human”, and pass the result back to their analytic reasoner to decide what to do about it. Fixing a disease gene can get labelled “bad” because the connotative reasoner makes a judgement about a different concept than the analytic reasoner thinks it did.
I don’t think the solution to the d-c gap is to operate only in denotation mode. Denotation is what 1970s AI programs had. But we can try to be aware of the influence of connotations, and to prefer words that say what we mean over the overused and hence connotation-laden words that first spring to mind. Connotation isn’t a bad thing—it’s part of what makes us vertebrate, after all.
This sounds like the objections to human reproductive cloning. If you can do it safely (i.e. a negligible increase in the risk of birth defects, and so on), then a human clone is essentially just a delayed twin with a different developmental environment. Hardly something to get worked up about. It’s the connotations that get people: there’s this crazy meme that a clone of someone will be an identical copy, and even among people who know better, there are still connotations of “violating nature” or “playing God”. And so people tend to just take it as a given that human reproductive cloning is inherently immoral and should be illegal. Meanwhile, people who are working in denotation mode with regard to cloning have trouble understanding what all the fuss is about. The issue seems obvious to everyone, just not in the same way.
I’d like to think that your notion of the denotation-connotation gap would help people understand their own views on human cloning and have an actual, fruitful discussion—but it would probably just be interpreted by people in connotation-mode as “You’re stupid.” So, here’s my big question:
How do you explain to someone that their opinion on something is based on inaccurate connotations without having this perceived as an insult?
Try the Socratic method? Ask them for their reasons, and find a short path to a logical contradiction.
I like it! In my experience, going Socratic is usually the best way of arguing with people whose beliefs will collapse if you poke at them hard enough. But I’ve been down that road, and I know what comes next: either they rethink their position (hooray!) or they get frustrated at you for what they see as twisting their words. And the Socratic approach only works in conversation; if you’re writing a persuasive article, you can’t really use it.
I really don’t know what to do when that fails. None of the approaches I’ve tried seem very effective.
In conversation, the best way I’ve found to avoid causing frustration with the Socratic method is flattery. Every once in a while say “Ah yes”, or “That’s a valuable way to look at some problems”, or “I see where you’re coming from”. Smile and nod intermittently when they say something less wrong than their other statements. Sometimes, act as if the idea is new to you. Other times, attribute it knowingly to someone ostensibly respectable. It just takes a bit of positive feedback to keep a person speaking his or her mind.
A great way to get an arrogant person to reconsider an idea is to emphasize the priors they have right, have them work out inconsistencies with your simple, crafted questions, and then promote it as their idea, with a little bit of help that you provided.
It’s not just questions instead of statements, it’s a (partial) pretense of humbleness, an appeal to their ego to let you in. That’s where I suspect people usually go wrong.
A friend of mine once remarked, “transhumanism isn’t really transhumanism. It’s transmonkeyism.”
I really like that. To me, it implies that humans are an ongoing evolutionary project that still has undesirable remnants from a monkey past—e.g. immature, violent, stupid—and that deliberately improving/removing the beastly parts might leave us more human.
I like the quote; but I think you’re using ‘human’ in the way I wish people wouldn’t use it—to mean “good” rather than “characteristic of homo sapiens”.
Yes, it accepts the highly dubious frame that human == good. It’s a rhetorical trick that could backfire if it leads someone to favor eliminating PTSD, fragile spines, and status-seeking-at-the-expense-of-truth-seeking, while still rejecting immortality and uploading. I’m not completely comfortable with it, but if it could expand someone’s notion of “human” to be more flexible, then that could be an achievable goal.
This seems precisely backwards to me. Any piece of mere technology is a means, not an end. People and what we want out of life are the ends.
Which is, more-or-less, why I don’t like such terms as “humanism” or “transhumanism”, since they anchor the conversation around the coincidental shape my meat takes rather than around the ends I seek.
Could we think of the connotation of a sentence as related to the Bayesian evidence about the speaker that we get from the fact that he is the sort of person who would say that sentence?
For example, ‘there are differences in ability between races’ has the connotation of normative racism, because normative racists are more likely to utter it.
That would be a good way to implement connotation in an AI. I don’t think we are that accurate. To retrieve sufficient Bayesian evidence for general reasoning regarding B on being reminded of A, supposing you already had P(A), you’d have to retrieve any 2 of P(A|B), P(B|A), and P(B), right? But most current models assume concept activation retrieves one number per related concept (activation level), not two.
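For concreteness, here is the arithmetic such a two-number retrieval would support: with P(A) already in hand, any two of P(A|B), P(B|A), and P(B) determine the third via Bayes’ theorem. The probability values below are arbitrary illustration numbers:

```python
# Bayes' theorem: P(B|A) = P(A|B) * P(B) / P(A).
# Given P(A) plus any two of the remaining three quantities,
# the third follows. All numbers are arbitrary.

p_a = 0.2          # P(A): already known, per the discussion above
p_b = 0.1          # P(B): retrieved
p_a_given_b = 0.5  # P(A|B): retrieved

p_b_given_a = p_a_given_b * p_b / p_a
print(p_b_given_a)  # 0.25
```

A single activation level per related concept supplies only one of these numbers, which is the point: it underdetermines the full Bayesian update.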
You seem to be alluding to the sort of things George Lakoff explores with the idea of http://en.wikipedia.org/wiki/Conceptual_metaphor
Lakoff views political ideology in the US as fundamentally a “family metaphor”, with conservatives espousing a “strict father” model and liberals a “nurturing parent” model. Whether or not that’s the case could use much more research, but the core idea of metaphor as the structure upon which other thought builds is compatible with your appeal to more connotation-aware discourse.
Perhaps a way to address what you’re getting at would be to portray Transhumanism within the family metaphor; children growing up seems particularly fitting to me. What do you think?
An additional benefit of this framing would be to move the locus of attention away from the human-nonhuman issue, as the family metaphor would put connotations firmly within the “human” realm.
If the transhumans are children growing up, who are the parents?
It was a fairly naive suggestion with some potential problems, but here’s one way to unpack it using Eliezer’s concept of Future Shock Levels (http://www.sl4.org/shocklevels.html) as a guide:
The parents are “old-style” humans.
Those with a quite low shock level would adopt the role of the parents. Though they might retain some disdain for new fads and the like, it’s an understood social situation. This reframes what might be construed as unpleasant societal upheaval into a “standard” form of societal upheaval that occurs every generation.
For skeptical individuals who are at a sufficient shock level, the maturing children would be appropriate, redirecting the human vs nonhuman connotation into a cultural phenomenon where change and distinction over time are accepted or at least tolerated.
As mentioned in my comment above, although Lakoff is quite careful that his theoretical underpinnings are solid (falsifiable, etc.), the particular claim that the family metaphor is the dominant one in American political culture is not nearly as well supported.
They have none. Transhumans are feral children growing up.
Logicians also call the former “denotation”, and call the latter “intension” or “connotation”.
Denotation is also called “extension”, with the obvious contrast to “intension”.
Maybe it would be clearer to use “associations” or “associative network” or some such, rather than “connotation”, for the pole opposite to denotation.
My, perhaps flawed, intuition is that “connotation” is what defines a concept from “outside” in that concept’s plane, “intension” is what defines it from “inside” in its plane, and “extension/denotation” is what defines it in another plane. So connotation is the net of independent concepts linked with the discussed concept, intension is the conceptual definition of that concept, and extension/denotation is what it refers to in (somehow conceived) reality.
I don’t understand the first sentence one bit, but agree with the second sentence. The use of “connotation” as a synonym for “intension” is horrible IMHO. Connotation already had an established usage before logicians ever used it; and that usage is both very different from and used in the same very specific area of discourse as this new definition. If definitions were trademarks, this would be a violation.
You’re using ‘new’ in an interesting way here. This usage of ‘connotation’ was arguably first taken up by Mill in 1829. The word ‘connotation’ was first used by logicians in the 17th century, though it tended to mean something more like “the proper category to put something in”, and Mill was explaining he’d like to use the word more sensibly than that. The ‘common usage’ you refer to was possibly a bit earlier, 16th century if I’m to believe OED. And it was not a term of art.
I was evidently using ‘new’ in an uninformed way.
According to Wikipedia. But I’ve never heard “connotation” used that way. I’d use “intension” for a set of rules, if such a set can be found; but most linguists would say that, for most categories, no set of rules can be found that actually defines their intension. IMHO, in practice, the thing described by a set of rules is still more like the extension than the intension. That’s why I didn’t call it the intension.
It’s the usual way of explaining the distinction in intro to logic classes. I’m quite sure that Hurley uses connotation in that sense. Unsurprisingly, Enderton and Tarski do not touch upon the subject since their books are more mathematical/formal.
Can you explain this? You’re not saying that it makes us metaphorically vertebrate (i.e. we have “backbone”), are you?
I liked this post, and agree that it’s important to remember that we can’t deny (even in ourselves) emotional/associative thinking.
I don’t agree that people who never perform denotationally strict reasoning are in any way more correct or consistent than those who choose to (presumably only in high-impact situations, since it’s impossible to think in that mode always and exclusively), except in the surface sense that they’re consistent in never exercising any formal discipline of thought.
I figured it was just a joke playing on the common assertion that something is what “makes us human”, something discussed (and, to a certain extent, deconstructed) in the post.
A joke, and an example of using a more-precise word instead of the most-accessible word. Other mammals don’t speak, but I’d bet they have concepts and contexts activated by those concepts. Invertebrates seem to have less-flexible, less context-sensitive categories; so my guess as to what having connotations distinguishes us from, is invertebrates.
That does seem far more likely :)
I don’t have any evidence for that idea. It’s just a theory I decided to throw out there.
Terminology quibble:
I get where you get this notion of connotation from, but there’s a more formal one that Quine used, which is at least related. It’s the difference between an extension and a meaning. So the extensions of “vertebrate” and “things with tails” could have been identical, but that would not mean that the two predicates have the same meanings. To check if the extensions of two terms are identical, you check the world; it seems like to check whether two meanings are identical, you have to check your own mind.
Edit: Whoops, somebody already mentioned this.
With emphasis on “can”, I can agree with that. But, connotation has a wider meaning than that, all of the non-explicit shadings and fuzzy edges of concepts, not just “emotional coloring”. The problem is you then go on to use “connotative” reasoning as opposed to “analytic” reasoning.