Rationality quotes: August 2010
This is our monthly thread for collecting these little gems and pearls of wisdom, rationality-related quotes you’ve seen recently, or had stored in your quotesfile for ages, and which might be handy to link to in one of our discussions.
Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself.
Do not quote comments/posts on LW/OB.
No more than 5 quotes per person per monthly thread, please.
-- Patrick Nielsen Hayden
This is a good example of how some areas are most concisely dealt with by ridicule.
What are the Serious Philosophical Issues posed by life extension?
I can see many serious practical issues, but in what way should philosophical opinions change due to the mere extension of human life span?
I conjecture that we’re supposed to read it under the classical definition of “philosophy”, which used to include pretty much every type of intellectual discussion, including such practical issues as how to properly raise children, how to organise a political society, etc.
“Who wants to live forever when love must die?”
There is a great TED lecture on this subject. I thought he did a good job of addressing the concerns, at least to the point of arguing that research should continue so that future generations can decide for themselves whether they think it is acceptable.
http://www.ted.com/talks/aubrey_de_grey_says_we_can_avoid_aging.html
I don’t know.
What is a day for?
Is the source online? The first page of Google results seems to be just people quoting the line.
I tracked down the source as rec.arts.sf.fandom, 09 Aug 2000 (previous to any use as a quote), in a thread titled, “Do cats go to heaven?”
It’s (sort of) available online in the Google Groups archive.
However, the provided link does not seem to work due to the huge length of the thread (it's trying to render a threaded tree out of 4189 posts), but I was able to see several references to it inside, including the beginning as a snippet.
Due to the new release of Google Groups, I was able to go through this thread and found the original reference on page 41 of the thread, posted 08 Aug 2000.
Is Rain’s quote the most upvoted entry of all time? It’s currently at +62.
I win!
It’s the most upvoted comment I’ve ever seen. I’m not sure about top-level posts, though.
The current first page of ‘Recent Posts’ shows 7 with a higher point total than 62; two posts are above 100 points, both by Yvain.
It would be interesting to see a compilation of the most upvoted entries from all the quote threads.
DanielVarga created a top quote list, last updated in March.
“Any sufficiently analyzed magic is indistinguishable from SCIENCE!”
~Girl Genius
Young Agatha Clay: But how can they protect me if they aren’t here? That’s illogical.
Uncle Barry: Um...It’s science.
Young Agatha Clay: Ah. You mean you’ll explain when I have a sufficiently advanced educational background.
Yes, but in context this doesn’t quite mean what it sounds like. In Girl Genius, “Science” is almost a password for any weird things built by sparks, rather than what a rationalist would call science.
Well, presumably the sparks have a rational understanding of the things they build, even if the reader doesn’t. Excepting things like the clanks Agatha built when she was asleep, but she’s something of an odd one even among sparks...
Girl Genius has visited that border between magic and science. Note also the first two frames, in which the spark’s rational mind is put to use in pursuit of the spark’s emotional needs. “Look. I’m a girl with needs. Okay?”
Indeed. It’s Science as Attire in more ways than one.
Two more:
But it should have… and Tarvek correctly notices that he is confused. (The reason for his confusion doesn’t become clear until later, though.)
Does this make Brandon Sanderson scifi?
My hotel doesn’t have a 13th floor because of superstition, but people on the 14th floor, you should know what floor you’re really on. If you jump out the window, you will die sooner than you expect.
-- Mitch Hedberg (Quoted from memory)
-- Anonymous
-- Sky Masterson, a character in “Guys and Dolls”
My recollection of this involved “bet you five thousand dollars he can...” and “going to wind up five grand in the hole with an earful of cider,” which makes the quote more entertaining. Nonetheless, an excellent quote.
-C. S. Lewis, Mere Christianity
B… But, but he wants … he … says you, it’s a good idea to … argh!
His failing in one area does not make his quotes untrue, just a bit iffy. That’s why I try not to quote Gandhi or Churchill anymore.
?
Yes, my thought exactly.
This isn’t the first time someone’s posted a quote from Lewis that he didn’t follow.
My reply was me expressing shock that someone like him would have grounds to lecture others about not giving money to questionable charities.
Lewis was a convert to Christianity. The usual self-analytical deficiencies of religious believers are often gigantified in converts, possibly because they adopted their beliefs out of need rather than simple habit and thus will hug them much tighter.
Similar ideas discussed here.
I think there are two slightly different situations at work here. Lewis did grow up in a very Christian environment, and as such possessed the “antibodies” described in the linked post.
His being a convert from atheism—by means of some kind of emotional breakdown—didn’t, thus, make him into an advocate of following every passage of the Bible and every idea ever presented by the Magisterium; rather, it made him a zealous defender of every excuse for why you don’t need to do so in order to be a good Christian.
That’s fair—I didn’t mean to say that it was the exact same case, simply that similar topics had been discussed before. That wasn’t clear from the original comment so I’ll change it to make it a little more so.
Just an observation of how those most motivated to figure out the effectiveness of a charity are those who don’t want to donate. But of course, this only applies to seeking out evidence of non-effectiveness. Charities don’t usually think of other charities as competition. Also, most of the time, most people just rationalize a reason for non-effectiveness.
-- Motoori Norinaga (1730-1801) - quoted from Blocker, Japanese Philosophy, p. 109
Motoori was as far as you can get from being a rationalist but this quote was so Yudkowskian that I felt it belonged here.
-- Freeman Dyson, “Birds and Frogs”
-- Peter Norvig, in an interview about being wrong. When I saw this, I thought it sounded a lot like entropy pruning in decision trees, where you don’t even bother asking questions that won’t make you update your probability estimates significantly. Then I remembered that Norvig was the co-author of the AI textbook that I had learned about decision trees from. Interesting interview.
Wow, I’m glad this kind of analysis is showing up in mainstream publications.
Norvig is describing an important insight from information theory: the amount of information you get from learning something is equal to the log of the inverse of the probability you assigned to it (log 1/p). (This value is called the “surprisal” or “self-information”.)
So, always getting results you expect (i.e. put a high p on), means you’re getting little information out of the experiments, and you should be doing ones where you expect the result to be less probable.
Therefore, to have a good experiment, you want to maximize the “expected surprisal” (i.e. sum over p * log(1/p)), which is equivalent to the entropy, and probably the basis for the method you mention.
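The surprisal and entropy quantities described above can be sketched in a few lines of Python (a hypothetical illustration of the standard information-theory definitions, not code from the interview or textbook):

```python
import math

def surprisal(p):
    """Information (in bits) gained from observing an outcome
    to which you assigned probability p: log2(1/p)."""
    return math.log2(1 / p)

def entropy(dist):
    """Expected surprisal of a distribution: sum over p * log2(1/p)."""
    return sum(p * surprisal(p) for p in dist if p > 0)

# An outcome you were nearly sure of carries almost no information...
print(surprisal(0.99))    # ~0.0145 bits
# ...while a surprising one carries a lot.
print(surprisal(0.01))    # ~6.64 bits

# A maximally uncertain yes/no question has the highest entropy,
# so it is the most informative one to ask:
print(entropy([0.5, 0.5]))    # 1.0 bit
print(entropy([0.99, 0.01]))  # ~0.08 bits -- barely worth asking
```

This is the same quantity a decision-tree learner maximizes when choosing a split: questions whose answers you can already predict have near-zero expected surprisal, so they are pruned away.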
Is LW broken for everyone?
ETA: When I wrote this, the “Comments” page was one of the few I could access, hence it being posted in such a strange place.
No, only for the ones who can’t reply to your comment.
It’s broken for me, at least.
Instead of the old-fashioned way of breaking, with new comments around a certain time not being visible, I can’t get to older comments or posts. I don’t know what the extent of the loss of access is.
The report issues page (link at bottom of page) is read-only. There was going to be a “brief outage” for that page at 7 AM PDT. That would be almost 7 hours ago.
I think the stock of humorous messages at the “you tried a link which isn’t working” has been improved.
The last line should have been
I think the stock of humorous messages at the “you tried a link which isn’t working” page has been improved.
Or to put it another way, the permalink which would normally make it possible for me to edit isn’t working either.
And, though this is minor, if I hit the comment button, it seems as though nothing has happened. However, if I refresh the comments page, my comment shows up.
As a minor mercy, hitting the comment button more than once doesn’t seem to produce duplicate comments.
It is for me.
The tree metaphor reminds me of this...
-- William James
— Futurama: “The Late Philip J. Fry”
-- Musashi, “A Book of Five Rings”
Pretty much the opposite of the foundation of modern education and social organization.
On rationalization, aka the giant sucking cognitive black hole.
-Jonathan Haidt, “The Happiness Hypothesis”
I really like the quote about cod but I’m not particularly inspired by the moral given for the story. I’d prefer “I eliminated a non-terminal ethical principle when I realised my thinking was pretentious bullshit, moving towards a more coherent ethical framework. Yay me!”
I noticed that too; of course not eating fish is an ethical non-issue given how much other low-hanging consequentialist fruit there is.
However, note that his justification for his change of heart is pure rationalization. Whatever good reasons there might be for eating fish, or for abandoning vegetarianism, “they eat each other” is a bad one, a confabulation.
Fish and other animals are not capable of reflecting ethically on their actions, so they are ethically blameless for whatever they do. That does not mean their suffering doesn’t count. Franklin knew that.
I know I’m bringing Drescher up a lot recently, but this exchange reminds me of some of his points, and how, after reading Good and Real, I see Haidt’s work (among other people’s) in a different light.
Drescher’s theory of ethics and decision making is, “You should do what you [self-interestedly] wish all similarly situated beings would do” on the basis that “if you would regard it as the optimal thing to do, then-counterfactually they would too”.
He claims it implies you should cast a wide net in terms of which beings you grant moral status, but not too wide: you draw the line at beings that don’t make choices (in the sense of evaluating alternatives and picking one for the sake of a goal), as that breaks a critical symmetry between you and them.
Taking your premise that fish don’t reflect on their actions, this account would claim that they likewise do not have the moral status of humans. But it would also agree with you that it’s insufficient to point to how they eat each other, because “I would not want some superbeing to eat me simply on the basis that I eat less intelligent beings.”
Also, Drescher accounts for our moral intuitions by saying that they are a case of us being choice machines which recognize acausal means-end links (i.e. relationships between our choices and the achievement of goals that do not require the choice to [futurewardly] cause the goal). This doesn’t necessarily contradict Haidt’s argument that we judge things as right because of e.g. ingroup/outgroup distinctions (he says that functional equivalence to acausal means-ends links is all that matters, even if the agent simply feels that they “care” about others), but it does tend to obviate that kind of supposition. [/show-off]
I’ve got Good and Real on hold at the library. :) Currently working through Cialdini’s Influence, muahaha...
This sounds to me like a modernized version of Kantian deontology… interesting.
Where I really trip up with this argument is in the ‘granting moral status’ step. What does it mean if I decide to say ‘a fish has no moral status?’
Let’s do a reductio. Say fish have no moral status. Does that mean it’s permissible to torture them, say by superstimulating pain centres in their brains? I don’t think so, even if the torture achieved some small useful end.
I don’t think suffering should be taken out of the equation in favour of symmetries. The latter have no obvious moral weight.
I don’t have a good answer for the rest of your comment, but I can answer this:
Drescher does a good job of making sure that nothing depends on choice of terminology. In this case, “a fish has no moral status” cashes out to “I should not count a fish’s disutility/pain/etc. against the optimality of actions I am considering.”
You can take “should” to mean anything under Drescher’s account, and, as long as you’re consistent with its usage, it has non-absurd implications. Under common parlance, you can take “should” to mean “the action that I will choose” or “the action I regard as optimal”. Then, you can see how this sense of the term applies:
“If I would regard it as optimal to kill weaker beings, then-counterfactually beings who are stronger than me would regard it as optimal to kill me, to the extent that their relation to me mirrors my relation to the weaker beings under consideration.”
I didn’t give a full exposition of how exactly you apply such reasoning to fish, but under this account, you would need to look at what is counterfactually entailed by your reasoning to cause pain to fish.
No, that isn’t implied. There are all sorts of coherent value systems which make ethical distinctions between killing things that kill other things and killing things that don’t kill other things. Maybe Franklin was confabulating, but again, that moral does not inspire me. In most cases the reasoning is sound and does move the values a step towards coherency.
There is a difference between dastardly rationalisation and updating your ethical position by eliminating obviously poor thinking.
A lot of people are good at not reflecting ethically too, and it does help them get away with stuff (via more effective signalling). This is not a feature of the universe over which I rejoice and nor is it one that I encourage via my ethical signalling.
His comment on the matter suggests he thought he was.
The context does not record whether he returned to vegetarianism once away from the temptation.
Yes. Hence the lack of inspiration. It’s the same old moral: “Thoughts and ethical intuitions are enemies. Ethical intuitions are good and you should follow them. Thinking your ethics through is bad. Submit to the will of the tribe!”
I say if subjecting your ethical intuitions to rational analysis doesn’t lead you to change them in some way then you are probably doing it wrong.
How subject ethical intuitions should be to rational analysis (in the sense of being changed by them) depends on how much you endorse the fact-value distinction and how fundamental the intuition is.
Reason leads me (though perhaps my reasoning is flawed) to conclude that “others’ abject suffering is bad” isn’t any more justified a desire than “others’ abject suffering is good;” they’re as equivalent as a preference for chocolate or vanilla ice cream. But so what? I don’t abandon my preference for vanilla just because it doesn’t follow from reason. Morality works the same way, except that ideally, I care about it enough to force my preferences on others.
Yes. It is non-terminal ethical intuitions that I expect to be updated. “Should not do X because Y” should be discarded when it becomes obvious that Y is bullshit.
-- Christopher Morley
“A joke told by Warren Buffett comes to mind: a patient, after hearing from a doctor that he has cancer, tells the doctor, “Doc, I don’t have enough money for the surgery, but maybe could I pay you to touch up the x-ray?” Hope and self-deception are not a strategy.”
~ Vitaliy Katsenelson
- Leo Tolstoy, The Kingdom of God is Within You (1894), ch. III
You know, I’m going to remember that one and try to remind myself of it whenever I notice my status is interfering with my thoughts.
This quote seems to undermine the SIAI.
-- Michael Anissimov
I don’t see how. Whatever truth it might hold for a 140/100 IQ gap, it doesn’t hold for arbitrarily smart beings, who could tell a lesser being all the self-modifying it would need to do to reach cognitive parity.
In any case, as those who have seen my posts here know, my warning lights go off whenever someone claims that something “can’t be explained, even given infinite time and space”.
The point being made seems to be the contradiction of a common AI-theorist, futurist position.
That position being that once a computer algorithm with an effective AI IQ of 100 is produced, it can increase its intelligence to arbitrary levels by the addition of extra hard-drive space, RAM, and processing power.
IMO the analogy slightly fails: it includes nothing analogous to the increase in RAM, which is a very important factor, as it allows complex concepts to be dealt with as a whole.
The original quote said, “the most difficult subjects can be explained to the most slow-witted man”. This contradicts in my opinion what Michael Anissimov (Media Director, SIAI) thinks to be the case, namely that “a person with an IQ of 100 cannot understand certain concepts that people with an IQ of 140 can”.
I was just being pedantic here but thought that highlighting this point would be good as other people, like Greg Egan, seem to disagree. This is an important question regarding the dangers posed by AI.
Infinite time and space. That’s a lot of time and space. I suspect my warning lights would go off too. Do people make claims like that often?
Anissimov sure did: “no matter how many time and notebooks they have.”
I’m not sure what your point is exactly, but if it is anything like “That quote by Anissimov is largely mistaken” then you are correct.
It’s an open question. I just don’t know if that is the case and I’m very curious to know more about this. The chimp-human example is very convincing. Further, someone with Down syndrome probably cannot understand what you can comprehend. So where is the gap, if there is one? It looks like there is. This says, yes, I should believe what the SIAI claims. However, the original quote claims that “the most difficult subjects can be explained to the most slow-witted man”. If this were the case, it would hint at the possibility that superhuman AI would merely be superhumanly fast and have superhuman memory and RAM, yet wouldn’t reside on a different conceptual level.
Both Anissimov and Tolstoy appear to me to be engaging in hyperbole. Further, Tolstoy was writing in the end of the nineteenth century—computers didn’t exist, much less the idea of general artificial intelligence.
How do you mean?
While a perfectly valid, and somewhat relevant, point if true, that position does need support.
It seems pretty intuitively obvious—unless the notebooks have special things written on them.
Really?
Give an example of something that you believe the average person, given 200 years to study completely focused on that task, couldn’t possibly understand.
Intuitively obvious but also wrong. IQ primarily makes a difference in how long it takes to master something and which knowledge can be created independently, without the need to load it from a notebook. Both of these factors have been assumed in infinite measure. Obviously IQ will make a difference in how effectively those concepts can be applied.
I make the tentative prediction “There are no concepts that can be discovered by typical people with an IQ of 140 that can not be understood by an average person with IQ of 100, no matter how much time and how many notebooks the latter is given.” I would also be comfortable extending the IQ limit to 200, so long as a clear condition of (neuro)typical is maintained.
On the nature of ethics:
~ Eliezer Yudkowsky c/o Harry Potter, Methods of Rationality chapter 39.
This is too much of a bare assertion to be a good rationality quote.
-- Primo Levi, The Drowned and the Saved (quoted by Damon Linker at The New Republic ; h/t Andrew Sullivan)
Gary Drescher, Good and Real
-- Charles T. Tart
“There are worse things than seeming irresponsible. Losing, for example.”
~ Paul Graham
-- Howard Stanton Levey, The Satanic Bible
(hopefully unnecessary disclaimer: I am in no way a Satanist)
Completely out of curiosity, why do you cite him by his birth name rather than his pen name of Anton Szandor LaVey?
As a token of obligatory disrespect towards the juvenile pomposity of adopting a spoooky name for oneself.
And “NihilCredo” isn’t a spooky adopted name? :-)
Heh, good quip! :) (the difference is obvious, anyway.)
I know this very well. Video games get so much more appealing when I have some work to avoid.
It could also be a selection effect—if you’re doing something even though you’re not supposed to, it must be something you really like.
Similarly, unhealthy food tastes good because substances that are neither healthy nor tasty aren’t classified as food.
Or an evolutionary nudge in the direction of hypocrisy.
More precisely, if the tribe has bothered to prohibit it, it must be something that advantages you at someone else’s (possibly the group’s) expense, and therefore you should be:
a) Encouraged to do it more, while...
b) Exhibiting plausible self-punishment when caught, to maintain group ties.
The former explains the increased reward, the latter the social behavior aspect.
It’s specifically the “I have something unpleasant to do, and I’m not doing it!” that’s exciting. I’ll play even a bad game when I have work to avoid.
— Mark Chu-Carroll, Metaphorical Crankery: a bad metaphor is like a steaming pile of …
— Mark Chu-Carroll, The Danger When You Don’t Know What You Don’t Know : Good Math, Bad Math
-- Isaac Asimov
“Someday I want to be so powerful that I can defeat myself in a single blow.”—Silence, commenting on a character in Prism Ark that says they want to be stronger.
I don’t understand. Anybody who lives near a cliff or owns a gun has that much power. Explain? What is the connection to rationality?
That’s the best summary of rationality I’ve ever seen.
Wouldn’t a good summary of rationality be a little more comprehensible? I’m sure there is a deep insight that Nic sees behind the quote, and from what you say it must be related to rationality, but at face value, without context that most people (myself included) don’t have, it makes no sense.
Wow, I had no idea that this quote would be so controversial. I will attempt to explain.
To begin, I think there is something that can be learned from the quote, but I wouldn’t call it a “deep insight.” Or, at least, I wouldn’t have before uninverted’s reply. Silence, a friend of mine, was making fun of the focus of many anime on literal, physical strength, often detached from anything to protect or, indeed, any real overall goal at all (I don’t remember if this is the case with Prism Ark or not). It’s thus a light warning against focusing too much on method over end results.
Also, if there were two of you, it’d be impossible for you[a] to be decisively stronger and more skilled than you[b].
Uninverted, I’m guessing, was thinking of akrasia and overcoming cognitive biases, which can be considered “part of self” and, at the same time, something to “defeat” as easily as possible. Hence, the quote can also be read as a summary of rationality.
Yes, that’s what I thought exactly.
With a big enough hammer, that goal is within the reach of anybody strong enough to swing it at their own head.
- Incubus, Drive
Wow… I hadn’t noticed that before
I’d quote the whole song, but I think that might violate a copyright or something.
I doubt it, but there are several sites with the full lyrics if anyone wants to see them.
Edit 24 July 2013: Removed link to lyricsbay at their request. – admin
On the subject of noble lies:
Thomas Carlyle—The Latter-Day Pamphlets
I had just recently seen the two Christopher Nolan Batman movies (Batman Begins and The Dark Knight). Here are my favorite quotes; please add any that you like. (Character attribution left off to prevent pre-judgment.) Wikiquote lists.
“It’s not who you are on the inside; it’s what you do that defines you.” (Compare: Functionalism, substrate independence, timeless identity – I know, that’s probably not what was intended, but note how it’s used in other contexts.)
“Criminals thrive on the indulgence of society’s understanding.” (Compare to counterfactual reasoning: if we would sympathize with every defection, and felt that “the past is the past”, those wishing to defect would have no reason not to.)
“And what about escalation? […] We start carrying semi-automatics, they buy automatics. We start wearing Kevlar, they buy armor-piercing rounds. […] And you’re wearing a mask, and jumping off rooftops!”
(Also, I watched the earlier Batman movies after seeing this, and frankly, by comparison, they look like campy garbage.)
Nolan’s Memento is also interesting from a rationalist perspective—it gives “running on untrusted hardware” a quite concrete meaning.
It’s a complicated issue—as nearly as I can tell, the people who argue for no understanding assume that they can just use their intuitions about punishment, and not update about whether they’re getting the effects they want.
I agree. I wasn’t trying to argue in favor of some kind of unlimited punishment, or against all understanding whatsoever—just that this kind of understanding can be misused, especially when you discount the offense for being in the past. (I had recently read Drescher’s account in Good and Real of why the pastward, inalterable aspect of a transgression, and the fact that the punishment only causes things in the future, are no reason not to punish.)
Edit3: And, of course [rot13] vg jnf n onq thl jub fnvq gung va gur zbivr, naq gur guvatf lbh jnea nobhg ner unaqyrq va gur zbivr, nf gung punenpgre’f ernfbavat yrnqf uvz gb qb ubeevoyr guvatf ba gung onfvf, yvxr gel gb qrfgebl na ragver pvgl. Ng gur fnzr gvzr, V guvax ur qbrf unir n cbvag.
-H.P. Lovecraft
Sounds like Caveman Science Fiction to me. “Why should we risk learning about new things, when there’s a possibility they’ll be scary?”
This belongs on the parody site http://morewrong.com. Please build it :-)
I don’t know; the more Less Wrong I read, the more I start to think Lovecraft was on to something.
Delving too far in our search for knowledge is likely to awaken vast godlike forces which are neither benevolent nor malevolent but horrifyingly indifferent to humanity. Some of these forces may be slightly better or worse than others, but all of them could and would swat our civilization away like a mosquito. Such forces may already control other star systems.
The only defense against such abominations is to study the arcane knowledge involved in summoning or banishing these entities; however, such knowledge is likely to cause its students permanent psychological damage or doom them to eternities of torture.
We’ve got Harry Potter and the Methods of Rationality; maybe you should write Cthulhu Mythos and Rationality?
Then again, it might be unwise to disseminate it openly.
At the Mountains of Sanity
I’ve always enjoyed Vernor Vinge’s name for AI: “Applied Theology”.
(In, I think, A Fire upon the Deep.)
“Theological engineering” has a nice ring to it.
I never read Lovecraft as being any kind of metaphor for the real world, so I wouldn’t vote this up as a rationalist quote for that reason.
But I like it as a device used by Lovecraft to try to convey a sheer magnitude of horror. Can you imagine discovering something so horrific you wished you could delete the whole thing from your memory? The more you pride yourself as a rationalist, the more horrific it would have to be.
This seems to be the premise of Isaac Asimov’s “Nightfall”.
-- John Donne
Any man’s death diminishes me, because their agenthood and qualia are probabilistically similar to mine, and it would not have taken many counterfactual changes for me to be not at all unlike that man, compared to a babyeater, whose pain bothers me less. Thus altruism is found in the egoist, to an extent.
Getting from “other people’s minds are probably similar to mine” to “I care about other people’s minds” still requires some implicit premises or some psychological features beyond egoism (e.g. empathy).
Specifically it requires “I care about myself” with saner boundaries around what counts as ‘self’ as the implicit premise.
Added: Any specific reason for the downvote?
Evolution (of genes and/or memes) is probably sufficient to generate this.
That’s a better response to Will’s post than to the Donne quote. Donne only states that all humans influence each other’s existence to some minimal degree, butterfly-effect-style (I would object, incidentally, that such influence may not always be desirable); it’s Will that brings up the similarities between humans as his moral foundation.
Indeed. But my comment was a reply to Will’s.
Mine eyes, they deceive me! Deleted.
Please don’t delete comments. It makes it hard to understand orphaned replies. Adding an [Edit: Withdrawn] at the end of the comment serves the same purpose, but maintains conversational continuity.
What are qualia?
Are you unfamiliar with the term, or are you asking him to demonstrate that he understands it well enough to permit his using it as a quiet assumption in making a point?
WrongBot has been around enough that one can safely assume that his ignorance is Socratic.
I am familiar with the term, but I don’t seem to have any. And no one’s been able to tell me what they do, so I like to ask when they come up so that maybe someday I’ll find out.
I don’t believe in qualia as a real entity, but when people talk about them they’re referring to a genuine phenomenon which you also experience: that your conscious understanding of the experience of perception is only the merest shadow of the perception itself. Seeing red doesn’t mean seeing something with a little XML “red” tag attached, but something much more complicated that happens beyond your conscious introspection. You can imagine the state of having switched that “red” experience with the “green” experience, in all your memories as well as in current perception, and still instantly knowing that the switch had occurred. This phenomenon is not an illusion, just a blind spot of conscious knowledge which happens to confuse the hell out of naive philosophers.
Thank you, well said! I’ve seen people go so far in dissolving qualia that they think they have to deny their own conscious experience, or think the confusion is extinguished as soon as you have the terminology nailed down.
Of course. If I had perfect knowledge of my brain’s functioning, now that would be a very strange thing indeed.
No, I can’t. If all my memories had been altered to agree with my newly-altered perception system, what difference would I detect? How would I detect it? Different from what?
The hypothetical situation I mean is one where your current retina is reprogrammed to switch red and green stimuli, and your memories are edited so that you don’t figure it out from inconsistencies, but everything else is left the same.
The fact that there’s subconscious cognitive content to red vs. green can be deduced from things like instinctive reactions† to the sight of blood: the brain doesn’t check the color against the memory of other blood; it reacts faster than that to the perception. The emotional valence of colors would seem off somehow after a switch, because those don’t appear to operate fully through memory, either. Snap judgments of people’s attractiveness would backfire as your subconscious applied the rule “green tint means sickly” to someone with a healthy complexion.
I don’t think you’d be able to consciously articulate what exactly seemed “red” about that green grass, but parts of your mind would be telling you that something’s gone wrong, because they’re hooked up not just to labels “red” and “green” but to full systems of processing that would be running on suddenly different stimuli.
†Similarly, chimps raised by humans in captivity will still freak out when exposed to a fake snake, because certain patterns have been encoded deep within. There’s no reason for such patterns to be raised to the level of conscious knowledge.
Ahhh, so you’d only be reprogramming part of my brain. Well, of course I’d run into problems then. All that means is that there are more parts of my brain than those I have conscious access to, which seems pretty obvious to me even before I start to think about what I know of neurology.
I think we agree with each other.
I wouldn’t be so sure; the visual system has an amazing ability to adapt to rewiring. Through gene therapy, monkeys were able to see a color that their species had never seen before.
Indeed there’s rewiring over time, but it wouldn’t be instant and it wouldn’t be total, so the point stands.
That’s a really interesting experiment—can you find me a link?
http://www.wired.com/wiredscience/2009/09/colortherapy/
Thanks!
Most philosophical definitions are pretty weird. On the rare occasions I use “qualia”, it means the inside view of your sense perceptions. What things look/sound/feel like to the person doing the sensing.
The subjective way we experience things.
What do you mean by “experience”? And “subjective”? I’m not sure what you’re talking about.
By experience I mean anything which we detect with one of our senses.
The subjective part is, IMHO, the key to qualia.
Suppose that you’ve never seen red light, and that you are then told all of its properties in perfect detail. You would still gain new information by actually seeing red light, because you still don’t know “what it feels like” to see it. The qualia are not the objective facts, but rather what seeing the light “feels like”: your perception of the effect produced on your brain by the light.
(Qualia are usually taken to be an argument against materialism, because after you know every objective fact about something, you still gain new information (qualia) by experiencing it.)
This “Mary’s Room” argument, like the “Chinese Room” argument†, contains a subtle sleight of hand.
On the one hand, for the learning to be about just the qualia rather than about externally observable features of vision processing, the subject would need to learn immensely more than the physical properties of red light. (The standard version of Mary’s Room does so, postulating Mary to also deeply understand her own visual cortex and the changes it would undergo upon being exposed to that color.) In fact, the depth of conscious theoretical understanding that this would require is far beyond any human being, and it’s wrong and silly to naively map our mind-states onto those of such a mind.
On the other hand, it plays on the everyday intuition that if I’ve never seen the color red, but have been given a short list of facts about it and am consciously representing my limited intuition for that set of facts, that doesn’t add up to the experience of seeing red.
The equivocation consists of thinking that a superhuman level of detailed understanding of (and capability to predict) the human brain can be analogized to that everyday intuition, rather than being unimaginably other to it. So I don’t see that an agent who really possessed that level of self-understanding would necessarily feel that the actual experience added an ineffable otherness to what they already knew.
That sense of ineffable otherness, IMO, comes from the levels of detail in the mental processing of color which we don’t have conscious access to. Our conscious mind isn’t built to understand what we’re doing when we visually perceive, at the level that we actually do it—there’s no evolutionary need to communicate all the richness of color perception, so the conscious mind didn’t evolve to encompass it all. And this limitation of our conscious understanding feels to us like a thing we have which cannot in principle be reduced.
† The application of this same principle to the Chinese Room argument is a trivial exercise, left to the reader.
Intuitions don’t matter. If Mary can’t activate the neural pathways that participate in creating the experience of seeing red, then she has no means of knowing how she will experience redness. All the models she can create in her mind will be external to her, just as the mind created by the actions of the human in the Chinese Room is external to that human.
It is not only conscious understanding that is required; we would also need conscious control of individual neurons and synapses to be able to experience a quale given just a description of it. For example, to be able to name a color and imagine a color given its name, Mary (roughly speaking) would have to manually connect neurons in her visual cortex to neurons in her Broca’s area and in her auditory cortex.
So I think that, contrary to Dennett, Mary will gain new information when she sees colors, since the construction of the human brain doesn’t allow that information to be acquired by other means. Thus, in a sense, human qualia cannot be reduced.
You may be interested in this paper which makes a similar argument.
Thanks. It is an identical argument, modulo my inability to make all the reasoning and premises sufficiently transparent.
That contradicts one of the assumptions in the thought experiment. You’re establishing qualia as a physical property; in that case, “what it feels like to see red” is amongst the things Mary knows about, by hypothesis.
Also, if it just comes down to activating those neurons, then Mary knows that too and can perform an experiment to activate those neurons without having a ‘red thing’ in front of her, using her incredible superhuman intelligence and resources.
I am not establishing qualia as physical properties of the brain’s activity; I think of them as descriptions of specific neural activity in the terms of a human’s self-model. And the limitations of that self-model (it’s not sufficiently detailed to refer to individual neurons) don’t allow it to establish an unambiguous correspondence between the physical description of the brain and the self-model’s description of the brain.
And what is the difference between seeing a red thing and activating those neurons? The point of “Mary’s Room” is to know what seeing red means without actually seeing it.
Depends who’s using it. For Dennett, for instance, the point of Mary’s room is to point out how ridiculous this notion of qualia is, or at least how silly the thought experiment is.
As stated, she knows everything physical about red. So she knows, for instance, how to build a machine that will activate her red-seeing neurons in the absence of the color. Also as stated, she can perform whatever experiments she needs to in order to become an expert color scientist. So she can have whatever experience would come from having those neurons activated.
If you think there’s nothing else to the experience, then I think we’re in agreement so far.
So we have no intuition for that understanding level’s qualia? ;-)
Yeah, I realized the unintended recursion there, and have edited accordingly...
Well, of course a verbal description of red light is different from seeing red light. One is an auditory stimulus, and one is a visual stimulus. They do different things to my neurons. Are qualia about something other than neurons?
I voted up WrongBot’s redefinition of his question, ‘are qualia about something other than neurons?’. Are qualia anything other than a word that has been awkwardly defined? Why is colour always the example used to illustrate qualia? Is there something different between colour and things like position, texture, and pitch? Has anyone thought of a better, or even different, way to experience our experiences?
Readers may be interested in my approach to the problem, or rather, the problem that remains even after any terminology issues are settled.
Summary: What we identify as “qualia” is the encoding of memories that we cannot yet compare directly between people, to the extent we can’t compare them. This incommensurability can easily arise among agents who are similar, but who self-modify in a way that does not place any priority on the ability to directly transfer memories to other agents.
In that case, their methods of storing memories are ad-hoc, and look like garbage to each other—but with the right assumptions and interaction, they can achieve a limited ability to compare, and thereby have terminology like “red” that means something to all agents, even as it doesn’t call up exactly the same idea for each one.
I was intrigued when I first read this when you last posted it, and I thought about it for a while. The problem with it, it seems to me, is that this is a good explanation for why qualia are ineffable, but it doesn’t seem to come any closer to explaining what they are or how they arise.
So, I could imagine a world (it may even be this one!) where people’s brains happen to be organized similarly enough that two people really could transfer qualia between them, but this still doesn’t explain anything about them.
You’re right. But I believe that the ineffable aspect is closely related to the other two questions, although I don’t have an answer in the same detail as for the ineffability question (which would still be progress!).
To give a sketch of what I have in mind, my best explanation is this: conscious minds form when a subsystem is able to screen itself off from the entropizing forces of the environment (similar in kind to a refrigerator or other control system). This necessarily decouples it from the patterns that exist in the environment, as well as other minds that have done the same.
So the formation of a conscious mind will coincide with the formation of incompatible encoding methods, unless special care is taken to ensure that the encoding protocols are the same. Therefore, we shouldn’t be surprised to notice that, “hey, everything that’s conscious, also has ineffable experiences with the other conscious things.”
But again, I don’t claim this part is as well-developed or thought-out.
What is it that you feel/see/touch/taste/think/etc., instead of simply acting? Why is there a “you” that you experience, instead of mere rote action? We apply this label to the sorts of things we use to distinguish between empty existence and our own subjective (personally observed/felt) experience: the thing about humans that distinguishes them from p-zombies.
Why do you group together sense perceptions (which I have) with thoughts (which I have), and call them qualia (which I don’t have)?
How are these different?
How can existence be “empty”? Is subjective experience just sense perception? Because sense perception doesn’t seem like it warrants all this mysteriousness.
That’s odd. I thought the sequence on P-zombies made it pretty clear that they don’t exist. Why do we need to be distinguished from confused, impossible thought experiments?
Perhaps you simply do not have qualia or subjective experience. Some people do not have visual mental imagery, strange though that may seem to those of us who do. Similarly, maybe some people do not have anything they are moved to describe as subjective experience. Such people, if they exist, are the opposite of the logically absurd p-zombies. P-zombies falsely claim that they do have these things; people without them truthfully claim that they do not.
You might just be Socratically role-playing, but even so, there may be other people who actually do not have these things. That is, they would express puzzlement at talk about “the redness of red”, “awareness of one’s own self”, and so forth (and without having been tutored into such puzzlement by philosophers arguing that they cannot be experiencing what in fact they do experience).
Is there anyone here who does experience that puzzlement, even before knowing anything of the philosophical controversy around the subject?
There is this example:
I see that as a cheap way out. I think “do I have free will?” is just a confused question whose answer depends on the way you unconfuse it. I’m just in the minority of humans who refuse to answer that confused question—I’d like to say I refuse to answer all confused questions, but that’s probably not true.
Still, it is possible that confusion and disagreement about “qualia” and “free will” are just due to differences in personal experience, not to different interpretation of those labels.
People who lack visual mental imagery have atypical performance on certain kinds of cognitive tests, as Yvain’s article describes, and if I believed that such people existed, I would expect that testable difference. What type of test should I expect to distinguish between those who have qualia and those who do not?
The most direct test would be this:
“Do you have qualia?”
Yes
No
But you’d have to use naive subjects who haven’t philosophised themselves into ignoring their own experience.
A little more indirectly, people without qualia would profess puzzlement at the very idea, and argue that there is no such thing. If they are philosophers, they will write articles on the incoherence of the concept. If they are psychologists, they will practice psychology on the basis that mental phenomena do not exist. If they are teachers, they will see the brain as a pot to be filled, not the mind as a fire to be ignited. Those who do have qualia will be as tenacious on the other side.
Nothing that those who do have qualia say about qualia will make sense to those who don’t, and those who don’t will have no difficulty in demonstrating that it is nonsense. Those who do have qualia will be unable to explain them even to each other, since they know no more about what they are than they know about how thought happens. All of their supposed explanations will only be disguised descriptions of what it feels like to have them.
Looks pretty much like our world, doesn’t it?
How is that direct? First you’d have to explain what you mean by that, and “understanding” such an explanation would pretty much require convincing oneself that there are such things to be had in the first place.
There are some things you can test by asking; I can imagine asking someone, “do you ever get a twisty kind of feeling in your stomach or nearby, when you’ve just had something very bad happen to you and it slipped your mind for a while but then it intrudes again on your awareness—and the twisty feeling comes precisely at that moment”.
That’s a feeling. It’s describable. I have it sometimes. It’s an empirical matter whether other people recognize an experience of theirs in that description, or not. It’s much like pointing to a red thing and asking people “is this red”, and then they confirm that it’s red to them.
How are “qualia” different?
Failing to understand would amount to a “No”.
It does. If psychologists came out with a study saying that 1 out of 10 people don’t experience qualia, I would feel rather certain that I was one of the 1 in 10 who don’t. Just like WrongBot, I think. However, my actual expectation is that we are all the same at that level of brain organization, and I wonder what aspect of my experience people are labeling ‘qualia’.
Above, Orthonormal wrote,
Actually, this is exactly what I hypothesized qualia were: little reference tags of meaning that we attach to things we recognize.
When using an entirely new medium, I feel like I experience the creation of new qualia. For example, here on Less Wrong, each comment has a username. After some experience on Less Wrong, it feels like the username is different from (and more than) a set of green underlined letters in bold font at the upper right hand corner that tells you the person who wrote the comment—it’s like a separate object that means the source of the comment, and as soon as it has that extra meaning, it gains this elusive quale-like aspect.
I have an opposite hunch: that the further removed any part of our internal constitution is from the world outside our skins, the more we vary.
My reason is that there are many ways of doing the right thing to survive and reproduce. The genome isn’t big enough to contain a blueprint for a whole brain, so evolution has come up with a general mechanism (which no-one actually knows anything about yet) for the whole thing to organise itself when the newborn is dropped into an unknown environment. The organisation an individual brain ends up with is constrained by nothing more than the requirement to make the organism function in that environment.
Look around you at the variation in people’s personalities. They’re even more different inside their heads than that.
“The greatest obstacle to discovering the shape of the earth, the continents and the ocean was not ignorance but the illusion of knowledge.”—Daniel Boorstin
If I understand it, I can build it.
Richard Feynman’s quote, contraposed (not A implies not B == B implies A).
Dualized.
-- Judith Martin (“Miss Manners”)
See also: The Simple Truth.
-Anthony de Jasay
A drawing instead of a quote.
(This one is also interesting. I didn’t spot much else worth sharing in the rest of the “comic”, however.)
″...you must consider what you are, seeking to know yourself, which is the most difficult task conceivable. From self knowledge you will learn not to puff yourself up, like the frog who wanted to be as big as an ox.”—Don Quijote
That’s interesting, considering who the character is. I googled up some context, and Quixote is in one of his lucid phases here, where he speaks with wisdom on everything except himself.
Yeah, it’s an ironic quote. For what it’s worth, I also see some shades of GEB in this.
-- C.S. Peirce
“They never do tests. Not many real deeds either. Oh, conversation with your grandmother’s shade in a darkened room, the odd love potion or two, but comes a doubter, why, then it’s the wrong day, the planets are not in line, the entrails are not favorable, we don’t do tests!” -Tyrian, Dragonslayer
— Sam Harris, Edge: THE NEW SCIENCE OF MORALITY
Edit: It was of course Sam Harris who said this.
-Alive by The Land Canaan
The song is about creation of the first FAI, and has tons of amazing lyrics. I couldn’t find the full lyrics online, and I’m too lazy to transcribe them, so you’ll have to listen for yourselves: http://www.archive.org/details/OnLeaving
I also recommend Wild Child which is about a not so friendly AI.
Transcribed:
On the courageous nature of atheism:
-Jacob Grimm, Teutonic Mythology
There ain’t no such thing as government interference.
--Robert Anton Wilson
-- D’Arcy Thompson, On Growth and Form (1917)
This quote confuses me. At first I read it as a restatement of the famous Lord Kelvin quote on the topic—if you don’t have numbers, your understanding “is meager and unsatisfactory.” Hooray for that as far as it goes. But the second half seems to suggest, reversing another famous quote, that it is better to be precisely wrong than vaguely correct.
I favour D’Arcy Thompson’s view. If you are precisely wrong, it will be easy for evidence to refute you and make you less wrong. But if you are already vaguely right, how will you attain to being precisely right? How will you discover that you are actually vaguely wrong, if your wiggle room lets you explain away contrary evidence? As Francis Bacon wrote, “Truth arises more readily from error than from confusion.”
Being vaguely right is only better when you need to decide an action now and have no opportunity to improve your knowledge. But being precisely right is better still. “Weak Bayesian evidence” is worth as much as a penny lying in the road: if you need to pick up a penny, you need a lot more than a penny.
But in fact, that is not the sort of vague rightness that the quote you linked to is about. Here is its context:
Carveth Read, Logic: Deductive and Inductive, p.351.
Popular thought, poetry, eloquence, manners, fine art, literature, politics, religion and moral philosophy. Is our “vague rightness” in such matters anything more than an illusion, a subjective sense of “meaningfulness” when we utter our words that has as much backing as a sub-prime mortgage?
ETA: That last sentence of the extended quote is rather good, and deserves to be quoted on its own. ETA again: except the final phrase: such people are up against the limits, not of its possibility, but of their own capacities.
This seems to imply that we should delegate decision-making to a system that is certain the sky is rgb(0,255,0) over a system that assigns the bulk of its probability to various shades of blue. But, if we know that the sky really is some shade of blue, the system with the less precise prior that the sky is blue will do better than the system that precisely thinks it’s (bright lime!) green as new evidence becomes available.
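The asymmetry in that example can be made concrete with a toy Bayesian update (an illustration only; the hypothesis names and probabilities below are invented for this sketch): an agent that puts all of its prior mass on one precisely wrong answer can never recover, while an agent that vaguely spreads its mass over roughly-right hypotheses sharpens toward the truth as evidence arrives.

```python
# Toy sketch: a "precisely wrong" prior vs. a "vaguely right" prior
# under repeated Bayesian updating. All numbers here are made up.

def update(prior, likelihood, observation):
    """One step of Bayes' rule over a discrete hypothesis space."""
    posterior = {h: prior[h] * likelihood[h][observation] for h in prior}
    total = sum(posterior.values())
    if total == 0:
        return prior  # no hypothesis explains the data; the agent is stuck
    return {h: p / total for h, p in posterior.items()}

# P(observation | hypothesis), where the true sky color is 'azure'.
likelihood = {
    'lime':  {'bluish': 0.01, 'greenish': 0.99},
    'azure': {'bluish': 0.95, 'greenish': 0.05},
    'navy':  {'bluish': 0.80, 'greenish': 0.20},
}

precise_wrong = {'lime': 1.0, 'azure': 0.0, 'navy': 0.0}
vaguely_right = {'lime': 0.0, 'azure': 0.5, 'navy': 0.5}

for _ in range(5):  # five observations of a bluish sky
    precise_wrong = update(precise_wrong, likelihood, 'bluish')
    vaguely_right = update(vaguely_right, likelihood, 'bluish')

print(precise_wrong['lime'])   # still 1.0: zero prior mass on the truth
print(vaguely_right['azure'])  # ~0.70 after five updates, and still rising
```

The dogmatic agent is stuck because Bayes’ rule can only reweight hypotheses that already carry nonzero prior probability; vagueness about *which* blue, by contrast, washes out as evidence accumulates.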
I can’t imagine that this is actually what’s meant by the original quote or your reply. What is D’Arcy Thompson’s view?
Here’s some context for D’Arcy Thompson:
[ETA: Here, p.122, is the context for the reference to Herschel.]
And he goes on to rebuke the life sciences for having been slow to follow the same course. He suspends judgement on whether the mysteries of the mind and consciousness can be solved by physical science, “But of the construction and growth and working of the body, as of all else that is of the earth earthy, physical science is, in my humble opinion, our only teacher and guide.”
I think that is a perverse reading of Carveth Read’s maxim.
-- H.P. Lovecraft
duplication
-- Piergiorgio Odifreddi (parodying the Nicene Creed)
Um...those of you who rushed to downvote, may I suggest reading a bit more slowly, maybe even clicking on the links?
I think you may have hastily mistaken atheist wit for tree-hugging postmodernism.
Pardon the bluntness, but I noticed the “parodying the Nicene Creed” bit, saw it as a failed attempt to be clever by adapting religious recitations to a naturalist end, was unimpressed (not because of some general principle against such adaptation), and voted it down anyway, without seeing it as tree-hugging.
Very well; deleted accordingly.
(Pardon me, however, if I’m a bit skeptical about there not being a “general principle” involved here—Odifreddi’s “credo” may not be all that witty—and perhaps that’s partly my fault, since the above was my translation—but it would still deserve a place in this thread, were it not for aversions to “imitating” religion, and to reverential or emotional-sounding language in general. After all, since when do rationality quotes have to be witty or clever?)
Okay, looking back, I think that’s a fair point. I meant something more like: Modifying religious hymns/creeds to express a rationalist view looks cliche (or a more apt term I can’t think of) to me, so I hold them to a higher standard. And anyone can do a word swapout. So, in a sense, there is a general principle involved that I believe cuts against that kind of quote.
Rationality quotes do have to be witty or clever in the sense that they have to do a bit more than just state a rationalist tenet. It wouldn’t be appropriate to post a quote as simple as, “You should update your beliefs on evidence.”
I didn’t like the quote overall, but that’s the part that I took exception to. Death is the enemy.
I think that line means the opposite of how you interpreted it. I read “I await the dissolution of death” not as “I await the dissolution that is death” but as “I await the point when the threat of death is dissolved”.
Edit: What komponisto said.
I don’t quite see how the subsequent clause would make sense under that reading.
If that “but” were an “and”, I would agree with you.
What about interpreting it like this:
Yeah, this may just be a parse error on my part. Apologies for the noise.
“But” makes perfect sense to me: “I, too, hope to triumph over death, but not in the way that religious people do.”
Well, if death is the enemy, all the more reason to await its dissolution!
(Seriously, that’s how I parsed it the first time I read it.)
-- Blaise Pascal
-- Citizen of Earth, in a Slashdot post