Rationality Quotes: February 2011
Take off every ‘quote’! You know what you doing. For great insight. Move ‘quote’.
And if you don’t:
Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself.
Do not quote comments/posts from LW. (If you want to exclude OB too create your own quotes thread! OB is entertaining and insightful and all but it is no rationality blog!)
No more than 5 quotes per person per monthly thread, please.
Doris Lessing, “Mara and Dann”
A long one:
-- “Alain” (Émile Chartier) The Gods. A meditation on childhood.
I thought the punchline was going to be that the men were cats.
Nah, definitely dogs. They’re the undisputed masters of manipulating humans in the animal kingdom.
Excepting other humans.
I thought it was a Marxist parable, or something of the sort...an allegorical critique of capitalism, supervalue, the elite exploiting the masses.
I must be in a bad mood because of the Cathie Black situation in NYC...where the “giants” are the democratic masses, who protested against the natural orators of our government…
Last night was a “change of humor that would come over the giants”… a “brusque refusal”...but in the end the middle/lower classes “seemed nevertheless to be charged with nourishing them and housing them and transporting them, and who eventually carried out their duties, provided they were prayed to” (the “praying” being only the making of promises, “I stand for the middle class”, “we’ll create jobs for you”, “think of the children!!11!!1!”).
The masses do, at times, crush the endeavors of the orators (more than one reference to Egypt was made last night)...but for the most part the giant masses do what they are told, as long as they hear the right things, and have a cookie or a coo tossed to them now and then.
I freely admit taking too much liberty with all of that...but it really is what I was thinking about as I read it.
I guess I’m far too literal-minded. The whole time I simply assumed the giants were a normal God parable. I was rather non-plussed about the whole quote until I saw “A meditation on childhood” and then my head exploded. I don’t even remember being a kid anymore.
I saw it coming before I read the line that explicitly mentioned childhood.
On the next page in the book, the author mentions, “I decided to go through with the fiction of the giants, although the reader will have seen by the third line where I was leading him.”
Personally, I didn’t see it coming when I first read it. My first reaction was pretty much the same as Eneasz’.
Me too.
This was wasted as a point about ‘gods’. The commentary on human social instincts irrespective of belief in literal gods was far more insightful.
Ok, so it seems almost everyone got a different idea of who the giants and the men were. Children and adults, pets and humans, humans and gods, governments and populations (in both directions!), humans and computers...
My first impulse upon seeing this is that it must be a very general phenomenon that occurs in a great spectrum of situations: that all these different situations are isomorphic to one another. The next is that we should come up with a generalized theory of the concept and maybe coin a word to access it more quickly.
I didn’t know where it was going at all until I hit the words “instead they became natural orators.” It was at that point that I thought of my 17-month-old daughter. Thank you for a very timely message.
Hah; I read through that entire thing expecting the punchline to be that the giants were computers.
Maybe one day they will be.
Or we will be, or they’ll make paperclips of us all.
For most of the time I spent reading this quote, I thought the men were celebrities or demagogues and the giants were the populace.
--Evil Overlord List #230
--Paul Graham
An interesting concept...but I wonder. I bet at least some people would actually notice that. They’d see unrest in the middle east and say “hmm...oil prices didn’t change the way I expected them to” or something. Sometimes you see things like ” index rises in spite of ”.
I think Graham’s inference has merit: these people don’t really know what’s happening...but I think at least some people would notice the anomaly.
Well now I want to test this. Do we have anyone here who thinks they know a thing or two about the stock market? If so would they be amenable to an experiment?
I’m thinking that they would agree not to look at any stock price information for a day (viewing all the other news they want). At the end of the day they are presented with some possible sets of market closes, all but one of which are fake, and we see if they can reliably find the right one.
Finding the most probable market outcome given a few possibilities and a day’s news is easier than noticing by yourself that the news and the market don’t fit.
I will participate if you’d like to try; there are some problems with the experiment, though.
I’m still interested, what changes would you suggest?
Sorry for the slow reply. Want to do this over email? I’m gbasin at gmail
I’m benelliott3 at gmail. To be honest I’m not very familiar with the stock-market so if you could suggest a procedure for the experiment, including such things as where to get the information that would be appreciated.
Care to precommit to a discussion post about the experiment regardless of the result?
Well, when Steve Ballmer announced he was going to quit Microsoft, Microsoft’s stock jumped quite a bit, clearly because Ballmer quit, even though one could perhaps explain either a rise or a fall with Ballmer quitting. The expected square of the change was big from Ballmer quitting, that’s for sure. Same goes for any dramatic news, such as the recent gas attack in Syria.
And yes, over time one could tell that something is up if the stock market graph is uneventful while there’s dramatic news.
Bottom line is, a causal link can exist and be inferred even when there is no correlation.
-- George Orwell, 1984
--The French Revolution: a history, by Thomas Carlyle; as quoted by Mencius Moldbug
There is no greater joy than riding the words of Thomas Carlyle.
He may not always be correct (although his point above is a blow of hard-hitting truth as great as any ever written) but his phrasing, his metaphors, his analogy, are all magnificent.
But … but … what about bankruptcies induced by a liquidity crunch—the kind the political elite’s propagandists have been telling me entitles a “too big to fail” company to receive perpetual government assistance?
In those cases, bankruptcy wouldn’t suck up falsehoods, would it?
No. But I think you* are guilty of affirming the consequent. If something is false, then it will end in bankruptcy—but that does not logically imply that everything ending in bankruptcy was false. So something true could still end in bankruptcy (for whatever reason, like a liquidity crunch).
* Or Carlyle, I suppose, but given the choice between accusing a famous thinker of an elementary fallacy and a quick off-the-cuff Internet comment, I’d rather accuse the latter.
If your business is structured such that a liquidity crunch will drive you bankrupt then some restructuring might be in order.
Now one can (and I certainly would be willing to) make the argument that it is almost impossible for a small to medium sized business to structure itself to survive a liquidity crunch exacerbated by an inept political class and a rapacious bureaucracy (or a rapacious political class and an inept bureaucracy, whatever).
In that case it’s best to take your ball and go home leaving the enlightened revolutionaries with the society they voted for.
Eh, that’s what I thought too, but the very idea looks to be beyond the pale. They tell me that we needed to make huge loans on terms no one else could get to prop up some large banks, and it’s “only” to provide “liquidity”. But my thought is: I find it extremely unsettling to be in an economy where such a huge fraction of it is based on business plans this brittle; and the sooner and more spectacularly they die off, the better a foundation future growth will be built on.
But current mainstream thinking doesn’t even allow such a thought.
I also saw people present, as “evidence” of a liquidity crunch, the fact that overnight lending rates spiked from 4% to 6% annualized. Considering that these are the annualized rates for loans with a life of a few weeks at most, this is a trivial increase in borrowing costs. A business so fragile that it can’t withstand paying a few extra pennies for ultra-cheap loans every once in a while … well, any economy dependent on such brittle business plans is living on borrowed time anyway.
Well, I don’t know. The sort of gun you had before modern precision machining, 4.2 would be good enough, maybe 4.3 at a pinch.
--Donald Knuth (see also Amdahl’s law)
A premature really powerful Optimization Process is the root of all future evil.
“The first rule of code optimization: Don’t.”
I never thought of this quote outside the context of programming before reading it here, but it does seem pretty generally applicable. The force behind premature optimization is the force that causes me to spend so much time comparison shopping that the time lost eventually outvalues the price difference; or to fail to give money to charity at all because there may be a better charity to give it to. (I’ve recently started donating the dollar to Vague Good Cause at stores and restaurants when asked, because it’s all well and good to say “SIAI is better,” but that defense only works if I then actually give the dollar to SIAI.)
-old Warner & Swasey ad
Just saw on reddit a perfect accidental metaphor: jakeredfield posted this in r/gaming:
What makes it even more perfect is this reply by Aleitheo:
KanadianLogik adds:
It’s possible that if there were several copies of Chell, some of them did.
Unfortunately, I think I saw somebody else play that section correctly before I played it myself. Still, if I had died, I would’ve come back at the last time I saved. That would’ve clued me in that I was supposed to survive, and I probably would’ve figured it out in one or two more tries tops.
I am going to shamelessly and totally steal this example for when talking about anti-deathism to anyone.
Seriously, thank you so much.
You’re welcome. No need really to thank me. After all, I shamelessly stole it too. It was just too perfect. :)
I just had to comment on this, it’s too perfect. Thanks.
You’re welcome. :)
-- Common German folk saying
Translates as “If the rooster crows on the manure pile, the weather will change or stay as it is.” In other words, P(W|R) = P(W) when W is uncorrelated with R.
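The identity can be checked with a toy calculation (the probabilities here are made up for illustration):

```python
# Hypothetical probabilities; the point is only the independence identity.
p_w = 0.5  # P(W): the weather changes
p_r = 0.3  # P(R): the rooster crows on the manure pile

# If W and R are independent, the joint probability factorizes:
p_w_and_r = p_w * p_r

# Conditional probability by definition: P(W|R) = P(W and R) / P(R)
p_w_given_r = p_w_and_r / p_r

print(p_w_given_r == p_w)  # True: hearing the rooster tells us nothing
```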
Another good one:
“If it’s bright and clear on New Year’s Eve, the next day will be New Year’s.”
I’ll chip in with this Russian saying:
“It is better to be rich and healthy than to be poor and sick!”
Woody Allen had a take on it too:
Als het regent in mei, is april al voorbij. (If it rains in May, April is already past)
-- Frederick Giesecke, et al, Technical Drawing, 8th ed
On similar lines:
Ancient Latin saying.
-- Today’s Dinosaur Comic
--Nicolás Gómez Dávila, Escolios a un Texto Implícito: Selección, p. 430
Can you give some examples of when that has happened? I’m having trouble thinking of any. The widespread use of computers seems to have been a great success, on the whole.
Who said it was about computers per se? I didn’t.
I was personally thinking more of electricity and radiation. Electric belts! Electroshock therapy! Electric toothbrushes! The mind as a power grid! Radium salt supplements! Radioactive watches! Well actually we still use tritium for that. (Or to take a more recent example, microfilm. “Let’s put everything on microfilm and shred all the original newspapers and books! What could possibly go wrong?”)
We may be a little too close to the computer to see the silliest and most grotesque solutions it has provided, although The Daily WTF may be a good start.
I know you didn’t mention computers; it was just the first example that came to mind. It seemed like if the quote would apply to anything, it would apply to computers most of all, but it didn’t. But good points about electricity and about computers being too recent.
The IoT (internet of things) comes to mind. Why not experience WiFi connectivity issues while trying to use the washing machine?
Everything trying to become a subscription service is another example (possibly related to IoT). My favourite is a motorcycle lifesaving airbag vest which won’t activate during a crash if the user misses a monthly payment. The company is called Klim, and in fairness, the user can check whether the airbag is ready for use before getting on their bike.
As they say in Discworld, we are trying to unravel the Mighty Infinite using a language which was designed to tell one another where the fresh fruit was.
-- Terry Pratchett
“Language is a drum on which we beat out tunes for bears to dance to, when all the while we wish to move the stars to pity.” -- Flaubert
Francis Bacon
I shall have to quote this a good deal more when dealing with people who chide me for not mentioning all the possible objections that philosophers consider to still be in play.
It doesn’t help that undergraduate philosophy has rather a lot of enumerating the history of philosophical arguments regardless of quality.
Well, sexual selection chose wit as the target for our intelligence, not discernment of the truth of matters of Far concern. Anybody can figure out the truth of the Near, where is the impressiveness in that? Nobody can verify Far claims, so we don’t know who should impress us.
The following reminded me of Arguments as Soldiers:
I’m sorry to have not found his blog sooner.
Weiner has a blog? My life is even more complete.
Things are only impossible until they’re not.
-- Jean-Luc Picard
Sometimes not even then.
Except when they really are.
No, then too.
Unless...?
--Georges Rey, “Meta-atheism: Religious Avowal as Self-deception” (2009)
(First version seen on http://www.strangedoctrines.com/2008/09/risky-philosophy.html but quote from an expanded paper.)
It’s true that the question of God’s existence is epistemologically fairly trivial and doesn’t require its own category of justifications, and it’s also true that even many atheists don’t seem to notice this. But even with that in mind, it almost never actually helps in convincing people to become atheists (most theists won’t respond to a crash course in Bayesian epistemology and algorithmic information theory, but they sometimes respond to careful refutation of the real reasons they believe in God), which is probably why this point is often forgotten by people who spend a lot of time arguing for atheism.
Choosing good priors isn’t something that’s epistemologically fairly trivial.
Using the majority opinion of the human race as a prior is a general strategy that you can defend rationally.
Use it as a prior all you want; but then you have to update on the (rest of the) evidence.
It’s really epistemologically difficult to find out what people mean by God in the first case; how then can it be epistemologically trivial to judge the merits of such a hypothesis?
Difficult to pin down within a range of trivial-to-judge positions.
With, possibly, vanishingly rare exceptions.
If a given hypothesis is incoherent even to its strongest proponents, then it’s not very meritorious. It’s in “not even wrong” territory.
I strongly suspect that there is a lot of coherence among many different spiritualists’ and theologians’ conception of God, and I strongly suspect that most atheists have no idea what kind of God the more enlightened spiritualists are talking about, and are instead constructing a straw God made up of secondhand half-remembered Bible passages. In general I think LW is embarrassingly bad at steel-manning.
Coherence isn’t a necessary factor for a good theory. In artificial intelligence it’s sometimes preferable to allow incoherence in exchange for higher robustness.
Could you expand?
It’s Georges Rey. I know because I sat through an entire class that he taught once. I think I also read his book, Contemporary philosophy of mind: a contentiously classical approach, during that time, but I don’t recall learning anything from it.
Can someone who has actually read the paper (I don’t feel like it) tell me whether it has the same upshot as the earlier version I seem to remember, viz. that people only pretend to believe in God? (It’s possible I’ve got this mixed up with something else.)
Thanks.
It does; as I said, it’s an expanded paper.
I… what… is this some kind of atheistic affective death spiral? How could this possibly be construed as a reasonable analogy, even rhetorically? And with such a smug tone? Why are we tolerating blatantly misleading dark arts that appeal to the inductive biases of our epistemological reference class?
What is unreasonable about the analogy? All three are claims about apparently unfalsifiable super-natural entities with no normal epistemological support, and many arguments for God would seem to work as well for other such entities. (As Anselm’s contemporary pointed out, his ontological argument served as well to prove the existence of perfect demons or islands or fairies.)
If you disagree, a read of the paper might be in order so you don’t have to resort to accusations of the Dark Arts.
Bayesians don’t care about unfalsifiability. ‘Supernatural’ can only be constructed relative to a limited ontology (things that aren’t made up of subatomic particles are supernatural, say; variants on an algorithmic ontology have room for something like a God) and is thus a dangerous and slippery word. And the hypothesis that there exists something important called a God has ridiculous amounts of epistemological support, even if there is lots of evidence against such a hypothesis as well.
Bayesians care a lot about unfalsifiability: a theory can only gain probability mass by assigning low probabilities to some outcomes (if you don’t believe me, go read Eliezer’s technical explanation of technical explanation).
“Anything not made from subatomic particles” is a poor definition of the supernatural, since it leaves us irrationally prejudiced against the idea that subatomic particles could be made out of something else, which is a perfectly reasonable hypothesis (currently one with no evidence for it, but we still shouldn’t be prejudiced against it).
Try “ontologically fundamental mental states” for a better definition of supernatural, and a much better one, since there is very good reason to assign a low prior to such claims (there is a huge number of imaginable ontologically fundamental things which are simpler than mental states, so by Occam’s razor and the principle of limited probability mass, any hypothesis that claims they exist gets a very low prior).
Hypothesis: Most if not all of this epistemological support of which you speak is bad philosophy, possibly based on the mind projection fallacy, which could just as easily have been constructed to defend witches or ghosts if someone had had enough reason to do so.
To be more precise (and more correct) we should say that it can gain probability mass, but only when more precise hypotheses are falsified.
If I think a coin is either fair or biased toward heads, and then it comes up tails three times, it’s probably fair.
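A quick Bayesian update makes the coin example concrete (the bias value is an arbitrary assumption for illustration):

```python
# Hypotheses: fair coin (P(tails) = 0.5) vs. a heads-biased coin.
# Assume the biased coin gives tails with probability 0.2; priors are equal.
prior_fair, prior_biased = 0.5, 0.5
lik_fair = 0.5 ** 3    # P(three tails | fair)
lik_biased = 0.2 ** 3  # P(three tails | biased toward heads)

evidence = prior_fair * lik_fair + prior_biased * lik_biased
post_fair = prior_fair * lik_fair / evidence

print(round(post_fair, 2))  # 0.94: three tails strongly favor "fair"
```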
Nitpicking; I meant that falsifiability-in-practice-as-such-the-way-most-people-use-the-word is not a necessary precondition for determining which hypotheses to pay attention to. Apparently unfalsifiable hypotheses (which are nonetheless probably actually falsifiable with enough computing power) like the existence of a creator God are thus fair game for Bayesians, and pointing out their apparent unfalsifiability isn’t scoring a point for the atheists.
Right, but you have to use a poor ontology in order to get a concept that even looks like supernatural in the first place… this is an argument against using the word supernatural at all. God is just not supernatural if you are using the right ontology. I don’t know what an ontologically fundamental state would look like (when I think of people who believe in the supernatural that does not seem to describe their beliefs at all), and I don’t see how that conception is at all relevant to gods, witches, or ghosts. We can follow that digression, as I’m really curious as to what people are trying to explain when they talk about supernaturalism as belief in ontologically fundamental mental states, but it doesn’t seem relevant to the OP.
Most? Yes, of course yes. To a first approximation, everyone everywhere always has always been wrong about everything, including all of atheism and science. But all? Not even close.
Here’s a basic argument for a somewhat vague Creator God: the universe exists. Things that exist tend to have causes. Powerful things like superintelligences or transcendent uploads are good at causing things. This universe might have been caused by one of those really powerful things.
That we feel better when we call those powerful things ‘superintelligences’ instead of ‘gods’ just says something about our choice of ontology, not about the righteousness of our epistemology.
Here’s a basic knockdown: “the universe” is not a thing in the way that requires a cause. It’s a category of things, so if you must assign it a cause, it is caused by the existence of things (and maybe a desire to refer to everything). By way of demonstration, if you listed every physical thing that makes up the universe, and I found some physical things that existed but were not on your list, would you say “there are things outside the universe” or would you add those things to the list?
(That is, your argument needs to point to things that are likely to be caused by superintelligences / transcendences. I would point to all the things we know of so far as being very unlikely to have been caused by superintelligences / transcendences, and claim that the rest of the universe probably shares that same property.)
Wrong. A consequence of Bayes’ Theorem is that if two theories A and B fit the data equally well, but A fits hypothetical alternative data better than B does (in other words, B is more falsifiable) then A must assign a lower conditional probability to the actual data than B, by conservation of probability. This means that regardless of where the priors start out, if we keep accumulating evidence without falsifying either the probability of A must eventually become vanishingly small, too small for any reasonable person to even spare the time to consider the hypothesis.
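A toy version of that argument, with invented numbers: let B sharply predict one outcome with probability 0.9, while the less falsifiable A spreads 0.1 over each of ten outcomes. Repeatedly observing B’s favored outcome, without ever falsifying A, still drives A’s posterior toward zero:

```python
# A is vague (0.1 on each of ten outcomes); B is sharp (0.9 on outcome X).
# Illustrative numbers only.
p_a, p_b = 0.5, 0.5
lik_a, lik_b = 0.1, 0.9  # likelihood each theory assigns to the observed X

for _ in range(10):  # observe X ten times; neither theory is falsified
    evidence = p_a * lik_a + p_b * lik_b
    p_a, p_b = p_a * lik_a / evidence, p_b * lik_b / evidence

print(p_a < 1e-8)  # True: the vaguer theory's posterior collapses anyway
```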
Believing in ontologically fundamental mental states means that you believe that the actual territory, as opposed to a map, contains minds. This can seem reasonable, but the reasonableness is an illusion caused by the fact that our monkey brains are pretty good at thinking about other monkey brains and pretty bad at thinking about much simpler things, such as maths.
God falls into this category as normally postulated, since he is usually assumed to be fundamental and is usually assigned mental states as well as exhibiting the complex behaviour typical of minds. Ghosts fall into it since for a person’s mind to survive the destruction of the physical entity it was contained in/supervenient upon it must have its own ontologically fundamental properties.
Here’s the corresponding argument for ghosts: death is an event. People’s consciousness tends to continue existing through most events, so it probably continues existing through death even though it has never been observed to do so (the same way no universe has been observed to have a cause). Therefore minds must continue existing after death, and we might as well call them ghosts.
Motivated cognition at its worst.
No, it shows that we are cautious that the connotations of our statements don’t say anything that we don’t mean.
Atheism is also unfalsifiable in practice, though, so I don’t see the relevance. And I think the positive evidence points somewhat towards theism, not atheism. Thus I find theism more likely.
Thanks for the explanation.
God is normally considered to be outside the universe; why do you say he is usually assumed to be fundamental? Also note that there are many conceptions of God, some of which actually are something like fundamental even if isomorphic to a more detailed description (with less errant connotations) of the structure of the ensemble universe.
And this is where I start thinking you’re crazy, for thinking this is even close to a corresponding argument. What similarities do you see? Every single thing we have ever seen has a cause. We have seen the universe. We postulate a cause, by simple induction. We have seen people’s consciousness fade in and out as they go to sleep or fall into comas. We postulate that it is thus probable that death is like an endless coma. It is hard for me to fathom how you could possibly have seen your argument for ghosts as being at all in the same reference class, except that ‘ghosts’ and ‘gods’ are both contemptible hypotheses ’round these parts.
Like, seriously, your analogy on the meta level seems to me like motivated cognition at its worst.
This is only because we already have a word for ‘superintelligence’. Most people don’t. My point was that we shouldn’t be automatically contemptuous of concepts that are really damn similar to the ones we’re already postulating just because they’re labeled in the language of the enemy.
If God rides down from heaven hurling lightning bolts in all directions and wantonly altering the very nature of reality, I will consider atheism to be falsified. This is an extreme case, but there are many observations that could falsify, or at least provide very strong evidence against, atheism.
Don’t confuse unfalsified with unfalsifiable.
Either God can be reduced to something else or he is fundamental. No conception of God that I have ever heard of can be reduced (I’m not sure how he could create the universe if he was reducible), so it seems likely he is usually assumed fundamental.
I’m afraid I can’t understand what you mean here.
The universe itself has no observed cause, so this statement is false. It seems likely that there is at least one uncaused thing, since otherwise you have an infinite regress, and the universe seems like as good as bet as any for what that thing is, since it has no observed cause and it belongs to a very different reference class to everything else.
Ah, but we have never observed consciousness to be ended permanently except by death.
You may challenge that this is not evidence since it is true by definition, but if you think about it the fact that the universe has no observed cause is also true by definition, since if it did have an observed cause we would just have included that thing in ‘the universe’ and then asked what caused it.
It’s not about ‘God’ being the language of the enemy; it’s about the fact that it has been used by too many people to mean too many things, and it has reached the point where even to use it is to imply many of those things. If someone wants to talk about the cause of the universe they should call it ‘Flumsy’, since that way nobody gets confused.
Think about it this way. If you are working on an algebra problem and you have some complicated term in your equation that you want to define so you don’t have to write out the whole thing every line, you might decide to call it ‘x’. This is a perfectly legitimate step (it is a primitive operation in most mathematical formal systems) unless there is already a term called ‘x’ elsewhere in your equation, in which case it is making the unjustified and quite probably false assumption that these two things are equal.
Calling this cause you postulate ‘God’ is the same kind of mistake.
I would be curious to know if you are putting this forward as an hypothetical argument for the sake of the discussion, or as an actual summarised argument that you really do find at least somewhat persuasive.
I do find something quite like it somewhat persuasive.
A. P. Dawid
???
(Terry Pratchett, I think.)
I saw a creepy hospice volunteer recruitment ad on the street a few days ago. It said something along the lines of “They will be grateful to you for the rest of their lives.” Like an inappropriate joke.
That’s… disturbing, but also weirdly compelling.
I think it’s more elegant to say it like this: “Light a man a fire and he’ll be warm for a day. Light a man afire and he’ll be warm for the rest of his life.”
In text, yes. I said it aloud a few times and I couldn’t tell the two apart easily. Maybe “light a man A fire / light a man ON fire”
I’ve successfully delivered “a fire”/”afire” aloud, but it’s a little tricky to time right.
A little gesturing will likely help a lot.
I find my formulation slightly quicker to parse, but otherwise you’re right.
From Jingo, IIRC. Also I think the second line began “But set fire to him...”
IIRC, he uses this joke several times.
Ah, nevermind then.
Unknown source.
But I’m not sure what any of the variants of this have to do with rationality.
Well, a lot of instrumental-rationality posts around here are basically about the benefits of devoting effort in the short term to developing techniques for making a class of task easier, rather than devoting effort in the short term to implementing an instance of that class with more difficulty. Also, efficient charity is a recurring theme. Whether either of those things have much to do with rationality is a broader question, but they certainly seem relevant to the quote.
Give a man a fish, feed him for a day
Teach a man to fish, feed him for around 15 years until his major fishery collapses into unprofitability.
That looks like a description of one problem with support of developing countries.
Paul Graham
-John Wayne, Sands of Iwo Jima (1949)
-- Douglas Hofstadter
Insanity will prevail when sane men do nothing? (Apologies to Edmund Burke)
I think this adaptation is much more precise than the original.
Not when apathy and insanity are correlated. See, e.g., The Myth of the Rational Voter
paulwl (quoted here)
ETA: I thought this had the smell of Usenet about it, and on Google Groups I found the original, written by one Alex Clark here. paulwl is actually the person he was replying to.
BTW, there’s quite a bit of rationality (and irrationality) on that newsgroup on the subject of people looking for relationships (mostly men looking for women), from way back when. I don’t know if 1996 predates the sort of PUA that has been talked about on LW.
“But can people in desperate poverty be considered to be making free choices? Many say no. So, is the choice between starving and selling one’s kidney really a choice? Yes; an easy one. One of the options is awful. To forbid organ selling is to take away the better choice. If we choose to provide an even better option to the person that would be great – but it is no solution to the problem of poverty to take away what choices the poor do have absent outside help.”
Katja Grace, on Metaeuphoric, Dying for a Donation
-Karl Popper
I don’t like this quote. It is amusing but not very rational. It is not rational to ignore arguments because they were made by an awful person, and actively refusing to discuss a set of ideas is irrational even if one thinks those ideas aren’t worth considering. The first part of the quote is marginally defensible if Popper is very sure that Heidegger’s ideas are a waste of time. The second part, about refusing to talk to people who defend Heidegger, makes about as much sense as a religion telling its adherents not to listen to some specific critic.
(That said, while I’m by no means an expert on this matter, my general opinion is that Heidegger is a waste of time.)
In academic philosophy there is a tendency to refer to “Heidegger’s arguments and positions” as simply “Heidegger”. (This is true of all philosophers, not just Heidegger). Popper, of course, would have been familiar with this; when I read that quote I got the distinct impression of “Heidegger’s arguments are hollow and his positions are indefensible; please can we agree on this and stop discussing them?”
Relevant old LW post: Tolerate tolerance.
Is his philosophy rubbish (even relative to other philosophy) or is it just a problem with him being a Nazi?
I think both. But mostly I like this quote because it’s hilarious.
That it is. :D
Heidegger’s theme from beginning to end was “Being”. Why is there something rather than nothing, and what is existence anyway? In practice, it was the second question that dominated his life. He started out in phenomenology, so he was initially interested in being as appearance. We get this idea of existence from somewhere, but where exactly? How does it emerge from appearance?

Another theme was the forgetting of Being in favor of beings. The modern mind, with its busyness and technological power, is usually engaged in interaction with one particular thing or another particular thing, and loses sight of the fact of existence as such. This theme led him to a historical examination of the concept of Being in different ages. A distinction between existence and essence—thatness and whatness—develops in Greek philosophy, and persists through the centuries despite many transformations, such as the emphasis on subjectivity and consciousness which characterizes the epistemology-dominated era since Descartes.

By the end of his life, Heidegger considered that technology and especially “cybernetics” (computer science and information technology) were the start of a whole new epoch in humanity’s relationship to Being; initially one in which the obliviousness to Being itself would persist—the metaphysical oblivion created by the focus on essence having been joined by a daily sensibility which was all about action rather than thought—but also a circumstance in which there could be a “second beginning”, in which Being might be encountered anew.
So Heidegger deserves his place in the history of philosophy, and he’s not obsolete yet, even if so much about him and his work belongs to a vanished culture and politics.
If I recall he convinced his son to become a computer scientist on these grounds.
I’m not sure what Popper’s motivation for saying that was, but I’ve read a bit of Heidegger and I felt the same way afterward.
I once told a university friend of mine, who was majoring in modern philosophy, that Heidegger was the most empty and nonsensical philosopher I had encountered in high school. He blamed this on translation difficulties and my Marxist teacher, and offered to guide me through a selected reading of Sein und Zeit; an offer on which I took him up.
We called it quits (in a friendly manner) after five evenings of heated arguing over whether it was even intellectually permissible to use half of the words Heidegger was using, and I left with the judgment that Heidegger was raping the German language.
I don’t know about raping the German language, but your friend is right in that a) Heidegger, more than maybe any other philosopher ever, is harder to understand in translation and b) a Marxist might have a lot of trouble explaining Heidegger.
He definitely is not an author one should take on by oneself, and I definitely can’t explain much of anything he’s said. I do lean toward the position that he said meaningful, even important things, but that’s totally based on people whose rationality and intelligence I trust regarding other philosophy telling me so. His obscurity is definitely the cause of a ton of bad philosophy.
Here’s another Popper quote on Heidegger. No points for guessing how Popper took this (as is clear from the surrounding context):
--”The Unknown Xenophanes”, The World of Parmenides, Karl Popper
--Arthur Guiterman
Will_Newsome pointed out the caveat that it’s only good to admit errors when actually in error. I’d add a second caveat, which is that most of the benefit from admitting an error is in the lessons learnt by retracing steps and finding where they went wrong. Each error has a specific cause—a doubt not investigated, a piece of evidence given too much or too little weight, or a bias triggered. I try to make myself stronger by identifying those causes, concretely envisioning what I should have done differently, and thinking of the reference classes where the same mistake might happen in the future.
The wording actually given in this quote avoids the problems discussed by Will_Newsome and jimrandomh: admitting error clears the score, resets it to zero. If you were wrong, this wipes out your negative score, for a net win; if you were right, it wipes out your positive score, setting you back.
I think you meant to say right instead of wrong in this bit.
Fixed.
True, but clearly unintentional.
(Unless you weren’t in error. Once you start awarding yourself internal karma for admitting that you were wrong, it becomes much easier to do so even when you weren’t actually wrong. Of course, this is sidestepped with empiricism.)
After finishing dinner, Sidney Morgenbesser decides to order dessert. The waitress tells him he has two choices: apple pie and blueberry pie. Sidney orders the apple pie. After a few minutes the waitress returns and says that they also have cherry pie at which point Morgenbesser says “In that case I’ll have the blueberry pie.”
http://en.wikipedia.org/wiki/Independence_of_irrelevant_alternatives
http://en.wikipedia.org/wiki/Sidney_Morgenbesser
Sidney chooses pies on the basis of popularity. Apple pie is more popular than blueberry pie. Apple pie is so popular that pie eaters have grown sick of it. They quickly gorge on the new cherry pie. When the fad dies down, they are still sick of apple pie and begin a blueberry revival. Sidney correctly predicts that blueberry will be more popular.
His preferences in that scenario do not violate independence of irrelevant alternatives (that might be your point; I’m not sure). This is meant as an intuition pump to show the absurdity of violating IIA, not a watertight argument that the observed behaviour does in fact violate it.
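The intuition pump above can be made concrete. Here is a minimal, illustrative sketch (the function names and menus are my own, not from the thread) of a choice function that violates independence of irrelevant alternatives in exactly the way the Morgenbesser joke describes:

```python
# Independence of irrelevant alternatives (IIA): if you choose apple
# from {apple, blueberry}, adding cherry to the menu should never
# flip your choice to blueberry. This chooser violates that.

def morgenbesser_choose(menu):
    """Illustrative (irrational) chooser that violates IIA."""
    menu = set(menu)
    if menu == {"apple", "blueberry"}:
        return "apple"
    if menu == {"apple", "blueberry", "cherry"}:
        return "blueberry"  # cherry's mere availability flips the choice
    return sorted(menu)[0]

def violates_iia(choose, small_menu, extra):
    """True if adding `extra` shifts the choice to a different option
    that was already available in the smaller menu."""
    before = choose(small_menu)
    after = choose(small_menu | {extra})
    return after != extra and after != before

print(violates_iia(morgenbesser_choose, {"apple", "blueberry"}, "cherry"))  # → True
```

Note that the popularity-prediction reading in the parent comment escapes this check: there, the cherry pie’s arrival genuinely changes Sidney’s information about the options, so his preferences over outcomes never reverse.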
-Confucius
-Chinese proverb
Sometimes they only unlock the deadbolt, and you need a friend to help push open the door. Sometimes the door is on the top of a cliff, and you need to climb up the rope of Wikipedia to get there. And so on. A lot of people who are having trouble learning something are having trouble realizing what resources they have available.
It’s a bizarre feature of university life that it is very difficult to get students to take opportunities for help, even when they are obviously and explicitly provided.
And the reasons those students don’t take opportunities for help tend to be embarrassingly pathetic. Like, so embarrassing that they avoid even thinking about it, because if they made their real reason explicit, they would be pained at how dumb it is. (I’ve done this sort of thing myself, more times than I’m comfortable with.)
For example, I discovered that a significant fraction of the students in a certain class were afraid to ask questions of the professor because they found him scary. Now, I know the professor in question, and he’s a friendly person who wishes that his students would talk to him more—but he has an abrupt, somewhat awkward way of speaking, and an eastern European accent. Such superficial details are apparently what leaves the biggest impression on most people.
Or there are the guys who get depressed and stop coming to class for a week or two, and then keep on not coming to class because they haven’t been to class for a while, and it would be hard trying to get back up to speed. I really sympathize with these guys, but that doesn’t make their reasoning any saner. (A fair number of them come in at the end of a semester to flunk their final exams. Damn it all, this is painful to watch.)
Or there are the people who won’t read textbooks, or Wikipedia, or whatever, because they feel like everything ought to be covered in class well enough that they can just show up every day and get a good grade. I cannot think of any good pedagogical reason why this should be so, and indeed, it usually isn’t.
I could go on. There are plenty more examples. But instead I think I’ll just paraphrase the not-actually-evil professor from eastern Europe. “These kids,” he said. “They aren’t resourceful because they have never had to be resourceful. They need more adversity in life. When I was their age, I had to bribe a local official just to get a dorm room.”
My experience of students here at [prominent UK university] is that they are very unwilling to ask for help because they have never needed to do so before, and so consider asking for help as a sign of weakness/low intelligence/low status.
This makes a certain amount of sense: the people who have been able to meet entry requirements are likely in the top percentile of their subject and have been the best, or nearly so, at their school. Generally this has been the result of either natural ability or brute-force work (memorising equations and examples etc.) rather than acting strategically and gaining study skills such as the ability to find new sources of information or ask for help. So they either despair at the seeming impossibility of their tasks, or spend increasingly large amounts of time brute-forcing the work and burn out.
It takes a lot for people to understand that needing help doesn’t mean you are stupid, but that the work is hard and it’s supposed to be hard.
I’ve often found that this is so. I do try to read my textbooks, at least the assigned readings, because...well, because you’re supposed to, I guess. But for most of my first year classes (three anatomy courses, psych 101, microbiology) just going to class was enough. (I did of course take detailed notes, with colourful diagrams, and then study from my notes afterwards. I have now bequeathed my anatomy notes to a friend a couple of grades younger.) One possible reason why this is true for me is that I like biology-related subjects, and I’ve always read anything I could get my hands on, and so I arrived in university to find that I already knew at least 50% of the material.
Areas where this isn’t true: English classes, history classes, etc, where there are a lot of required readings that cover material not covered in class, and where there are essays or papers to be written on material that isn’t covered in class. And of course there’s no rule that you can get good grades without reading textbooks. It just happens to be true sometimes, for some people.
That’s not “adversity”, that’s “solvable problems requiring initiative”.
-Mark Twain
-Charles Babbage
Dupe
Upvoted. I didn’t know it was already posted, I’ve read quite a few of these quote threads but never commented before or noticed that one.
What’s the protocol for this? Should I delete the post?
I think you just accept quietly your downvotes or lack of upvotes, and remember to search next time.
(Also, Clippy—nice try.)
I did search but rather lazily (just entered the text and logically nothing came up). But that was very sloppy, I should have searched for “Charles Babbage” and skimmed the quotes that came up.
The procedure is to delete the post and then send User:Kevin 10,000 USD, asking that it be credited toward User:Kevin’s contract with User:Clippy.
Edit: The procedure is NOT to delete the post and then send User:Kevin 10,000 USD, asking that it be credited toward User:Kevin’s contract with User:Clippy.
Funnier the first time.
You should try asking people to send smaller amounts of money at once, it’s slightly more likely to work.
-- Marc Stiegler, David’s Sling
There seem to be separate failure conditions here though. You could fail because you’re too emotionally invested in your view, or you could fail because you can spot the flaws in all the arguments for the opposing view. If your original view was actually right, then you’re not at fault.
Since this can be hard to distinguish from motivated cognition, I think the exercise is questionably useful.
I don’t think the point of the exercise is to successfully defend the opposing point of view but to make a good-faith attempt to come up with an argument for it without getting your original emotions involved. If you can conjure up a coherent argument for the opposing side (allowing for a slightly different set of priors), that’s some evidence that you’re looking at consequences rather than being strung along by motivated cognition. If you can’t—and this is pretty common—that’s good evidence that the opposing view has been reduced to a caricature in your mind.
It’s a litmus test for color politics, in other words. Not a perfect one, but it doesn’t have to be.
I keep seeing insightful bits from this book (for instance, here and somewhere else that I forget). Am I correct when I say it seems worth reading as rationalist fiction?
It’s very nearly one of the only pieces of rationalist fiction out there.
en.wikipedia.org/wiki/Marc_Stiegler
I’m quite fond of “The Gentle Seduction”, but I eventually noticed how he simplified his problem—he wrote about a relatively isolated person.
I haven’t read David’s Sling; but I read his Earthweb and found it to be not-particularly-deep futurism (primarily presenting the idea of prediction markets).
--Niccolò Machiavelli, The Prince
-- The Colour of Magic, Terry Pratchett
So that’s where Woody Allen got it from.
I haven’t been able to find the original source of the Woody Allen quote, but it seems “The Colour of Magic” was published in 1983, and Google Books finds some copies of the Woody Allen quote predating that.
Ahh, nevermind then. (I only looked it up on Wikiquote, which referenced a bio-photo-book from 1993).
...I thought you were being ironic. o_o
-- Robert A Heinlein, Lost Legacy
-- Frodo Baggins, conveying one of the many wise sayings that Hobbits chuck around daily. The elf he was talking with thought it was hilarious, but refused to simply agree or disagree with it.
I prefer its negation: “Go to the elves for counsel, for they will say both no and yes.”
— Eric S. Raymond
(This applies no less strongly to one’s own brain.)
Neil deGrasse Tyson
Transcribed from http://www.youtube.com/watch?v=CAD25s53wmE
Disagree, at least in some instances. Many of these are just results of optimizing for normal environment.
There is a theorem in machine learning (blanking on the name) that says any “learner” will have to be biased in some sense.
The No Free Lunch Theorem.
Also, just because we can’t expect to be free of bias doesn’t mean that the bias is “proper functioning” of the hardware. An expected failure, perhaps, but still a failure.
I make a finer distinction of “failure” as something that’s inefficient for its clear purpose, e.g. the laryngeal nerve of the giraffe. Evolution will do that on occasion. The sensory interpretations that optical illusions are based on are often optimal for the environment, and are a compliment to the power of evolution if anything. Viewing something that is optimal as a failure seems like wishful thinking (though I suspect this is more of a misunderstanding of neurobiology).
Actually, that seems kind of fair. Something is a “failure to X” if it doesn’t achieve X; something is a “failure” if it doesn’t achieve some implicit goal. You can rhetorically relabel something a “failure” by changing the context.
Vision works well in our usual habitat, so we should expect it to break down in some corner cases that we can construct: agreed. For me to argue further would be to argue the meaning of “failure” in this context, when I’m pretty sure I actually agree with you on all of the substance of our posts.
I really do not want to argue about semantics either, but our agreed interpretation makes Neil’s statement equivalent to “our visual system is not optimal for non-ancestral environments”, which is highly uninteresting. I think Dawkins’s laryngeal-nerve example is much more interesting in this sense, since it points out that body designs do not come from a sane Creator, at least in some instances (which is enough for his point).
Since we do not live in the ancestral environment now, I think the quotation could be just underlining how we should viscerally know our brain is going to output sub-optimal crud given certain inputs. Upvoted original.
I don’t understand. Does that mean they have priors?
I think it’s another way of putting it, though IIRC the biases are not always explicit prior probabilities; they could just be a way the algorithm is constructed. Choosing the specific construct is acting on a prior.
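A hedged toy illustration of that last point (the learners and data below are my own invention, not from the thread): neither of these two “learners” carries an explicit prior, yet the mere choice of construct biases what each one predicts off the training set.

```python
# Two learners fit the same data: one assumes linearity, one memorizes.
# Neither has an explicit prior, but the choice of construct is itself
# an inductive bias, visible in their off-training-set predictions.

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # generated by y = 2x

def linear_learner(xs, ys):
    # Least-squares line through the origin; bias: "the world is linear".
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    return lambda x: slope * x

def memorizer(xs, ys):
    # Lookup table; bias: "only points already seen matter".
    table = dict(zip(xs, ys))
    return lambda x: table.get(x, 0.0)

f = linear_learner(xs, ys)
g = memorizer(xs, ys)
print(f(4.0), g(4.0))  # 8.0 vs 0.0: same data, different inductive biases
```

The No Free Lunch result mentioned above is essentially the observation that, averaged over all possible target functions, no such choice of bias outperforms any other; a learner only wins on the targets its bias happens to match.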
How do you define “illusion”? I think an illusion is a type of brain failure. An optical illusion is even more specific. Therefore, I think the term is wholly appropriate — and “brain failure”, while not at all inappropriate, is just unnecessarily vague.
K. J. Bishop, “The Etched City”
(a sentiment I think applies to all super-stimuli)
-- Seth Lloyd
I would like to get rid of one or two of them. It’s painful to see how often really inevitable things get confused with those that could, at least in theory, be dealt with.
I read this as an argument against having taxes.
The difference being that with taxes nothing is actually ‘lost’ it is just relocated, where it can be accessed again. Whereas with energy you can only move from high to low concentrations, so there can be genuine loss of usable energy.
I initially liked it as well; I suppose it’s a good example of not believing something merely because it corroborates an already existing belief (most people dislike taxes).
Well, taxes can cause a genuine loss of wealth (as distinct from money) depending on how they’re spent and how they’re collected, however taxes can also cause a genuine gain in wealth, again depending on how they’re collected and spent.
“Paper clips are gregarious by nature, and solitary ones tend to look very, very depressed.”—dwardu
-- Eliezer Yudkowsky, putting words in my other copy’s mouth
Meta-comment: I think MoR quotes are legitimate for rationality quote pages, since IIRC we previously established that Eliezer quotes from Hacker News were kosher. And if random Eliezer comments not on OB/LW are kosher, then surely quotes from his fiction are kosher.
I disagree. MoR fits the same criteria (“shooting fish in a barrel”) as OB/LW.
I’m happy to see gems from HPMOR done up in needlepoint and hung on the metaphorical wall of the parlor. But it still smells like trayf! Consider:
Quirrell avoids the ban on quoting himself by attributing the quotation to Eliezer. And he then avoids the ban on quoting Eliezer by pointing out that Eliezer was quoting Quirrell. This is clever and slippery and rabbinical and all that, but it jumps the shark when you realize that Quirrell is not just Eliezer’s HPMOR character, he is also probably his LW sock-puppet!
Oh, come on. It’s obviously been the other way around all along.
You simultaneously gave me the lolz and the shivers. Karma for you!
That would violate the One Level Higher Than You principle.
What makes you certain we are not living in a simulation whose computational substrate lives in the HPMOR universe?
I didn’t know there was another antonym to kosher besides nonkosher. Interesting.
Anyway, I don’t think Quirrel is Eliezer; if he is, then most of the usual reasons against self-quoting wouldn’t apply anyway. (It’s not like Eliezer needs more karma or higher profile here.)
He is a clever guy. Be careful!
--Teiresias to the unrelenting Oedipus, Oedipus the King 316-9, Sophocles
(Assigning a specific location to ‘here’ left as an exercise for the reader...)
-- Terry Pratchett, “Sourcery”
-- Frank Zappa, quoted from The Real Frank Zappa Book
Zappa was a fantastic example of someone who kept their head firmly screwed on while simultaneously exercising his inner rampaging weirdness. Everyone should read the book.
On simpler solutions:
Neal Stephenson, Cryptonomicon
The same reign of terror that occurred under Robespierre and Hitler occurred back then in the fifties, as it occurs now. You must realize that there is very little actual courage in this world. It’s pretty easy to bend people around. It doesn’t take much to shut people up, it really doesn’t. In the fifties all I had to do was call a guy up on the telephone and say, “Well, I think your wife would like to know about your mistress.”
An upvote to the first person to identify the author of that quote.
Ronald DeWolf. The son of L. Ron Hubbard.
So, wait, was it that:
a) Most men worth influencing in the 50s had a mistress his wife didn’t know about?
or that:
b) Most men worth influencing in the 50s understood that the guy calling him could persuade the wife that there was a mistress irrespective of whether there was really a mistress?
Or perhaps that they believed they had a mistress, whether they did or didn’t?
I don’t know which it was.
But I’d say that you’re seeing the trees, not the forest.
The major point of the quote was that there’s a lack of courage in the world, the rest of the quote is just examples.
The courage to allow one’s infidelity to be exposed (let alone falsely exposed) isn’t what most people have in mind when they think of courage.
b) fits in better with the reign of terror metaphor.
Dude, SRSLY, 30 seconds with google.
http://en.wikiquote.org/wiki/Ronald_DeWolf
I like the quote, but I downvoted. An upvote to the first person to identify why.
Because of the “an upvote to whoever can identify the author”?
Godwin’s Law violation?
It’s wrong.
The comma splice? Please tell us...
Oh, I assumed the answer was inherent in the question. :) As Sniffnoy said, because of the “an upvote to whoever can identify the author”
Why would that cause a downvote?
Because Robin can identify the author and a downvote is needed to balance that.
It’s not about rationality?
The fact that it wasn’t formatted as a
?
It’s not about rationality? (But I prefer Sniffnoy’s reason.)
I think some time we should have an irrationality quotes thread, kind of in the “how not to” spirit.
I think such a thread should include an expectation of deconstruction—“this is wrong and this is why”.
It’s been done.
I don’t think that thread serves the purpose RobinZ seems to have in mind. That one seems to be oriented at laughing at the theists, thus promoting ridicule of them and self-esteem for ourselves. It might instead be nice to have a thread of anti-rationality quotes devoted to advancing our rationality, rather than merely celebrating it.
One idea for doing this is to also use “anti- ground rules”. Require that the anti-rationality quotes must come from LessWrong. You can quote only yourself or Eliezer. And, as RobinZ suggests, explain why the quotation exemplifies an error of rationality (one you have since recognized and corrected).
Do we make enough educational mistakes so that we can populate a thread with them? I suspect we do.
We have one: http://lesswrong.com/lw/b0/antirationality_quotes/
Edit: Oops, Alicorn beat me by 57 seconds.
--Mike Caro, Caro’s Book of Tells
“What happens when you combine organized religion and organized sports? I don’t know, but I suspect not much would change for either institution.”
Scenes from a Multiverse
~ garcia1000, Witchhunt game
But unlike sex you shouldn’t change positions just for fun and novelty.
You should experiment with multiple positions, then use the best one.
Depends on how useful you think the experience of being a devil’s advocate is.
It would be more accurate to say that you should critically look over the evidence again if your position feels wrong. A belief can be justified by logic and still be at odds with intuition, making it still feel wrong. Example: There are compelling arguments that simulation hypothesis is at least somewhat likely to be correct. However, my intuition tells me that the simulation hypothesis is just plain false. I know that this is a subject that my intuition is poorly suited for, so I follow the logic and estimate a non-negligible chance of being in a simulation, despite it feeling wrong.
“A witty saying proves nothing”—Voltaire
That’s been posted (a few times) before. Though it may be worth repeating.
-- Mary Shelley, The Last Man
-- Aristotle
“Sherlock Holmes once said that once you have eliminated the impossible, whatever remains, however improbable, must be the answer. I, however, do not like to eliminate the impossible. The impossible often has a kind of integrity to it that the merely improbable lacks.” —Douglas Adams’s Dirk Gently, Holistic Detective
In Dirk Gently’s universe, a number of everyday events involve hypnotism, time travel, aliens, or some combination thereof. Dirk gets to the right answer by considering those possibilities, but we probably won’t.
I love this quote, but I’m pretty sure I wouldn’t describe it as “rational”.
I think we could modify our sense of it to mean that if you are down to having to accept a 0.01% probability, because you’ve excluded everything else, then it’s probably better to go back over your logic and see if there’s any place you’ve improperly limited your hypothesis space.
Several paradigm-changing theories introduced concepts that would have previously been thought impossible (like special relativity, or many-worlds interpretation)
I don’t understand this one.
The way I read it was that he’s using “impossibilities” to mean things that you don’t think are possible, don’t understand, or find inconceivable rather than things which can’t actually happen.
A probable impossibility is something that will probably happen that a given person doesn’t think is possible. An improbable possibility is something that that same person understands, but (whether you know it or not) isn’t probable.
I read ‘probable impossibility’ as ‘something that is probably impossible’. It’s a poor translation if it means something else; but your version at least makes some kind of sense.
Seibel: The way you contributed technically to the PTRAN project, it sounds like you had the big architectural picture of how the whole thing was going to work and could point out the bits that it wasn’t clear how they were going to work.
Allen: Right.
Seibel: Do you think that ability was something that you had early on, or did that develop over time?
Allen: I think it came partially out of growing up on a farm. If one looks at a lot of the interesting engineering things that happened in our field—in this era or a little earlier—an awful lot of them come from farm kids. I stumbled on this from some of the people that I worked with in the National Academy of Engineering—a whole bunch of these older men came from Midwestern farms. And they got very involved with designing rockets and other very engineering and systemy and hands-on kinds of things. I think that being involved with farms and nature, I had a great interest in, how does one fix things and how do things work?
Seibel: And a farm is a big system of inputs and outputs.
Allen: Right. And since it’s very close to nature, it has its own cycles, its own system that you can do nothing about. So one finds a place in it, and it’s a very comfortable one.
-- Turing Award-winning computer scientist Fran Allen interviewed in Peter Seibel’s Coders At Work, p507
(This is a great book, by the way. I strongly recommend it to anyone whose work involves how computers do what they do.)
Robert Heinlein
At one point, he quite audaciously predicted that the Soviet Union was headed for collapse. If he’d lived longer, he would have seen that his prediction should have been even crazier: not only did the Soviet Union fall apart, but it did so without starting a major war, or nuking any cities.
And don’t even get me started on his books where we’ve got interstellar travel, guided by computers that are the size of a room but barely faster than someone with a slide rule.
“Please don’t hold anything back, and give me the facts” – Wen Jiabao, Chinese Premier (when meeting disgruntled people at the central complaints offices).
-- William Tuning, Fuzzy Bones
Better to teach the child the difference between programming a computer, proving a theorem, and writing an essay.
If you never misspell a word, you’re spending too much time proofreading.
That’s true if the only benefit of proofreading is finding misspellings. But you should be proofreading to find errors of expression in general, and the optimal amount of proofreading for that may imply that you find and fix all misspellings.
That may be good advice for most people. (Or maybe not.) But me, I’m a chronic floccinaucinihilipilificationist. (It’s one of my more endearing traits.) And no, I don’t use a spellchecker. I don’ need no steenkeeng spellchecker.
You routinely estimate things as valueless?
Oops. I said I knew how to spell it, not what it means. (‘If you never misuse a word, you’re spending too much time second-guessing yourself/reading the dictionary’?) For some reason I thought ‘floccinaucinihilipilification’ meant ‘nitpicking’. Probably I inferred its meaning incorrectly from the context in which it appeared; that was my standard failure mode, during the era when I assume I picked up that word. (In fairness to my child-self, it was before widespread internet access—but not dictionaries.)
Also, I think I was suffering from some kind of localised cognitive impairment when I wrote that comment (sleep deprivation, perhaps). It strikes me as pretty boorish now, as well as incorrect.
You may have been misled by a Robert Heinlein novel where the soi-disant genius narrators agree that that’s what the word means. (‘Number of the Beast,’ I think.)
I believe you are correct.
Motivated cognition. It’s such a good word to show off with. (At least, it would be if it meant what I thought it meant.) In fact, I’m sure I’ve looked it up before. Maybe this time I can remember permanently.
Do you mean you’re sesquipedalian?
No, but I am.
And if you misspell too much, or worse, use the wrong word (which is increasingly common with spell-checking), you waste your readers’ time trying to figure out what you are trying to say.
Steve Omohundro, “The Nature of Self-Improving Artificial Intelligence” 2007
It’s an interesting topic, but what exactly makes this a rationality quote?
Maybe it doesn’t belong. But I was thinking in terms of something like rationality being an attractor. Minds, whatever their origin, if capable of self-improving, will tend toward a pattern which human economists had already identified as being at the heart of human rationality.
The rational direction to guide your own improvement is toward greater rationality. Even if you are not all that rational to begin with. That means that the characteristics we assign to modeled “rational agents” may be universal—they are not just something invented by some lackey of a capitalist patron.
Unless Omohundro’s analysis is wrong and he just wrote it because he is a lackey, that is.
Some human irrationality seems adaptive. Humans apparently deceive themselves so they can manipulate others without actually lying—so as to avoid detection.
That does not directly contradict Omohundro. The quotation merely suggests that almost-rational humans will seek to self-modify in the direction of becoming less self-deceptive and better at lying. A look at the self-help literature tends to confirm Omohundro’s prediction.
That leaves the question, though, as to why Natural Selection didn’t take care of this ‘improvement’ itself. My guess is that it is a life-history, levels-of-selection, and kin-selection issue. Self-help books are purchased by adults. NS tries to optimize the whole life history. It is good for neither children nor their families that they become accomplished liars. Maybe self-deception in children has some advantages as well. Just speculating.
Are the liars going to win, though? Nature subsidises both transparency and lie detectors, for reasons to do with promoting cooperation. In the future it may get even harder to convince others of things you don’t personally believe—as is dramatically portrayed in The Truth Machine.
“Meanness and stupidity are so closely related that anything you do to decrease one will probably also decrease the other.”
--Paul Graham, here.
Select the most dominant prisoner in every (male) prison in the country and use them to artificially inseminate 5,000 women each (use IVF with the female top dogs if you wish too).
Punish all observed incidents of stupidity with physical beating.
I voted the comment up—because there is a relationship there. There are just other correlations and causal influences that are somewhat stronger in some situations.
The fact that you had to choose so ridiculous an example suggests that Paul Graham is basically correct. (I think the correct reading of “anything you do to decrease one will probably also decrease the other” is “if you pick something that decreases one, it will probably decrease the other” rather than “literally every single thing that might decrease one will, with high probability given that you do that particular thing, decrease the other”.)
No it doesn’t. It suggests that when selecting examples for the purpose of countering generalizations wedrifid chooses examples that are clear and unambiguous to anyone who correctly parses the claim, rather than choosing the most likely counterexample. This is particularly the case when rejecting the extent of a general claim while accepting the gist—as I went out of the way to make explicit.
I also reject the idea that the second example I gave is at all unrealistic:
Corporal punishment for stupidity is an actual (hopefully mostly historical) thing.
I can’t help this quote:
--N-Space, Larry Niven
For the record, I took you to be proposing a single counterexample with two components, rather than two separate counterexamples; I’m sorry for the misunderstanding.
Now that I know the second bullet point was meant to be a separate counterexample, I have a different objection to it: I am unconvinced that any implementable version of it would both reduce stupidity and increase meanness. (The most likely outcome, I think, would be to increase meanness while replacing more blatant varieties of stupidity with more widely spread lower-level stupidity.)
EDITED to add: Oh, one other thing. If it happens that (1) it was you who downvoted me and (2) you did so because you thought I downvoted your previous comment, then you might want to know that I didn’t.
--Lothrop Stoddard, The Revolt Against Civilization
Approximate quote: [You should] go in with a thesis, not a conclusion.
From a BBC program about the media and crime in Detroit. The context was the extent to which Detroit is over-reported as a high-crime city, and someone commented that the BBC had sent someone over for a reason, but they were actually looking at the situation instead of assuming they knew what they were going to see.
“All this knowledge is giving me a raging brainer!”
Professor Farnsworth, Futurama
— Dwight Schrute (“The Office” Season 3, Episode 17 “Business School,” written by Brent Forrester)
Sounds like reversed stupidity.
It would be better to explain why that is a bad thing when you post statements such as that.
The ‘least convenient possible world’ might be relevant too. I translated the verbal self-interrogation as something that would elicit responses along the lines of “would doing this thing distinguish one as an idiot?” In practice the question probably would be useful. In fact, in practice only an idiot would really reverse the stupidity of an idiot when asking that question of themselves. Breathe, eat, etc.
Or a different version of rubber-ducking.
I won’t ask what that means, because I could presumably easily find out by searching; but I won’t search, because I don’t care enough (and I’m already here as a distraction from what I meant to be doing).
/not sure if I should provide a useful link or not.
How is this a rationality quote?
Thanks for asking. I linked it on purpose to wikipedia from where I quote:
Tempus fugit is a succinct admonition to focus on what is really important as opposed to what is merely salient. Focus on the important but not urgent things (Quadrant 2 in the Covey matrix).
http://en.wikipedia.org/wiki/File:MerrillCoveyMatrix.png
I thought it was a good quote, although I’m not sure LWers need to know it. (On the other hand, one might think the same thing of curing aging or helping cryonics, but Eliezer’s essay on his dead brother still got a substantial reaction.)
Do you like this one better?
--Cato the Elder; Epistles (94) as quoted by Seneca
David Hume—“Enquiry concerning the Principles of Morals”
-- Betty Edwards, “Drawing on the Right Side of the Brain” (actually an awesome book, this quote isn’t very representative)
A note, left at Kennedy’s grave.
-- Paul Krugman
Speaking of peculiarly boring sub-genres of science fiction, I am told that Paul Krugman was once the best and most promising of the Jedi Masters of Economics. But somehow, the forces of the Sith seduced him to the dark side, and he has since become Darth Pundit the Mindkillingly Political.
In any case, if economic statistics are bad, let them be made better. For that matter, if they’re very, very good, let them be made better still, and even then nobody should treat them as the absolute truth.
He has certainly become political. It might be worth asking: Has he become any less accurate in the process? Another possibility would be that the positions taken up by the major political parties in the US at present are such that it’s impossible to tell the truth about some subjects without being (perceived as) highly political.
(That’s certainly happened often before. For extreme examples, consider cases where an important political movement is based on badly broken racial theories or on a specific religion.)
According to this study, he does okay, but I’m not impressed with their methodology. For some reason I can’t copy/paste the relevant section of the PDF, but they discuss him explicitly on page 15. They looked at “a random sample” of his columns and television appearances (whatever that means) and found 17 predictions, of which 14 were right, 1 was wrong, and 1 was hedged.
Only 17 predictions? I thought we did science.
“He is, after all, a Nobel-prize winning economist.”
I agree that that study is unimpressive, in a number of ways. (And it’s comparing his accuracy with that of other pundits, rather than with that of past-Krugman.)
-- Louis L’Amour, The Walking Drum
-- John Coyne & Tom Hebert, This Way Out
Emancipate yourself from mental slavery, none but yourself can free your mind.
An upvote to the first person to correctly identify the first person to say that (the quote is often misattributed, you’ll get a downvote if you identify the wrong author).
Marcus Garvey. I think it works better in this longer form.
Bob Marley, although before I checked Google search, Books, and Scholar, I had expected to find it was by Epictetus. Oh well.
EDIT: In my defense, Garvey’s original is not the same as the Bob Marley version which Robin presented. I think it’s a little disingenuous to consider the Bob Marley version ‘misattributed’.
Reminded me of...
Roy Harper
C.S. Lewis, “Religion: Reality or Substitute?”, in “Christian Reflections”.
Anyone want to try and tease a rationality message out of this?
Lewis is saying that if you’ve disproved faith, your reason is flawed. After all, faith must be right!
This is ‘extraordinary claims require extraordinary evidence’, but in unfamiliar garb. We’re not used to seeing it used the other way. (If a study reports ESP, then we ought to suspect problems in how it was conducted or analyzed rather than accept its conclusion—to use a recent example.)
I’m sure there are a number of relevant LW posts on the topic like “Einstein’s Arrogance”.
The one that immediately comes to mind for me is making your explicit reasoning trustworthy. Lewis was exhorting Christians not to trust their explicit reasoning.
My take: “Because our cognition is unreliable, we can easily lose sight of truths we started out knowing as we walk along tempting-but-wrong garden paths, especially when strong emotions are involved.”
In other contexts this is sometimes known as “being so sharp you cut yourself.”
That’s a good moral, but to me Lewis’s quote seems to be more simply interpreted as an exhortation against successful doubt. Our thinking is certainly unreliable, but compensating for that with a fixed intention to keep believing whatever we’re currently obsessed with seems like exactly the wrong thing to do; it essentially enshrines motivated cognition as a virtue.
Having a “settled intention of continuing to believe” X shares with having a “high prior probability for” X the property that quite a lot of counterevidence can pile up before I actually start considering X unlikely.
This is not a bad thing, in and of itself.
Of course, if X happens to be false, it’s an unfortunate condition to find myself in. But if X is true, it’s a fortunate one. That just shows that it’s better to believe true things than false ones, no matter how high or low your priors or settled or indecisive your intentions.
Of course, if I start refusing to update on counterevidence at all, that’s a problem. And I agree, it’s easy to read Lewis as endorsing refusing to update on counterevidence, if only by pattern-matching to religious arguments in general.
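The analogy above between a “settled intention of continuing to believe” and a high prior can be made concrete. Here is a minimal sketch (my own illustration, not from the thread, with an assumed prior of 0.99 and assumed 2:1 counterevidence per observation) of how many independent pieces of counterevidence a high prior absorbs before the belief drops below even odds:

```python
def update(prior, likelihood_ratio):
    """One Bayesian update in odds form.

    likelihood_ratio = P(evidence | X) / P(evidence | not-X).
    """
    odds = prior / (1 - prior)
    odds *= likelihood_ratio
    return odds / (1 + odds)

# Start with a high prior for X, then repeatedly observe evidence
# that is twice as likely if X is false (likelihood ratio 0.5).
p = 0.99
n = 0
while p >= 0.5:
    p = update(p, 0.5)
    n += 1

print(n, round(p, 3))  # prints: 7 0.436
```

Seven successive 2:1 blows are needed before a 0.99 prior concedes even odds, which is the benign version of the property described above; the pathological version is refusing to run the update at all.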
Point taken, but Lewis wasn’t operating within a Bayesian framework. I haven’t read a lot of his apologetics, but what I remember seemed to be working through the lens of informal philosophy, where a concept is accepted or rejected as a unit based on whether or not you can think of sufficiently clever responses to all the challenges you’re aware of.
From this perspective, a “settled intention of continuing to believe” implies putting a lot more mental effort into finding clever defenses of your beliefs, and Lewis’s professed acceptance of reason implies nothing more than admitting challenges in principle. Since it’s possible to rationalize pretty much anything, this strikes me as functionally equivalent to refusing to update.
And, of course, enshrining the state of holding high priors as virtuous in itself carries its own problems.
(nods) Mostly agreed.
You get it.
Within the context of Lewis’ Christianity, it could be the valid form of the argument from authority: don’t believe appealing falsehoods with a little evidence over unappealing truths with a lot of evidence you don’t know. To give an example: you tell kids to believe evolution or special relativity without explaining the evidence in detail, but it would still be right for them to have “faith” instead of changing to believe creationism the first time they read a (bogus, but they wouldn’t be able to tell) creationist argument on the internet.
Except that Lewis’ Christianity was not based on any authority deemed infallible. He reasoned himself into it, while recognising the fallibility of reason. His writings set out his arguments; they do not tout any source of authority whose reliability he has not already argued.
But how can one rightly reason, while recognising one’s fallibility? That is an issue for rationalists as well.
Let me fix the original quote for you:
When a long argument produces a conclusion that strikes one as absurd, one sometimes just has to say, “This is bullshit. I don’t know what’s wrong with the argument, but I’m not going along with it.”
I think the flaw in the syllogism is “the human reason, unassisted, has a low chance of retaining its hold on truths.” We certainly forget a great deal of procedural and propositional knowledge if we don’t use it on a regular basis, but that’s different from letting go of a belief because you are passionate about how inconvenient the belief is. Once a belief takes root—i.e., after you announce it to your friends and take some actions based on it—it is usually very difficult to let go of that belief.
David Hume
dupe (which includes citation and larger context.)
“Everything works by magick; science represents a small domain of magick where coincidences have a relatively high probability of occurrence.”
Does this merely call attention to the high probability of the existence of unknown unknowns, or does it promote map-territory confusion?
I totally knew who said that. Does that make me a bad rationalist?
If this isn’t what you see
It doesn’t make you blind
If this doesn’t make you feel
It doesn’t mean you’ve died
Where the river’s high
Where the river’s high
If you don’t want to be seen
You don’t have to hide
If you don’t want to believe
You don’t have to try
To feel alive
If this doesn’t make you free
It doesn’t mean you’re tied
If this doesn’t take you down
It doesn’t mean you’re high
If this doesn’t make you smile
You don’t have to cry
If this isn’t making sense
It doesn’t make it lies
Alive in the superunknown
First it steals your mind
And then it steals your soul
Get yourself afraid
Get yourself alone
Get yourself contained
Get yourself control
Soundgarden, “Superunknown”
That’s terrible. There were better song quotations in the crossover Ranma 1⁄2 fanfic Eliezer recommended, Hybrid Theory; e.g. chapter 10:
And that’s saying something.
Downvoted because this kind of quote is the kind of snide simplistic atheism that is best left on Reddit’s atheism subreddit or similar places. It has no value here; it’s not even good Dark Arts.
11 downvotes and 22 upvotes for its disapproval. On a rationality site! Good Lord!
If at some point you choose to actually express any of the thoughts that motivated you to post this comment, rather than simply emote about them, you might manage to communicate them more successfully.
Probably didn’t help that you misspelled (or failed to correct/sic the quote’s misspelling) “imaginary”.
http://mw1.meriam-webster.com/dictionary/imaginary
...And? Your quote says “imaginery”.
Yes, you are correct about spelling. I see now, I’ve copied it with an “e”. But was it the spelling, or just a rude quotation? I wonder.
The quotation alone would probably have gotten downvoted, but I expect that the spelling may have exacerbated it.
An exact copy from here:
http://answers.yahoo.com/question/index?qid=20080715092325AAV45NQ
Who am I to change it in any way?
Yahoo Answers is not a sacred text, nor is it the first appearance of that quote. You can and should spell correctly in your posts. Not that it matters much this time; I would have downvoted it anyway.
?
That’s not the quote in the parent comment. That one was upvoted.
It’s farther down on the linked page.
It’s the fifth one down the list.
Look around on that site!
It’s important to keep in mind that the karma system can’t distinguish between “Eleven people separately think this was a bad comment on net” and “The community thinks this comment is so utterly and completely awful that it deserves a score of minus-eleven.”
For myself, I agree with Gwern about the snideness and simplicity, but I wouldn’t say the quote is wrong, either.
Well, I would certainly say it’s misleading, in that it suggests that people who start religious wars are substantially motivated by abstract comparisons of the quality of their god vs. somebody else’s god, which I think is simply false about the world.
But aphorisms can be both misleading and valuable, so that isn’t in and of itself a reason to not want it on the site.
Neither is the implication that anything that criticizes religion necessarily has to do with rationality, I suppose, though I personally find that more of a problem.
The trouble with this quotation (besides its typo and lack of attribution) is that it says nothing but “religions are false”. This is a trivial point, and this quotation does not support it. An eloquent support for a truth is worth quoting, an instructive explanation of the implications of a truth is worth quoting, but this is neither.