Rationality Quotes October 2012
Here’s the new thread for posting quotes, with the usual rules:
Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself
Do not quote comments/posts on LW/OB
No more than 5 quotes per person per monthly thread, please.
--Teller (source)
My experience has been that when people try to understand what went into a magic trick, they usually come up with explanations more complex than the true mechanism. Oftentimes a trick can be done either through an obvious but laborious method, or through an easy method, and people don’t realize that the latter exists. (For instance, people posit elaborate mirror setups, or “moving the hand quicker than the eye”, or armies of confederates, when in fact simple misdirection, forcing, palming, etc. suffice.)
--Michael Lewis’ profile of Barack Obama
Possibly also explaining this trend in the world of academia.
I’m assuming many are already aware of this, but he’s talking about decision fatigue here.
Hastie & Dawes, Rational Choice in an Uncertain World, pp. 67-8.
Related:
Dawes, in JUU:HB p. 392.
Jem and Tessa, Clockwork Angel by Cassandra Clare
--Bertrand Russell (Google Books attributes this to In Praise of Idleness and Other Essays, pg 133)
Upvoted for entertainment value, but could someone enlighten me on the rationality value?
Belief in belief in the wild?
-Seth Godin
Spoken like a true cat.
I’m going to adopt a different social strategy and not be the obnoxiously nosy guy with no boundaries. Some things I’m curious about really aren’t my business, and actively seeking to uncover information that people try to keep secret is usually a personal (and often legal) violation. The terms ‘industrial espionage’ and ‘stalking’ both spring to mind.
Curiosity didn’t kill the cat. The redneck with the gun killed it for trespassing.
As I was growing up around here, I discovered that there are certain curiosities which are always welcomed in this redneck sort of area. They include such lovely questions as:
“What church do you go to?”
1. “You root for the home sport team, right?” 2. “...Do you follow sport at all?” 3. “Why not?!” (They progress like this the more you answer “No”.)
“Politics? Politics? Politics? Politics? Politics? Politics? POLITICS?”
Any curiosity more complex than this is usually just there to serve these three topics.
But if you answer correctly (cough) these questions three, it’s basically like using the Konami Code or something. Just in case you’re ever in the South.
Now I’m curious about how the progression continues. (In Italy, I am asked what football (soccer) team I support all the time, but when I say “I used to support Juventus, but I haven’t actually followed football in years” they usually leave it at that, and when they do ask me why and I say stuff like “I just don’t enjoy it anymore” they never progress any further.)
Usually I try to give similar answers that halt the line of conversation.
“I’ve never cared for sports, I shouldn’t play for health reasons, it’s not interesting to me, I don’t understand the point, I’ve got other things to do, my dog was killed by a rogue football and I’ve never been the same since that fateful day”, etc.
I’ve never actually answered “No” to the question “Why not?!”, but I feel as though I should try, now...
So, I’ve never really let it progress beyond that point. As a kid, I did that with both religion and politics, by giving noncommittal answers.
-- Paul Graham
Thanks. That article (link) is very relevant to me after a discussion I just had on LW. Good advice, too, as far as I can tell.
--Eminem, “The Real Slim Shady”
Eminem seeks his comparative advantage and avoids self-handicapping.
I wonder how many other Rationality Quotes we can find in rap lyrics...
There is an Ice-T quote here.
Karl Popper, The Open Society and its Enemies
Terry Pratchett, Wintersmith
Greg Egan, Diaspora
This sounds like it ought to mean something, but every time I try to think what it might be I fail. Is it just clever?
But it might cause it. Or it might not. If there’s a correlation then that’s interesting, surely? There’s no smoke without a misunderstanding of causality.
--Eric Hoffer, on Near/Far
Invertible fact alert!
Men In Black
It’s a lot easier to hate Creationists than to hate my landlady.
Mad libs:
It is a lot easier to ____ than to ____.
And sometimes it’s true with s/easier/harder/. (“feel compassion for”.) Hence invertibility.
Well, yes, but the invertibility is conditional.
Compassion is easier with a concrete person for a target. As is… idk. There’s probably some (respect? romantic love? Loyalty?).
Hate is easier with a diffuse target. As is, say, idolizing love, disgust, contempt, superiority, etc.
The invertibility isn’t in that you can flip “harder” to “easier” and then have it make just as much sense. You have to change the emotion too, which signifies that there is a categorization of emotions: useful!
If you insist that this is invertible wisdom, then I must say you are misapplying the heuristic.
Depends. A Klansman may find it easy to hate “niggers” but much harder to hate his black neighbour. A literary critic who values her tolerance may find it difficult to hate an abstract group but can passionately hate her mother-in-law. I am not sure whether the difference stems from there being two different types of hate, or only from different causes of the same sort of hate.
It is easier to ____ than to ____.
It is harder to ____ than to ____.
Isn’t the ____ actually a ____ and the ____ an ____?
I don’t think hate is necessarily easier with a diffuse target. People hold personal grudges well. There’s also the fact that there are sometimes legitimate reasons to hate specific people, but there are basically never legitimate reasons to hate entire groups of people.
Can you summarize your understanding of legitimate reasons for hate?
I’m not asking for examples, but rather for the principles that those examples would exemplify.
Semi-legitimate might be a better descriptor. If someone destroyed me or the ones I loved out of spite and took pleasure in it, I would probably hate them and probably feel that my hate was legitimate. If I went through any traumatic experience like torture or rape, I would probably come out of that with some hate.
I’m an egoist, not a utilitarian (I have strong utilitarian preferences though). That probably has implications for this as well.
It is easier to control how you relate to a theoretical group than a concrete individual. If you believe it is proper to hate Creationists, you can do so with little difficulty. If you change your mind and think it is better to pity them, you can do that.
But if your landlady has actually helped or hurt you, and you know a strong emotional response isn’t actually called for, you’re going to have a very hard time not liking or hating her.
Linus van Pelt
Noah Smith
-Dana Scully, The X-Files, Season 1, Episode 17
--Terry Pratchett, Hogfather
Pretty sure the lies are out there too. I think I prefer Scully.
The quote can be said to mean that reality (“out there”) doesn’t lie—falsehoods are in the map, not in the territory. But truth is what corresponds to reality...
Other people’s maps are part of my territory.
Quibble: “Your” territory?
This point is also relevant to Eliezer’s post on truth as correspondence. A belief can start unentangled with reality, but once people talk about it, the belief itself becomes part of the territory.
Yes, this.
Other people’s expressions of verbal symbols that are not even part of their map are also part of the territory.
-- G. K. Chesterton, “The Appetite of Tyranny”, arguing against pretending to be wise
Two WAITWs don’t make a right.
In this quotation, Chesterton writes against people who compare war to vigilante justice. But his argument is not that this is a poor comparison, but that instead the analogy doesn’t go far enough. So, he compounds the error of his opponents with an error of his own.
There’s also some scenario slippage—in the peacenik argument, the citizen “avenges” himself, but by the time Chesterton gets to him, the dead man was just “standing there within reach of the hatchet.” That alone gives you a hint about what kind of hearing the accused is likely to get in Chesterton’s court.
--Ambrose Bierce, The Devil’s Dictionary
The international equivalent is not a police and justice system, it’s vigilante justice. Doing nothing is not much worse than killing the attacker, being killed by the attacker’s friends who believe the victim had started it, and starting a vendetta. How do you arrest a state? Ask the UN for permission to carpet-bomb it?
Under the assumption that a lesser power is unable to punish injustice done by a greater power, the three possible alternatives at any level of power are “Injustice is dealt with by a greater power”, “Injustice is dealt with by peers”, and “Injustice is dealt with by nobody”. The first system sounds nice, except that infinite regression is impossible, and so eventually you end up at the greatest level of power, choosing between systems two and three. In that case, system two seems preferable, “vigilante” connotations notwithstanding.
--Will Wilkinson
That comment did move Intrade shares by around 10 percentage points, I think, though I’m only going on personal before-and-after comparisons. The good Will may have picked the wrong time to criticize his instincts.
So? That just means that some of the people who trade on intrade also made the mistake Will alludes to.
Nate Silver’s model also moved toward Obama, so it’s probably reflecting something real to some extent.
But the gains have already been cancelled by Romney’s better performance in the first debate. You could spin this in two ways. On one hand, you could argue that the “47%” comment did move the polls, and that ceteris paribus it would have significantly reduced Romney’s chances of winning. On the other hand, you could say that ceteris should not be expected to be paribus; polls are expected to shift back and forth, and regress to the mean (where “the mean” is dictated by the fundamentals—incumbency, state of the economy, etc.), and that if 47% and the debate hadn’t happened, other similar things would have.
Silver’s model already at least attempts to account for fundamentals and reversion to the mean, though. You could argue that the model still puts too much weight on polls over fundamentals, but I don’t see a strong reason to prefer that over the first interpretation of just taking it at face value.
Has there been any analysis of how accurate Silver’s predictions have been in the past?
He basically jumped to fame when he predicted the results of many of the Obama-Clinton primaries far more accurately than the pundits. He then got 49 out of 50 states right in Obama-McCain (missing only Indiana, where Obama won by 1%). He also correctly predicted all the Senate races in 2008 and all but one in 2010. In the House in 2010 the GOP picked up just 8 seats more than his average forecast, which was well within his 95% confidence interval. (All info taken from Wikipedia.) I do not know of any systematic comparison between his accuracy and that of other analysts, but I would be surprised if there was someone better.
Silver makes and changes his predictions throughout the campaign season. Which predictions is this referring to?
The ones on the eve of election day.
Why? It might mean that, but I don’t see how it would necessarily mean that.
-- Mark Schone
A customer-facing skills course I went on, many years ago, used the word “tape” to describe pervasive habits of speech.
--Randall Munroe, A Mole of Moles
It’s called “show, don’t tell”.
Did Munroe add that? It’s incorrect. There are lots of situations in which it’s reasonable to calculate while throwing away an occasional factor of 2.2.
Yeah, but the way he shows that the Avogadro number is approximately one trillion trillion is still hilarious (though it does work).
--Kruschke 2010, Doing Bayesian Data Analysis, pp. 56-57
-Steven Kaas (via)
It always irritates me slightly that Holmes says “whatever remains, however improbable, must be the truth”, when multiple incompatible hypotheses will remain.
My Holmes says, “When you have eliminated the possible, you must expand your conception of what is possible.”
You have an inequality symbol missing at the end of the quote (between i and j). That made it slightly difficult for me to parse it on my first read-through (“Why does it say ‘for all i, j’ when the only index in the expression is ‘i’?”).
I don’t know if you know, but just in case you (or someone else) don’t: There is no inequality symbol on the computer keyboard, so he used a typical programmer’s inequality symbol which is ”!=”. So yes, it is not easily readable (i! is a bad combination...) but totally correct.
A space between variable & operator would help.
The symbol wasn’t there when I wrote my comment. It was edited in afterwards.
The way to handle that is whitespace:
i != 0
(I once was teased for my tendency to put whitespace in computer code around all operators which would be spaced in typeset mathematical formulas.) EDIT: I also use italics for variables, boldface for vectors, etc. when handwriting. Whenever I get a new pen I immediately check whether it’s practical to do boldface with it.
Of course, an infinitesimal prior dominating the posterior pdf might also be a hint that your model needs adjustment.
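To make that concrete, here is a minimal numerical sketch (my own illustration, not taken from Kruschke’s book): on a discrete grid the posterior is proportional to prior times likelihood, so a prior that puts essentially zero mass on some parameter value forces the posterior to essentially zero there, no matter how strongly the data favor that value.

    import numpy as np

    # Coin-bias example: the prior puts almost no mass on theta >= 0.5,
    # while the data (18 heads in 20 flips) strongly favor theta near 0.9.
    theta = np.linspace(0.01, 0.99, 99)
    prior = np.where(theta < 0.5, 1.0, 1e-12)
    prior /= prior.sum()

    heads, flips = 18, 20
    likelihood = theta**heads * (1 - theta)**(flips - heads)

    posterior = prior * likelihood
    posterior /= posterior.sum()

    print(theta[likelihood.argmax()])  # ~0.9: what the data alone point to
    print(theta[posterior.argmax()])   # ~0.49: the near-zero prior still dominates

If the data keep piling up against a region the prior all but rules out, that is exactly the “hint that your model needs adjustment” mentioned above.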
-- The Lord of the Rings: The Two Towers (extended edition)
What Faramir says contains wisdom, but so do Frodo’s words. The enemy is trying to destroy the world with some kind of epic high fantasy apocalypse. Frodo does not terminally value the death (heh) of specific foot soldiers. They may be noble and virtuous and their deaths a tragic waste. But Frodo has something to protect and also has badass allies who return from the (mostly) dead with a wardrobe change. But he doesn’t have enough power to give himself a Batman-like self-handicap of using non-lethal force. Killing those who get in his way (but lamenting the necessity) is the right thing for him to do, and so yes, people would do well not to hinder him.
Agreed. Though of course, I don’t really see Faramir as disagreeing—it was, after all, the Rangers of Ithilien who ambushed the Haradrim and killed the soldier they’re talking about.
I’m a little bit proud that I don’t know who all these people are.
downvoted. You’re saying you don’t know anything about the context provided by a story that is apparently of interest to (at least) several readers here, and you’re proud of not sharing the context. Doesn’t seem like something to crow about without first finding out if the content is frivolous.
No I wasn’t. I could give you an analysis of likely outcomes of a battle between Mirkwood and Lorien archers depending on terrain. It isn’t often that my knowledge of utterly useless details of fantasy stories is outclassed. I may as well enjoy the experience.
I’d ding you for having confessed to being proud of your ignorance, except that what you confessed ignorance of was not, technically speaking, a fact.
I’m never quite sure what to think about being proud of not knowing a fact. On one hand, knowledge itself almost certainly has positive value, even if that value is very small. On the other hand, making the effort to acquire very low-usefulness knowledge generally has negative expected utility, so I can understand writing off a particular body of knowledge as “not worth it.”
Of course, pride is really about signaling, so it makes sense to look at what sort of signal one’s pride is sending. If someone seems particularly knowledgeable about a low-status topic, such as celebrity gossip, I judge them negatively for it. I assume most people do this, though with different lists of which topics are low-status (or am I just projecting?).
Ultimately, I think the questions to consider are:
As an individual, does prideful ignorance of a topic you consider not worthwhile send a signal you want to send, and
As a community, is this the sort of signal we want to encourage?
--Ta-Nehisi Coates, “A Muscular Empathy”
Nate Silver
From the introduction to The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t, section entitled “The Prediction Solution”.
From the stories I expected the world to be sad
And it was.
And I expected it to be wonderful.
It was.
I just didn’t expect it to be so big.
-- xkcd: Click and Drag
Richard Posner, Catastrophe: Risk and Response
Well they’re maybe a little more admirable than some other types of worker. Let’s not go overboard here.
Yet a policymaker for science must either be a scientist (ish), or a Pointy-Haired Boss.
Plenty of dogberts get in on the action as well.
Eric Schwitzgebel
what is left are the data points that align with your narrative about yourself.
Indeed, which together with the quote implies “you” = “your narrative about yourself”. See also Dennett’s “The Self as a Center of Narrative Gravity”.
I resent this attitude. People often assume that I don’t care about the things that I forget. Really, I am tired of a whole host of prejudices against people with poor memories. People assume that I am just like them, and that if I fail to remember something they would have remembered, it was deliberate.
Nevertheless, for any given person, the more he cares about something, the less likely he is to forget about it.
Can we just agree that English doesn’t have a working definition for “self”, and that different definitions are helpful in different contexts? I don’t think there’s anything profound in proposing definitions for words that fuzzy.
Jeff Bezos
-Franz Kafka (quoted in Joy of Clojure)
-Antonio Machado
Translation:
...
--Richard Dawkins on the ontological argument for theism, from The God Delusion, pages 81-82.
That sounds like the sort of thing you’d say if you’d never heard of mathematics.
And that sounds like the sort of thing you might say if you were unaware of countless examples of analytic-synthetic distinction in actually applying math (say, which geometry do you live in right now? And what axioms did you deduce it from, exactly?).
He has a point. It isn’t obvious that Dawkins’ objection doesn’t apply to math. The ontological argument probably has more real-world assumptions used in it than does arithmetic.
Do people who’ve never heard of mathematics often say such things?
Yagyu Munenori, The Life-Giving Sword (translated by William Scott Wilson).
Ibid.
Commentary: I see this in the martial/kinesthetic context as acting without conscious censorship of your action, using the skills you have learned through conscious censorship; and similarly in a LW context of approaching questions in Near Mode, without consciously adjusting for bias.
--Frank Herbert, The Tactful Saboteur
A good heuristic. Barack Obama limits his wardrobe choices, Feynman decides to just always order chocolate ice cream for dessert. Leaves more time and energy for important stuff.
When I was a kid, removing my niggling and nagging choices, distractions, and petty inabilities sounded grand. It kinda backfired at first because I started over-planning the details of my daily activities, like ya do. And anything I actually took an interest in, to quell my confusion and streamline my time, drew people towards me for my arcane skills.
Is there any honor in hiding your abilities (when it’s not your job) so people don’t ask for help with simple stuff?
I was… uh… the family IT guy. My dad still needs the computer’s power button pointed out to him.
Place a notebook next to the computer. When you tell someone how to do something, tell them to write it down, every step, in the notebook. Tell them to write it down so that they will be able to understand it later. Next time they ask you the same question, refer them to the notebook. If this fails to help, consider insisting on some minor cost (such as ‘buy me a chocolate’ - nothing expensive, more an irritant than anything else, merely a cost for the sake of having a cost) for reiterating anything that has been written in the notebook.
It may or may not help, but if it doesn’t help, then at least you’ll get a certain amount of chocolate out of it.
I used to do step-by-step instructions and those XKCD diagrams (all of which were promptly torn down for being “dern confusicating”), but I’ll try all that. Thanks.
Neil deGrasse Tyson, “Atheist or Agnostic?”
-- Harry Potter and the Natural 20
This is a clever little exchange, and I’m generally all about munchkinry as a rationalist’s tool. But as a lawyer, this specific example bothers me because it relies on and reinforces a common misunderstanding about law—the idea that courts interpret legal documents by giving words a strict or literal meaning, rather than their ordinary meaning. The maxim that “all text must be interpreted in context” is so widespread in the law as to be a cliche, but law in fiction rarely acknowledges this concept.
So in the example above, courts would never say “well, you did ‘attend’ this school on one occasion, and the law doesn’t say you have to ‘attend’ more than once, so yeah, you’re off the hook.” They would say “sorry, but the clear meaning of ‘attend school’ in this context is ‘regular attendance,’ because everyone who isn’t specifically trying to munchkin the system understands that these words refer to that concept.” Lawyers and judges actually understand the notion of words not having fixed meanings better than is generally understood.
Yes, but the setting in question is a D&D universe and many things work differently, rules-in-general most certainly included.
Well, a great many D&D players / DMs would argue that Jay_Schweikert’s explanation applies equally well to the rules of role-playing games.
Not the fun ones.
Ah, fair enough. I suppose the title of the work and the idea of an actual course on Munchkinry should have been clues about the setting.
In Italy, IIRC, some kind of rule explicitly specifies the maximum number of days, and the maximum number of consecutive days, a school child can be absent (except for health reasons). Otherwise, would going to school four days a week count as “attending”? Natural language’s fuzziness is a feature in normal usage, but a bug if you have a law and you need to decide how to handle borderline cases.
Well, that was a fun way to spend my Saturday. I haven’t had a fanfic monopolize my time this much since Friendship Is Optimal.
Best part so far:
-- mme_n_b
Émile Zola
A good article on Slate.com by Daniel Engber
This thread needs a mention of this saying: “Correlation correlates with causation because causation causes correlation.” (I don’t know if anyone knows who came up with this.)
xkcd said it better:
I found the article rather confused. He begins by criticising the slogan as over-used, but by the end says that we do need to distinguish correlation from causation and the problem with the slogan is that it’s just a slogan. His history of the idea ends in the 1940s, and he appears completely unaware of the work that has been done on this issue by Judea Pearl and others over the last twenty years—unaware that there is indeed more, much more, than just a slogan. Even the basic idea of performing interventions to detect causality is missing. The same superficiality applies to the other issue he covers, of distinguishing statistical significance from importance.
I’d post a comment at the Slate article to that effect, but the comment button doesn’t seem to do anything.
ETA: Googling /correlation causation/ doesn’t easily bring the modern work to light either. The first hit is the Wikipedia article on the slogan, which actually does have a reference to Pearl, but only in passing. Second is the xkcd about correlation waggling its eyebrows suggestively, third is another superficial article on stats.org, fourth is a link to the Slate article, and fifth is the Slate article itself. Further down is RationalWiki’s take on it, which briefly mentions interventions as the way to detect causality but I think not prominently enough. One has to get to the Wikipedia page on causality to find the meat of the matter.
I have a lot of sympathy for the article, though I agree it’s not very focused. In my experience, “correlation does not imply causation” is mostly used as some sort of magical talisman in discussion, wheeled out by people who don’t really understand it in the hope that it may do something.
I’ve been considering writing a discussion post on similar rhetorical talismans, but I’m not sure how on-topic it would end up being.
I would like to see an article which advised you on how you could:
Recognize when you are using such a talisman, and/or
Induce thought in someone else using such a talisman.
I think I have a pretty good idea of when I’m doing it. It’s a similar sensation to guessing the teacher’s password; that ‘I don’t really understand this, but I’m going to try it anyway to see if it works’ feeling.
This is my view as well.
Also, isn’t your ETA something we can fix? The search term “what does imply causation” (and variations thereof) clearly isn’t subject to a lot of competition. I’m half-tempted to do it myself.
Someone (preferably an expert) could work on the Wiki article, and LessWrong already has a lot of stuff on Pearl-style causal reasoning, but beyond that, it’s a matter of the reception of these ideas in the statistical community, which is up to them, and I don’t know anything about anyway. Do we have any statisticians here (IlyaShpitser?) who can say what the current state of things is? Is modern causal analysis routinely practiced in statistical enquiries? Is it taught to undergraduates in statistics, or do statistics courses go no further on the subject than the randomised controlled trial?
Good questions. The history of causality in statistics is very complicated (partly due to the attitudes of big names like Fisher). There was one point not too long ago when people could not publish causality research in statistics journals as it was considered a “separate magisterium” (!). People who had something interesting to say about causality in statistics journals had to recast it as missing data problems.
All that is changing—somewhat. There were many, many talks on causality at JSM this year, and the trend is set to continue. The set of people who are aware of what the g-formula is, or what ignorability is, for example, is certainly much larger than it was 20 years ago.
As for what “proper causal analysis” is—there is some controversy here, and unsurprisingly the causal inference field splits up into camps (counterfactual vs not, graphs vs not, untestable assumptions vs not, etc.). It’s a bit like this: http://xkcd.com/1095/
(see here)
Upvoted for the quote, I didn’t read the article.
Pierre-Simon, Marquis de Laplace, “A Philosophical Essay on Probabilities”, quoted here. (Hat tip.)
James Stephens
What testable predictions does this make and have they been tested? The typical interactions of various emotions with each other is something we should be able to find out but I’m not sure if the message of the quote is supposed to be anything to do with making a claim about reality.
Purely anecdotal: I was a lot more frightened of spiders before I got a book about them out of the library and read it. They are pretty interesting little creatures. Mind you, I live where there are no actually dangerous ones.
Speculation: To the extent that a lot of fear is fear of the unknown, and curiosity attracts us to the unknown so we can know more of it, I can see how curiosity would help reduce fear.
Testable prediction: someone who reports a fear of something they don’t know much about will report less fear of it after they can be encouraged to express and follow up some amount of curiosity about it. It’s conditional on such curiosity actually existing. Possibly extreme phobias shut off any attempts to discuss or think about the object of fear.
Curiosity probably even helps with well-founded fears of things like bears, hydrofluoric acid or blue-glowing bits of metal. I’m content not to conquer well-founded fears I think.
Another anecdote: I was much less bothered by thunder (admittedly, I was distressed rather than panicked) when I found out that there were people who made a hobby of recording thunder. This caused me to listen to it rather than just be upset by the loud noise.
Hmmm… I’ve got an irrational loathing of piped music. Any suggestions?
Well, my thunder story isn’t going to help—the hook for me is that thunder is a little different every time.
One of my friends found that after she’d had a meditation practice, she suddenly saw piped music (I’m assuming you mean recorded music played in public places) as a benevolent thing—an effort to make people’s lives more pleasant. This might take a habit of meditation rather than just trying to see benevolence.
You could take a look at the basis of your loathing. Maybe there’s an underlying premise that doesn’t make sense.
Actually that sounds neat. I’ll try to cultivate that attitude. I think the problem is that music (with words) disrupts my (continuous) inner train of thought. To the point where if there’s music in a supermarket I’m much less capable of buying things. I’m pretty sure this problem is much worse for me than for most people (otherwise piped music would be illegal).
But if I’m going to get disrupted anyway, it would be nice not to be annoyed as well. And if I could find a way not to be annoyed that might reduce the disruption.
There’s also earplugs, though I agree it’s plausible that getting aggravated adds to the distraction.
This could be partially due to the idea of safety margins. If I meet something that I know very little about, I give it a wide safety margin, and get nervous if I am closer to it than my safety margin allows; once I know more about it, I can narrow down my safety margins considerably.
For example, if I know very little about snakes except that some are poisonous, and I happen to find a house snake in my house, my reaction would be one of caution (or, if I come across it very suddenly, even of fear) - with the aim of extricating myself from the situation unbitten. However, if I know enough about snakes to identify the snake as a member of a nonvenomous species, which apparently make reasonable pets, and that their first reaction to danger is to flee, then I would be a lot less afraid of the snake in question (though a cobra would still trigger a panic-run-away response).
Is bravery a mental state (or something) that conquers fear, or is it bravery to conquer fear (e.g. because you’re curious)?
Richard Feynman
(Partially quoted here, but never given in a Rationality Quotes thread before.)
I recently read Surely You’re Joking, are there other good Feynman autobiographies, or other scientist autobiographies, that I should check out? I don’t want anything that gets too technical, but neither do I want things that are totally descriptive and biographical. I want insights into the overall way that good scientists think, but I also want to avoid specifics and technical concepts insofar as that is possible.
I think there’s a sequel to Surely You’re Joking, but I’m not sure what it’s called.
What Do You Care What Other People Think?
Scott Adams
While I don’t ever feel that way, I understand that many people have such internal verbal or non-verbal conversations with one or more other “selves”. These are also common in fiction, probably in part as a literary device, but also probably as a reflection of the author’s mind. Hmm, maybe it is worth a poll.
Lucky him—his internal persons are friends.
Sure is a little turbulent with up to fourteen voices all expressing their opinions and viewpoints. Don’t know how anyone keeps it under proper control.
Frankenweenie
The new one, or the original?
Definitely in the new one. I haven’t seen the original.
Milan Cirkovic
So… the formal FAI theory will only be developed after an AI fooms? Makes perfect sense to me… We are all doomed!!
-- Robert H. Thouless, Straight and Crooked Thinking
Experience trumps brilliance.
— Thomas Sowell
This belief seems to me very convenient for the brilliant, implying that they got where they are by hard work and properly deserve everything they have. Of course brilliant people also have to put in hard work, but their return on investment is much higher than many other contenders who may have put in even more work for lower total returns. Just-world hypothesis; life is not this fair. And while I do go about preaching the virtue of Hufflepuff, I also go about saying that people should try to Huffle where they have comparative advantage.
My reading of the quote is that empiricism is superior to rationalism (the old philosophical schools, not the sort we discuss here). If I have a proof that my bridge will hold a thousand pounds, and it breaks under a hundred, then the experiment trumps the proof.
By “proof”, do you mean experimental evidence, or armchair rationalization?
A correct mathematical proof based on an experimentally verified model of bridges and seemingly obvious assumptions about your particular bridge.
That doesn’t sound like the sort of thing a rationalist (in the sense Vaniver was using) would care for at all.
In practical terms, though, experience does frequently trump brilliance. This does not mean this is a good thing to have happened, only that it does. Experience makes one more likely to be good at competition.
Genius is one percent inspiration, ninety-nine percent perspiration.
-- Thomas Edison
Lacking sufficient inspiration, I shall reduce my perspiration until recommended ratio is met.
Have you considered LSD, for the inspiration? I mean, if the sources don’t matter, just the ratio...
I haven’t, but the suggestion seems overwhelmingly problematic to me. I don’t have any personal experience with LSD, but I don’t think it would be of great help in my present problem.
However, we could test this. If you have LSD and would like to try your hand at solving it, please let me know. I must say, though, I doubt you would find it to be an interesting problem in the first place.
Sorry, the only research I’ve heard of was about technical matters and implied it only worked for the person who had been thinking deeply about it.
Unfortunately, this will produce only a very small quantity of genius.
Yes, but it’s the best you can do sometimes. And the excess sweat would otherwise be wasted.
Not necessarily. You can always apply your excess perspiration to someone else’s excess inspiration (and then claim 99% of any resultant profits—assuming that you provide all the perspiration, of course).
Anecdotally, I seem to observe more excess inspiration than excess perspiration, so I don’t think that excess inspiration will be hard to find.
Hmm. Corollary:
Lacking sufficient perspiration, I shall reduce my inspiration until recommended ratio is met.
Eh. Doesn’t sound quite as awesome.
No, it doesn’t, but might be almost equally wise. Just as it doesn’t make sense to keep working hard without something worth working hard on, it probably doesn’t make sense to keep trying to come up with brilliant ideas if you’re already so awash in brilliant ideas that you can’t implement them all.
Caveat 1: If you can find better inspiration into which to direct your limited supply of perspiration, and don’t further deplete your capacity for perspiration in the process, it may still be a good idea to go for more inspiration.
Caveat 2: If you have a good way to sell your excess inspiration or buy more perspiration, and you have a strong comparative advantage in inspiration, you may want to do that, but selling inspiration is hard, as is buying good quality perspiration.
I think that Caveat #1 is extremely important here. Considering the amount of perspiration needed to turn inspiration into genius, it’s probably best to spend a bit of extra time searching for the best possible inspiration to which to direct your available supply of perspiration.
A true genius would do nothing and then steal the results of other people’s inspiration and perspiration.
OWAIT
— Kozma Prutkov
(translation mine)
-- Tom Murphy
What would be a better way to teach young children about the nuances of the scientific method? This isn’t meant as a snarky reply. I’m reasonably confident that Tom Murphy is onto something here, and I doubt most elementary school science fairs are optimized for conveying scientific principles with as much nuance as possible.
But it’s not clear to me what sort of process would be much better, and even upon reading the full post, the closest he comes to addressing this point is “don’t interpret failure to prove the hypothesis as failure of the project.” Good advice to be sure, but it doesn’t really go to the “dynamic interplay” that he characterizes as so important. Maybe instruct that experiments should occur in multiple rounds, and that participants will be judged in large part by how they incorporate results from previous rounds into later ones? That would probably be better, although I imagine you’d start brushing up pretty quickly against basic time and energy constraints—how many elementary schools would be willing and able to keep students participating in year-long science projects?
That’s not to say we shouldn’t explore options here, but it might be that, especially for young children, traditional one-off science fairs do a decent enough job of teaching the very basic idea that beliefs are tested by experiment. Maybe that’s not so bad, akin to why Mythbusters is a net positive for science.
Well, doing experiments to test which of several plausible hypotheses is more accurate, rather than those where you can easily guess what’s going to happen beforehand, would be a start. (Testing whether light can travel through the dark? Seriously, WTF?)
That is a large part of the reason why we have problems like the file drawer effect and data dredging.
I don’t think that thinking categorically and mechanically would be feasible or productive.
It’s a reality that we have to think messily in order to solve problems quickly, even if that efficiency also causes biases.
However, we should at least be aware of what the proper way to do it would be.
Yeah. But I think there are different levels of propriety, and that is what the quote is getting at. We should mention that the ideal form of science would look very rigid and modular and be without bias. Then, we should talk about how actual science inevitably involves biases and errors, and that these biases to a limited extent are sometimes compensated by increased efficiency. Then, we should talk about how to minimize biases while maximizing the efficiency of our thought processes.
Level One: Ideal
Level Two: Reality
Level Three: Pragmatic Ideal
A class or book on Level Three would be very useful to me and I’m not aware of any. Anyone have suggestions? Less Wrong seems to cover Level One very well and Level Two is obvious to anyone who is a human being but Level Three is what I would really like to work on.
Fred de Martines, a pork farmer who does direct marketing
“Anything that real people do in the world is by definition interesting. By ‘interesting’, I mean worthy of the kind of investigation that puts curiosity and honesty well before judgment. Judgment may come, but only after you’ve done some work.”—Timothy Burke
People, even regular people, are never just any one person with one set of attributes. It’s not that simple. We’re all at the mercy of the limbic system, clouds of electricity drifting through the brain. Every man is broken into twenty-four-hour fractions, and then again within those twenty-four hours. It’s a daily pantomime, one man yielding control to the next: a backstage crowded with old hacks clamoring for their turn in the spotlight. Every week, every day. The angry man hands the baton over to the sulking man, and in turn to the sex addict, the introvert, the conversationalist. Every man is a mob, a chain gang of idiots.
This is the tragedy of life. Because for a few minutes of every day, every man becomes a genius. Moments of clarity, insight, whatever you want to call them. The clouds part, the planets get in a neat little line, and everything becomes obvious. I should quit smoking, maybe, or here’s how I could make a fast million, or such and such is the key to eternal happiness. That’s the miserable truth. For a few moments, the secrets of the universe are opened to us. Life is a cheap parlor trick.
But then the genius, the savant, has to hand over the controls to the next guy down the pike, most likely the guy who just wants to eat potato chips, and insight and brilliance and salvation are all entrusted to a moron or a hedonist or a narcoleptic.
The only way out of this mess, of course, is to take steps to ensure that you control the idiots that you become. To take your chain gang, hand in hand, and lead them.
From Memento Mori by Jonathan Nolan
—George Polya, How to Solve It
Arthur Schopenhauer
That follows in a fairly straightforward way from the central theme of his main work, The World as Will and Representation, which is that the world is, well, the title spoiled it.
That made me think of this:
The world stands out on either side
No wider than the heart is wide;
Above the world is stretched the sky -
No higher than the soul is high.
Edna St. Vincent Millay
The trick is to combine your waking rational abilities with the infinite possibilities of your dreams. Because, if you can do that, you can do anything. -Waking Life (2001)
It’s always “you can do anything” and never “you can do more than you currently believe you’re capable of” with these motivational quotes.
“more than you currently believe you’re capable of” is any-thing.
No, it is “not less than one thing that is not in the set of things that you believe you are capable of”. “Anything” includes “more than you currently believe you’re capable of” but the reverse isn’t true.
To make it tangible, someone who believes they wouldn’t be able to get a date with a particular prospective mate when they in fact could and also believes they can not fly at faster than the speed of light is capable of doing more than they believe they are capable of but it still isn’t correct to tell them “you can do anything”. Because they in fact cannot fly faster than the speed of light.
Right. More concisely put: If you do so-and-so, it may expand the set of things you can attain, but it won’t remove all limitations.
I’m not convinced about the infinite possibilities of my dreams. Pretty sure large parts of my brain are not functioning as well during REM sleep as they are while I’m awake. For example, I don’t think I can read in my dreams, or write computer programs. So possibly the things I can dream about are only a subset of the things I can think about while awake.
And that’s leaving aside my heuristic judgement about all non-rigorous uses of the word “infinite”.
Daydreaming? I think we should not take “dream” too literally here.
“Infinite” is problematic, indeed. I think there is just a finite number of dreams of finite length.
Take “infinite” as you would take the recursiveness of language: there is a finite set of words or particles from which you can just “create” infinite combinations.
About the number of dreams, do you reckon there is something like a pool of dreams we use one by one until it’s empty?
To get an infinite set of texts with a finite set of characters, you need texts of infinite length. I think it is similar for dreams—the set of possible experiences is finite, and dreams have a finite sequence of experiences.
The pool of possible dreams is so large that we will never hit any limit—and even if we did (which would require experienced lifetimes of 10^whatever years), we would have forgotten earlier dreams long ago.
You get an infinite set of texts with a finite set of characters and texts of finite length merely by letting the lengths be unbounded. Proof: Consider the set of characters {a}, which has but a single character. We are restricted to the following texts: a, aa, aaa, aaaa, aaaaa,… We nevertheless spot an obvious bijection to the positive integers. (Just count the ’a’s) So there are infinitely many texts.
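For what it’s worth, here is a minimal executable rendering of that bijection (my own illustration, nothing more than the argument above in code):

    def text_to_int(text: str) -> int:
        """Map a text over the one-character alphabet {'a'} to a positive integer."""
        assert text and set(text) == {"a"}
        return len(text)  # just count the 'a's

    def int_to_text(n: int) -> str:
        """Inverse map: send the positive integer n to the unique text of length n."""
        return "a" * n

    # Round-tripping a range of values illustrates that the correspondence is one-to-one.
    assert all(text_to_int(int_to_text(n)) == n for n in range(1, 1001))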
Sorry, I was a bit imprecise. “You need texts without size limit” would be correct. The issue is: your memory (and probably lifetime) is finite. Even if you convert the whole observable universe to your extended memory.
But outside an infinitesimally small subset, so tiny that any attempt to express it as a fraction would just give you zero, you couldn’t appreciate any of the texts within a human lifetime, even if you managed to get clever and extend the lifetime to the heat death of the universe.
I do wonder about a culture running out of ideas, high concepts, that sort of thing. Dreams can be long and messy, so the permutation space of the limited-by-finite-lifespan-of-physically-embodied-agents set of dreams is still huge. Good ideas, on the other hand, are often things you can distill to a short sentence in ordinary language. You can describe an interesting idea in ten common words. Let’s say it takes a day on average to evaluate whether any one idea is good or not and that there are ten thousand common words. There are 10000^10 = 1e40 such sentences, a small minority of which will describe coherent ideas.
It seems quite physically possible to have a civilization last several million years. This would give the civilization a total of 1e10 days. That’s 1e30 ideas to think about each day. A good galactic civilization should be able to colonize all of the Milky Way, giving it something in excess of 1e10 stars to build habitats around. An average Dyson sphere built around a star and populated by 1e20 people gets a population density of around 500 people per square kilometer, around the same density as in the Netherlands.
So all you’d need is a Dutch galactic supercivilization spanning some millions of years and really, really obsessed with word permutations to utterly exhaust the ideas expressible in ten common words. Anything interesting they won’t have thought reasonably carefully about will be literally inexpressible in ten words, unless you start expanding the language with new words.
And compared to sets of strings with unbounded length, there is nothing particularly outlandish about those numbers. Science routinely handles far larger orders of magnitude of both time and space.
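A quick back-of-the-envelope check of those figures (my own sketch; the 1 AU Dyson-sphere radius is an assumption I am adding, since the comment does not specify one):

    import math

    sentences = 10_000 ** 10        # ten-common-word sentences: 1e40
    days = 1e10                     # civilization lifetime used above
    stars = 1e10                    # stars colonized across the Milky Way
    people_per_sphere = 1e20        # population of a single Dyson sphere

    print(f"sentences per day: {sentences / days:.0e}")                      # ~1e30
    print(f"person-days available: {days * stars * people_per_sphere:.0e}")  # ~1e40
    # At one day per sentence, 1e40 person-days just about cover the 1e40 sentences.

    # Population density of a Dyson sphere at 1 AU (radius ~1.5e8 km):
    area_km2 = 4 * math.pi * (1.5e8) ** 2
    print(f"people per km^2: {people_per_sphere / area_km2:.0f}")            # roughly 350, Netherlands-ish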
Quite the caveat.
That’s just if we’re operating in a 1950s future paradigm, where you need to do everything with regular humans running around and occasionally colonizing neighboring solar systems when things get crowded.
If we’re allowed to do a bit of virtualization, then things get more interesting. We could get things a bit more compact if we take a few centuries at the start of the project to develop solid brain emulation technology to ease those pesky problems of needing lots of living space, sleep, and eventually devolving into stone-age cannibalism with mystery cults around permuting ten word sentences. Estimating the computation involved in human cognition is tricky, but 1 exaflops is floating around. Say that a well-engineered and focused emulated mind can evaluate one permutation in an average 1e4 seconds, a bit less than three hours, since it doesn’t have to worry that much about maintaining a society.
So that means the exhaustion process would require 1e18 × 1e4 × 1e40 = 1e62 computation steps. A kilogram of Drexlerian nanocomputers appears to be able to do something around 1e25 flops.
So if you were in a big hurry and wanted all simple new ideas ruined in just 300 years, you could just grab Jupiter, turn all of it into drextech computronium, and fill it with your loyal EM programs. A technologically advanced Kardashev II civilization might end up having exhausted a lot of simple ideaspace after a single millennium of hanging around in a single solar system.
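And a similar sanity check of the Jupiter-computronium figure (again my own sketch; Jupiter’s mass, roughly 1.9e27 kg, is the one number I am adding):

    sentences = 1e40          # ten-common-word permutations to evaluate
    seconds_each = 1e4        # emulated-mind time spent per sentence
    flops_per_mind = 1e18     # ~1 exaflops per emulated mind

    total_flop = sentences * seconds_each * flops_per_mind   # 1e62, as above
    jupiter_flops = 1.9e27 * 1e25                            # Jupiter's mass in kg times 1e25 flops per kg
    years = total_flop / jupiter_flops / (3600 * 24 * 365)
    print(f"{years:.0f} years")                              # about 170 years

which lands in the same few-hundred-year ballpark as the figure above.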
But that’s just not true. There is a finite limit to the length of text that can be produced. Evaluate a Busy Beaver function at Graham’s Number.
Now take the aforementioned maximum text length in characters. Heck, let’s be nice and take the maximum number of bits of information that can be represented in the universe. Raise that number to the power of itself. Now raise that number to the power of itself. You’re not even CLOSE to the number you got in the first paragraph. We’re quite a long way from infinity.
I think it’s OK to take “dreams” literally when contrasted in the same sentence with “waking”. I’ll give the writer the benefit of the doubt along one axis: either they were expressing insightless nonsense clearly, or they are not great at communicating their brilliant insights ;)
Indeed, it seems difficult to dream of the Kloopezur, infinite meta-minds whose n-dimensional point-thoughts are individual configuration frames in spacetime arrangements of relative velocities of all particles in our current universe, who have long solved the meta-problem of solving infinite problems with finite resources.
It seems particularly difficult to dream about the infinite lives of an infinity of Kloopezur.
Terry Pratchett, The Last Hero
Found here.
It seems to be a misquotation of this.
-- Slavoj Zizek, The Plague of Fantasies
“A car with a broken engine cannot drive backward at 200 mph, even if the engine is really really broken.”
--Eliezer
Good quote, of course, but it’s against one of the rules:
Out of curiosity, does that rule extend to, say, material originally posted on Yudkowsky’s personal site and later re-used or quoted as a source in a LW/OB article/post/comment? Is that a gray area?
Yes. It’s also slightly gray to post quotes from other prominent Lesswrongians.
Where I make my ‘slightly gray’ evaluation based on whether the quote is sufficiently badass to make it worth stretching the spirit of the thread. Sometimes they are. It’s when the quotes aren’t even all that good that I’d discourage it.
Hmm. So we’re weighing badass-ness (as in wedrifid’s comment (sister to this one)) against the “don’t post quotes that are already part of the general LessWrong gestalt” (in whatever capacity that exists) valuation, in such cases?
When did this rule come about? As recently as six months ago it was considered normal to quote EY as long as it wasn’t from LW.
I’m surprised by this. I never noticed this “considered normal”.
I’m pretty sure gray areas aren’t rules. The actual non-gray rule is listed in the OP.
Well, Yudkowsky was one of the top authors for 2011.
I figured the intent of the rule was “don’t turn quotes threads into LW ingroup circlejerks”, so the idea’s to not do any quotes from e.g. the people in the “Top contributors” sidebar, no matter where they showed up. Do other people have other interpretations for the rule?
tch. Should’ve caught that.
Frédéric Bastiat.
Prior to WW2, Germany was the biggest trading partner of France.
Irrelevant. The quote is not “If goods do cross borders, armies won’t.”
And of course, one of the historical peaks of globalization and European integration was reached in 1914.
Yeah, but at least with respect to Germany, that was on the basis of the Treaty of Versailles. History doesn’t offer lots of clean examples of anything, but this is a very dirty example of ‘trade, then war’.
The Great War, pre-Versailles, was a dirty example of ‘trade, then war’? I would have said it was a fantastic example, much better than pointing to French-German integration post-Versailles and pre-WWII...
Ah, I got my date wrong for the end of WWI, and so misinterpreted your comment. This is terribly embarrassing. You’re quite right (now that I look it up) that this is a very good example of ‘trade then war’.
ETA: Though now I suppose my complaint should be ‘If no trade, then war’ isn’t contradicted by cases of ‘trade, then war’. It would be contradicted by cases of ‘no trade, no war’.
True, logically it could just be the case that both trade and no trade lead to war… I think most people would interpret claims more meaningfully, however, in which case trade and war is useful to have examples of.
‘No trade then war’ could well be an informative causal claim, or a reliable generalization, even if it’s also sometimes true that war follows trade as well.
Conversely, I doubt there is much trading between Bhutan and Tuvalu, and I don’t expect them to fight anytime soon.
It is hard for both goods and armies to cross nonexistent borders. That doesn’t say anything about what happens between nations that do share a border.
I thought the quote’s intent was more general, and that the border didn’t need to be a physical one that both countries shared. E.g., if Spain and Britain were prohibiting all trade between them, Bastiat would probably expect them to fight soon.
Of course, he also implicitly meant the quote to apply to cases where the lack of trade was due to restrictions, not to distance and lack of interest like in my counterexample.
Libertarian quote, or rationality quote?
A libertarian would assert that it is both. (Most others would probably agree with claim or at least with the implied instrumental rationality related message.)
I happen to agree with the quote; I just don’t think it’s particularly a quote about rationality. Just because a quote is correct doesn’t mean that it’s a quote about how to go about acquiring correct beliefs, or (in general) accomplish your goals. The fact that HIV is a retrovirus that employs an enzyme called reverse transcriptase to copy its genetic code into the host cell is useful information for a biologist or a biochemist, because it helps them to accomplish their goals. But it is rather unhelpful for someone looking for a way to accomplish goals in general.
Traditional Aphorism
-- Princess Bubblegum
Attributed to Charles De Gaulle.
Trinity: “You always told me to stay off the freeway.” Morpheus: “Yes, that’s true.” Trinity: “You said it was suicide.” Morpheus: “Then let us hope that I was wrong.”
— The Matrix Reloaded
I think you must have made a mistake. This film doesn’t exist.
Hypothetical quotes are the best kind of quotes...
G.K. Chesterton
Nicholas Humphrey
I disagree. We’re obligated to do things to the best of our ability based on the knowledge we have. If those decisions have bad outcomes, that doesn’t mean our actions weren’t justified. Otherwise, you displace moral judgement from the here and now into inaccessible ideas about what will have turned out to be the case.
I guess there is a slight ambiguity in the way Nicholas Humphrey uses the word ‘right’ in the sentence: “none of this would give you a right to administer the poison”. I doubt he is making a moral statement. What he is pointing out is that your beliefs will have to be judged by reality. Your beliefs do not affect the fact that what you are administering is poison.
In fact, he points out that having incorrect beliefs might make you morally less culpable. But it doesn’t make you right.
Having incorrect beliefs and acting on them is the right thing to do. Acting on the right thing to do makes you right. I disagree with your reading of Humphrey’s statement because the idea that you can be less morally culpable given certain conditions still seems to imply a certain degree of culpability.
Your beliefs have to be judged by your other beliefs because pure objectivity is epistemically inaccessible. The quote is at best useless because no one intentionally poisons their child if they love their child. The quote serves to make other people (eg us) more overconfident in their (eg our) subjective beliefs that they (eg we) perceive as objective. It also makes us more willing to look down on the person who accidentally poisoned their child. I don’t like either of those things so I don’t like the quote.
Explanation for minuses anyone?
Well, I can’t speak for anyone else, but:
This raises immediate concerns, and that’s just in the first sentence.
Incorrect. If I believe that an apple will fall if I drop it, I can test this empirically by dropping an apple. I can judge many beliefs based on results, not on other beliefs.
Those seem to be the most blatant reasons to me.
Consider that from the inside it’s impossible to distinguish between the true beliefs that you have and the false ones that you have. I think any system of morality that doesn’t take place within a realistic understanding of human limitations is broken as a guide to human action. The converse of my first sentence is what seems truly concerning to me, the view that people who make mistakes are doing something wrong.
Your interpretation of empirical results is mediated by your subjective beliefs. For instance, you believe that empiricism is a valid way of knowing the objective world. You believe that the concept of an apple is a meaningful one, as is the concept of dropping and falling.
At this level of simplicity, it might seem trivial to mention such beliefs, but you shouldn’t deny that those beliefs are necessary for your argument to function. You are no god; you are an ape. You do not have access to pure and unfiltered objectivity. Since that’s true, if we used a moral system that forced us to abide by pure objectivity as opposed to subjective interpretations of objectivity, we would be totally paralyzed. The system would be functionally nihilistic: it couldn’t weigh competing possible futures at all, because all of our knowledge about possible futures is somewhat subjective.
Overconfidence in what seems objective and is objective is bad because it’s exactly the same as overconfidence in what seems objective and isn’t objective. We need to recognize that some aspects of our arguments are just necessarily going to depend on assumptions, because if we don’t recognize assumptions for what they are we cultivate habits of thought that help maintain existing biases.
Given that our mistakes and our successes are indistinguishable, we should give both the same moral status. The morality of a person must be judged by their intentions insofar as it’s possible to understand those intentions. Blaming someone for a genuine mistake is a morally wrong and illogical thing to do.
To put on my utilitarian hat for a moment, I would suggest that blaming someone for a genuine mistake is right inasmuch as it leads to better outcomes.
To wit, sometimes punishing genuine mistakes will correct behavior.
Also,
What is this supposed to mean? Was there an implied syllogism that I didn’t spot? Why did logic even enter the conversation?
It means that if certain assumptions are made, including about what basic words mean (I make no comment on whether these assumptions are correct), then reaching the conclusion that someone is to be blamed relies on making a logic error. So yes, you did miss an implied syllogism, perhaps because you don’t accept the implied premises and so don’t consider the syllogism important.
Asking this question is a dubious move. Commenting on whether things make any sense at all is reasonably relevant to most conversations, and this conversation seems to be about evaluating whether a line of reasoning (in a quote) is to be accepted. That line of reasoning being illogical would be pretty darn relevant if it happened to be true (and again, I’m not commenting on the validity of the required premises).
Yeah, I was just confused. I see “illogical” being used in situations that don’t seem to be about logic, and looking at a dictionary to see if I was assuming a wrong meaning didn’t seem to help.
So based on your explanation, it seems like if Alice says “illogical” to Betty like that, I should 1) assume Alice thinks Betty is making a logical argument, 2) figure out what logical argument Alice thinks Betty is supposed to be making, and 3) figure out what Alice thinks is wrong with that argument.
Of course, that sounds like a lot of work, so I’ll probably just start skipping over that word.
That seems practical. Usually a similar thing can be done with ‘immoral’ too, and ‘right’, and ‘should’.
We should distinguish between blame insofar as it’s useful and blame insofar as it suggests that there is something wrong with the person for making the initial decision. Don’t confuse different issues; there’s a difference between desirable metaethics and desirable societal norms.
It seems illogical to have a moral system which requires people to do something impossible.
My desirable metaethics contains a complete lack of the notion referred to as “blame”. The closest it gets is rewards for things that encourage prevention of things that would be “blamed” under other metaethics, such as the one which I currently hold (there are lots of hard problems to solve before that desired metaethics can be made fully reflectively coherent).
That seems to be begging the question. You posit that these things are objectively impossible, but assert that there is no way to obtain objective truth, and no way to verify the impossibility of something.
Also, a moral system which requires for maximal morality that all minds be infinitely kind requires all minds to do something impossible, yet seems like an extremely logical moral system (if kindness is the only thing valued). You can have unbounded variables in moral systems. I see no error there.
Not illogical, just annoying and pointless.
Shut up and do the impossible.
I think that concept is overused. Almost all impossible things are not worth attempting.
Regardless, that lesson does not apply here. That lesson is useful because sometimes, when our goal is to try, we don’t actually try as much as we possibly could. But if something is literally impossible, in the sense that even if your willpower were dozens of times stronger you couldn’t do it, the lesson is no longer useful. No matter how hard or how long I try, I will not be able to have a perfectly objective view of the world. Recognizing my limitations is important because otherwise I waste effort and resources.
That’s not the situation here.
The converse would be something along the lines of “Having correct beliefs is the right thing to do”. This converse would be something that I do agree with. Now, it is highly unlikely that all of my beliefs are correct all of the time; in fact, in the past, I have noticed that some of my beliefs were wrong. Having correct beliefs in every respect is highly unlikely; it is an ideal towards which to strive, not an achievable position.
Having incorrect beliefs, on the other hand, is a situation which one should strive to avoid, and is certainly not the right thing to do.
Having said that, though, acting in the way that one believes to be correct is the right thing to do. (If that is what you meant by your first sentence, then you phrased it poorly).
They are not indistinguishable. If I run an experiment, I can state beforehand the results which I expect to observe. If my expectations match my observations, I call that ‘success’; if they do not, I call that ‘failure’. Both my observations and my expectations exist. Whether there is actually an apple, and whether or not the concept of ‘falling’ or the action of ‘dropping’ is possible, it remains nonetheless true, at some point in time, that:
I have a memory of planning to drop the apple.
I have a memory of expecting to observe the apple falling after dropping it.
I have a memory of dropping the apple.
I have a memory of the apple either falling or not falling.
Whether these four memories correspond to anything that happened in the past outside of my head is something that a philosopher may debate. However, they form a basis for me to distinguish between success (I have a memory of the observation of the apple falling) and failure (I have a memory of the observation of the apple not falling).
I did not mean to imply that anyone should receive any blame for a genuine, unavoidable mistake. (In certain cases, a person can be legitimately blamed for failing to check a belief; for example, if a person fails to check the belief that his campfire has gone out, then he can be blamed when the campfire burns down the forest. If he does check, and suitably thoroughly, then it’s different.)
So yes, the morality of a person should be judged by their intentions. But correct beliefs mean that beneficial intentions will more likely result in beneficial outcomes; therefore, it is important to attempt to acquire correct beliefs.
No, we’re obligated to make sure we have enough knowledge and to gather more knowledge if we don’t. If you believe that you don’t have the time and/or resources to do this, that’s also a decision with moral consequences.
In other words, it’s not enough to merely try to make the correct decision.
The possibility that more information will change your recommended course of action is one that has to be weighed against the costs of acquiring more information, not a moral imperative. One can always find oneself in a situation where the evidence is stacked to deceive one. That doesn’t mean that before you put on your socks in the morning you ought to perform an exhaustive check to make sure that your sock drawer hasn’t been rigged to blow up the White House when opened.
You use only the resources you have, including your judgement, including your metajudgement.
Somebody should start a sister site, Less Culpable. It might be More Useful.
What does having a ‘right’ mean in this context? Is Humphrey trying to say that other observers who know that the vial contains poison aren’t obliged to allow the confused parent to administer the poison? I suppose that would be a reasonable point to make. If he is only talking in the sense of degree of blame assigned to the confused parent then his claim is more ethically questionable.
This seems like a “definition of right” quote rather than a moral statement. I’d rather just say “being certain that poison is good for your child makes you subjectively right, but not objectively right, to administer it.” Or if those terms are already being used for something else, we can make up new words.
Then of course we might ask, for example: when determining if criminal action is appropriate, does it matter whether the criminal had a subjective but not an objective right to commit the crime? And that would be an interesting question. In absence of a context, it’s pointless to discuss which of two things should be called “right”.
http://www.slate.com/articles/health_and_science/human_evolution/2012/10/evolution_of_anxiety_humans_were_prey_for_predators_such_as_hyenas_snakes.2.html
Nietzsche, The Gay Science
-Mägo de Oz
Not only is this false; I would make the counterclaim: “There can be causes that have been abandoned that are less ‘lost’ than other causes that have not been abandoned.”
Let’s contrive an example: if everyone abandoned the cause “prevent global warming over the time scale of 30 years”, it would still be less of a lost cause than the cause “raise this child with faith in God such that she is accepted into eternal life in heaven”, even though there may be several people diligently and actively working toward the latter cause.
As a rule of thumb, the word “truly” in a claim constitutes the announcement “This claim probably relies on No True Scotsman”.
A general rebuttal: Having a misunderstanding of the territory may cause you to formulate a goal that cannot be realized. However, a rational agent may well work to maximize his utility by approximating the resolution of the goal—decreasing the distance as much as possible between the territory and the goal-state. Not working towards the goal does not maximize utility.
Utility functions aren’t necessarily monotonic.
First, you haven’t supported your first statement at all: if everyone stopped trying to prevent global warming, what is the probability of successfully preventing global warming? Global warming could be averted by events such as a supervolcano or comet impact, but “preventing global warming” is a subgoal of “preserve the environment” or “reduce existential risk”, so such disasters would not really count as accomplishing the task.
Second, if your mission is to raise a child that is accepted into heaven, it could be successful if somebody creates an AI which simulates the Christian God and uploads dead people into simulated realities in engineered basement universes or something.
I didn’t support the first statement at all because it didn’t need supporting. In fact, I chose a goal that is extremely unlikely to succeed so that it couldn’t be claimed that the selected ‘cause’ was too redundant a cause to be meaningful. The reason the first statement needs little support is that the alternative includes the subgoal of making an omnipotent being exist, rewriting history such that He created the universe and all that is in it, and causing an entirely new ‘heavenly’ reality to come into being. You yourself provided two ways that make ‘prevent global warming’ less of a lost cause than making God exist, have always existed, and be the cause of all that is. (The capitalisation of ‘God’ indicates reference to the specific god that did those things, not some computer that someone wants to call a ‘god’.) If you want another example that is less destructive, just try “someone builds an FAI and the FAI fixes global warming as a side effect”; that is at least possible within the laws of physics, even if it is rather difficult.
Prevention of global warming could be adopted as a cause due to it being instrumentally useful in achieving some other goal. That doesn’t mean it isn’t a cause or that achieving the goal doesn’t mean the goal is achieved. Ultimately all causes could be declared to be the mere subgoals of another goal, right up to an ultimate cause of “maximise expected utility”.
If I asked the believers in question whether a simulation of an upload of their dead child is what their goal is they would disagree. Causes being abandoned and substituted for other more practical goals is a boon for those adjusting their strategic priorities but still means the original cause is lost.
The quote “The only truly lost cause is that which has been abandoned” is simply denotatively false, even though it can be expected to be the kind of thing people use to be inspirational. The kind of quotes I like to see are those that manage to be actually correct while also being insightful or inspirational.
I agree with you that a cause does not become “truly lost” simply because you abandon it; you might just get lucky and have your goal state realized by some unforeseeable process. So yes, the quote is strictly denotationally false. But “we might get lucky and see our goal realized through dumb luck even after we’ve given up” is not a really valuable heuristic to have. “Shut up and do the impossible” is a valuable heuristic, and that’s what I got from the quote, reading between the lines.
Quotes that have to have the meaning of their words redacted and replaced with another meaning from your own cached wisdom, one that is actually a sane message, are not rationalist quotes. They belong on the bottom of posters in some corporate office, not here.
There are billions upon billions of statements people have made, millions of which can be shaped as quotable sound bites. Among those there are still countless thousands which are both correct and contain an insightful message. We just don’t need to scrape the bottom of the barrel and quote anything that triggers an applause light for a desired virtue, regardless of whether it actually makes sense.
Point taken. It’s not raising the level of discourse on Less Wrong or this quote thread; it’s just a fun quote that pattern-matches to approved Less Wrong virtues, as you say. I’m defending the quote mostly because your first reply seemed kinda uncharitable.
I mean, a quote I posted last month “If at first you don’t succeed, switch to power tools” got voted up to +14, and nobody said “Actually that is incorrect, there are situations where switching to power tools won’t help at all LOL.”
Perhaps the difference in reception (and certainly the difference in my reception) is that this example barely even pretends to be a rationalist quote. It’s more a macho-engineer joke. The quote here, on the other hand, does pretend to be rationalist, giving advice and making declarations about optimal decision making. This means it triggers my ‘bullshit’ detectors. It is a claim being made for reasons completely independent of whether it is actually true or not. This means that, while I don’t see why the power tools joke managed to get to +14 in a rationalist quotes thread rather than, say, +5, it isn’t going to outrage me to see it upvoted significantly.
Note that the ancestor quote about causes made an absolute claim about the nature of reality, whereas the power tools thing just offers a problem-solving heuristic that works sometimes. The difference is significant (to some, including myself).
Fair enough. I will shut up now.
That sounds like a Dark Wizard giving you a free pass to ignore the Sunk Cost Fallacy or something. Better come up with something that’ll make us also consider the win potential of a cause, and whether it would actually be better for a cause to be “lost” or abandoned, if we don’t want to fall prey to the trap.
Good point. Edited to include the rest of the verse from the song. To me it says “shut up and do the impossible”.
That’s much better.
The Unbinding by Walter Kirn