Rationality Quotes Thread March 2015
Another month, another rationality quotes thread. The rules are:
Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself.
Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you’d like to revive an old quote from one of those sources, please do so here.
No more than 5 quotes per person per monthly thread, please.
Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
Misao Okawa, the world’s oldest person, when asked “how she felt about living for 117 years.”
--Richard Feynman, source. Full video (The above passage happens at about the 7:00 mark in the full version.)
N.B. The transcript provided differs slightly from the video. I have followed the video.
Related to: Replace the Symbol with the Substance
Feynman knew physics but he didn’t know ornithology. When you name a bird, you’ve actually identified a whole lot of important things about it. It doesn’t matter whether we call a Passer domesticus a House Sparrow or an English Sparrow, but it is really useful to know that males and females are the same species, even though they look and sound quite different; and that these are not all the same thing as a Song Sparrow or a Savannah Sparrow. It is useful to know that Fox Sparrows are all Fox Sparrows, even though they may look extremely different depending on where you find them.
Assigning consistent names to the right groups of things is colossally important to biology and physics. Not being able to name birds for an ornithologist would be like a physicist not being able to say whether an electron and a positron are the same thing or not. Again it doesn’t matter which kind of particle we call electron and which we call positron (arguably Ben Franklin screwed up the names there by guessing wrong about the direction of current flow) but it matters a lot that we always call electrons electrons and positrons positrons. Similarly it’s important for a chemist to know that Helium 3 and Helium 4 are both Helium and not two different things (at least as far as chemistry and not nuclear physics is concerned).
Names are useful placeholders for important classifications and distinctions.
I think Feynman’s point was that a name is meaningful if you already know the other information. I can memorize a list of names of North American birds, but at the end I’ll have learned next to nothing about them. I can also spend my days observing birds and learn a lot without knowing any of their names.
I don’t think anyone will disagree with this. The hard part, though, is properly setting up the groups in the first place. Good classification systems took years (or centuries) of work and refinement to become the systems we take for granted today.
Feynman has been quoted elsewhere criticizing students for parroting physics terminology without having the least idea of what they’re actually talking about. There’s the anecdote about students who knew all about the laws of refraction but failed to identify water as a medium with a refractive index.
Feynman wasn’t really wrong, he just failed to mention that if you want to remember anything about a certain bird that you observed you will have to invent a name for it, because ‘the traveler hath no memory’. Original names are OK if you only want the knowledge for yourself.
I’m reminded of another Feynman anecdote: when he invented his own mathematical notation in middle school. It made more sense to him, but he soon realized that it was no good for communicating ideas to others.
Every time I try to learn to sight-sing I get sidetracked by trying to invent better notation for music.
After many repeats of this process I’ve decided that music notation is pretty good, given the constraints under which it used to operate.
Now I’m trying to just force myself to learn to sight-sing, already.
Did you deliberately pick this example, where Feynman speculated that they might be the same thing?
Names are useful as shorthand for a bundle of properties—but only once you know the actual bundle of properties. I sometimes think science should be taught with the examples first, and only given the name once students have identified the concept.
Semantics are important. On the other hand, you don’t get additional knowledge from learning the name in an additional language that draws the same semantic borders around the concept.
Yes, this is true.
Knowing the name of the bird tells you next to nothing about it, but once you know the name of the bird it becomes much easier for people to tell you about it.
Also related: Guessing the Teacher’s Password
[Transcript from video, hence long and choppy]
-- Tyler Cowen, from a talk on neurodiversity
Suppose I agree with someone’s conclusion, and disagree with them on the method used to reach that conclusion. Are we political allies, or enemies? That is, of course, “politics” is the answer to ‘why should the battle lines be drawn this way?’
Now, for Tyler as a pundit, the answer is different. Staying in an intellectual realm where he thinks like the other people around him makes it so any disagreements are interesting and intelligible.
This is sort of related to what Scott argues in “In Favor Of Niceness, Community, And Civilization”.
I think the reasons for Tyler’s positions are deeper than that.
Don’t think in terms of a single-round game, think in terms of a situation where you have to co-exist with the other party for a relatively long time and have some kind of a relationship with it.
The conclusions about a particular specific issue of today are not necessarily all that important compared to sharing a general framework of approaches to things, a similar way of analyzing them...
I also had in mind this bit of wisdom from Robin.
As stated, this primarily matters for pundits. Notice that the methods of thinking that he’s talking about don’t reliably lead to the same conclusions; different values and different facts mean that two people who think very similarly (i.e. structure arguments in the same way) may end up with opposite policy preferences, able to look at each other and say “yes, I get what you think and why you think it, but I think the opposite.” And so a particular part of the blogosphere will discuss policies in one way, another part another way, it’ll be discussed a third way on television, and so on. But the battle lines will still be drawn in terms of conclusions, because policy conclusions are what actually get implemented, and it doesn’t seem sensible to describe the boundaries between the areas where policies are discussed as “battle lines,” when what they actually are is an absence of connections.
When dealing with someone who comes to different conclusions than I do, but whose way of thinking I understand well, it’s relatively easy for me to negotiate with them—I can predict what offers they’ll value, and roughly to what degree, and what aspects of their own negotiating position they’re likely to be OK with trading off.
Whereas negotiating with someone whose way of thinking I don’t understand is relatively hard, and I can expect a significant amount of effort to be expended overcoming the friction of the negotiation itself, and otherwise benefiting nobody.
Of course, I don’t have to negotiate with someone who agrees with me, so in the short term that’s an easy tradeoff in favor of agree-on-conclusions.
But if I’m choosing people I want to work with in the future, it’s worth asking how well agreeing on conclusions now predicts agreeing on conclusions in the future, vs. how well understanding each other now predicts understanding each other in the future. For my own part, I find mutual understanding tends to be more persistent.
That said, I’m not sure whether negotiation is more a part of what you’re calling “politics” here, or what you’re calling “punditry,” or neither, or perhaps both.
But negotiation is a huge part of what I consider politics, and not an especially significant part of what I consider punditry.
I continue to disagree. This matters a lot for people who are interested in maintaining the status quo and are very much against any drastic and revolutionary changes—which often enough come from a different way of thinking.
“Are we political allies, or enemies?” is rather orthogonal to that—your political allies are those whose actions support your goals and your political enemies are those whose actions hurt your goals.
For example, a powerful and popular extreme radical member of the “opposite” camp who has conclusions that you disagree with, uses methods you disagree with, and is generally toxic and spewing hate—that’s often a prime example of your political ally, one whose actions incite the moderate members of society to start supporting you and focusing on your important issues instead of something else. The existence of such a pundit is important to you; you want them to keep doing what they do and have their propaganda actions be successful up to a point. I won’t go into examples of particular politicians/parties of various countries, as that gets dirty quickly, but many strictly opposed radical groups are actually allies in this sense against the majority of moderates; and sometimes they actively coordinate and cooperate despite the ideological differences.
On the other hand, a public speaker who targets the same audience as you do, shares the same goals/conclusions that you do and the intended methods to achieve them, but simply does it consistently poorly—by using sloppy arguments that alienate part of the target audience, or by disgusting personal behavior that hurts the image of your organization. That’s a good example of a political enemy, one that you must work to silence, to get them ignored and not heard, despite being “aligned” with your conclusions.
And of course, a political competitor who does everything that you want to do but holds a chair/position that you want for yourself is also a political enemy. Infighting inside powerful political groups is a normal situation, and when (and if) it goes public, very interesting political arguments appear to distinguish one from their political enemy despite sharing most of the platform.
That’s not how other humans interpret “alliance,” and using language like that is a recipe for social disaster. This is a description of convenience. Allies are people that you will sacrifice for and they will sacrifice for you. The NAACP may benefit from the existence of Stormfront, but imagine the fallout from a fundraising letter that called them the NAACP’s allies!
Whether or not someone is an ally or an enemy depends on the context. As the saying goes, “I against my brother, and I and my brother against my cousins, I and my brother and my cousins against the world”—the person that has the same preferences as you, and thus competes with you for the same resources, is potentially an enemy in the local scope but is an ally in broader scopes.
Allies are those who agree to cooperate with you. An alliance may be temporary, limited in scope, and subject to conditions, but in the end it’s all about cooperation. A stupid enemy who makes mistakes certainly benefits your cause and is a useful tool, but he’s no ally.
Scott Sumner
In practice, the economic “long run” can happen exceedingly quickly. Keynes was probably closer to right with “Markets can remain irrational longer than you can remain solvent,” but if you plan on the basis of “in the long run we are all dead,” you might find out just how short that long run can be.
If we need to look to economics for rationality quotes, we are getting towards the bottom of the barrel, Robin Hanson notwithstanding.
Macroeconomics? Sure, it’s highly politicized, so in many cases I’ll agree with that. But microeconomics is in many ways the study of how to rationally deal with scarcity. IMO, traditional micro assuming homo economicus is actually more interesting (and useful, outside of politics) than the behavioral stuff for this reason.
-- Émile Auguste Chartier, Propos sur la religion, 1938
Jon Elster, Explaining Social Behavior: More Nuts and Bolts for the Social Sciences, Cambridge, 2007, pp. 136-137, n. 16
I just realized what bothers me about this quote. It seems to boil down to Elster trying to admit that he was wrong without having to give credit to those who were right.
Yup, he appears to be doing that. On the grounds that he has other reasons for thinking they don’t deserve credit for it.
Rather than commenting on the credibility of that in Elster’s specific case (which would depend on knowing more than I do about Elster and about the anti-communists he paid attention to), I’ll remark that there certainly are cases in which most of us here would do likewise. (Not literally zero credit, but extremely little, which I think is also what Elster’s doing.) For instance:
One of your friends is an avid lottery enthusiast and keeps urging you to buy a ticket “because today might be your lucky day”. He disdains your statements that buying lottery tickets is a substantial loss on average and insists that he’s made a profit from playing the lottery. (Maybe he actually has, maybe not.) Eventually you give in and buy one ticket. It happens to win a large prize.
Another of your friends is a fundamentalist of some sort and tells you confidently that the current scientific consensus on evolution is all bunk. Any time she reads of any scientific claim about evolution she is liable to tell you confidently that in time it’ll be refuted by later research. One day, a new discovery is made that refutes something you had said to her about evolution (e.g., that X is more closely related to Y than to Z).
Another worships the ancient Roman gods and tells you with great confidence that it will rain tomorrow because he has made sacrifice to Jupiter, Neptune and the lares and penates of his household. You are expecting a dry day because that’s what the weather forecasts say. It does in fact rain a bit.
Is being anti-lottery some kind of badge of honor amongst intelligent people? It is entertainment, not investment. It is spending money to buy a feeling of excited expectancy. It is like buying a movie ticket. Does anyone consider buying a ticket to a scary horror movie irrational? Some people just like that kind of excitement. People who buy lottery tickets just like different kinds of excitement, dream, fantasy.
As for the argument that it is a mis-investment of emotions, that is also false: people can decide to work toward the goal, and then what happens is a lot of grinding; they can still dream about something else, since it is not like you cannot dream while you grind. Realistic goals do not need a lot of dream investment but rather time and effort, so it is safe to invest dreams in unrealistic ones.
When I read Eliezer’s mis-investment-of-emotions argument it came across to me as an elitist Bay Area upper-middle-class thing. People in slums usually need to grind until they get better schooling and job experience to escape, which takes time investment, not dream investment, and this leaves them free to dream about one day being a prince.
I think this is factually untrue. It seems to me that time and effort investment follows dream investment, for basic psychological reasons.
I think that’s because you misread it, or you’re identifying correct financial attitudes with being upper middle class and throwing in the rest of the descriptions for free. Here’s the part where he talks about mechanisms:
Going to technical school is not an “elitist Bay Area upper middle class thing.” Yes, later he talks about dot-com startups doing IPOs, but the vast majority of new businesses started are things like barbershops and restaurants, and people go to technical school to learn how to repair air conditioning systems, not to learn how to make Yelp. A person who dreams about owning their own barbershop or being an AC repairperson or demonstrating enough responsibility at work to earn a promotion is likely to do better than someone who dreams about being a lottery prince.
That is, I think a key component of grinding successfully is dreaming about grinding.
Basically you are saying constant grinding requires constant motivation—or discipline?
But in reality all it takes is the precommitment of shame.
Example 1: you come from a working-class or slum family and get into a university as the first one in the family. Your mom and grandma brag to the whole extended family and neighborhood about what a genius you are. At that point you are strongly precommitted, not exactly through your own choice: you don’t want the shame of letting down 100 people who treat you like a genius by dropping out.
Example 2: you get your first real job and it sucks, but your dad has been proudly supporting a family for 25 years now on a similarly sucky one, and for you to get his approval / not feel ashamed in his eyes you need to stick to it until you get enough experience for a better one.
I think the elitism part is precisely in the lack of this kind of shame-precommitment: elites have discretionary goals, doing what they want, not what they must to get ahead, and thus need constant motivation. If you would quit a job once it stops being fun, you are of the elites in this sense. If you stick to it until it does not feel shameful to quit, then not.
And this is why for the majority constant motivation is not required for constant grinding.
I think it’s quite the reverse: elites have strong shame-precommitment, it’s only a few levels higher. All your family went to Harvard and you’re going to fail?? Your ancestors have PhDs three generations back and you’re not enrolling in a graduate program?? X-D
Of course I mean elites not of the Kardashian kind.
I was careful to specify that your hypothetical friend enjoins you to buy lottery tickets on the grounds that it is good for you financially. I agree that if you get great enjoyment from the thought that you might win the lottery, buying lottery tickets may be worth it for you.
(But two caveats on that last point. Firstly, if you enjoy daydreaming about getting rich then you can equally daydream about unexpected legacies, spectacular success of companies in your pension/investment portfolio if you have one, eccentric billionaire arbitrarily giving you a pile of money, etc. Of course these are improbable, but so is winning much in the lottery. Secondly, “dream investment” may lead you astray by, e.g., making all the most mentally salient paths to success the terribly improbable ones involving lotteries rather than the more-probable ones involving lots of hard work, and demotivating the hard work. Whether it actually has that effect is a question for the psychologists; I don’t know whether it’s one that’s been answered.)
Good point; I’m retracting my comment elsethread.
I’m guessing the hard part is figuring out which way the causation goes—maybe not having mentally salient paths to success involving lots of hard work makes people more likely to buy lottery tickets, rather than or as well as vice versa.
Why do you need to pay money to someone in order to daydream?
The problem is that “dreaming” often replaces grinding.
Don’t people who go to amusement parks or Disneyland basically pay other people in order to have a daydream session? I mean, I can’t imagine people walking around dreaming about winning a lottery, it would be Charlie and the Chocolate Factory. (Now that’s a book about humanity outcompeted by a more profitable life form under the guidance of an omnipotent being.)
No, they pay other people to provide experiences for them, experiences which they can’t get otherwise on their own.
How is ‘you can safely put on a princess’s dress when you are in certain company, and pay in some amount of social embarrassment if you are wrong about the company’ different from ‘you can safely pay a small amount for a chance to put on any dress you want in any company whatsoever’? Buying a ticket is an experience you can’t get otherwise on your own. (I mean yes, I largely agree with you, but I am not sure what exactly I agree with, therefore the nitpicking.)
Huh? I don’t understand.
Well, in what way is buying a ticket not paying other people to provide you an experience which you can’t get otherwise on your own? Earning money is different, you expect to be paid a fixed sum and for many, there are multiple ways to do it.
In the way that I can, on my own, daydream about having a million dollars. I don’t need to pay other people for that.
If you want a strictly positive chance at getting a million dollars and the thrill of looking up the lottery drawings to see if you won, then you have to pay for it. People buy lottery tickets to have a fleeting, tangible hope, not just an imagined one.
You have a strictly positive chance of having a rich relative you don’t know about die and leave her fortune to you.
Ah, that’s a good point. Yes, if you want the gambling thrill, then you have to pay for it, I agree. However from the expected-loss point of view, going to a casino is much better than buying lottery tickets...
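For a rough sense of scale, here is a sketch of the expected-loss comparison being made; the house-edge figures below are commonly cited approximations, not anything asserted in this thread:

```python
# Rough expected-loss comparison between casino games and a typical lottery.
# The edge figures are commonly cited approximations, used here only for illustration.
stake = 100.0
house_edge = {
    "European roulette (single zero)": 0.027,
    "American roulette (double zero)": 0.0526,
    "Typical state lottery": 0.45,  # roughly half of ticket revenue is returned as prizes
}
for game, edge in house_edge.items():
    print(f"{game}: expected loss on ${stake:.0f} wagered is about ${stake * edge:.2f}")
```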
For that matter, there’s a strictly positive chance that a meteor made of two tons of platinum will fall from the sky tomorrow and flatten my car in the driveway before I’m done brushing my teeth. The probability of almost anything you can think of is going to be positive, unless it’s physically impossible—and even there you have model uncertainty to take into account.
Yes, of course, which is why talking about “strictly positive chances” in this context (of suddenly acquiring wealth) is kinda silly.
No, you give them appropriate credit for their correct predictions, and appropriate discredit for their incorrect predictions.
You shouldn’t give credit or discredit directly for correctness of predictions, if you have information about how those predictions were made. If you saw someone make their guess at tomorrow’s Dow Jones figure by rolling dice, you don’t then credit them with any extra stock-market expertise when it happens that their guess was on the nose; they just got lucky. (Though if they do it ten times in a row you may start to suspect that they have both stock-market expertise and skill in manipulating dice.)
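The “ten times in a row” point is ordinary Bayesian updating. A minimal sketch with purely illustrative numbers (the likelihoods and the prior below are made up, not taken from the comment):

```python
# Posterior probability of "skill (or dice manipulation)" after n on-the-nose guesses,
# under made-up likelihoods: P(hit | luck) = 0.1, P(hit | skill) = 0.9, prior P(skill) = 0.001.
def posterior_skill(hits, p_hit_luck=0.1, p_hit_skill=0.9, prior_skill=0.001):
    weight_skill = p_hit_skill ** hits * prior_skill
    weight_luck = p_hit_luck ** hits * (1 - prior_skill)
    return weight_skill / (weight_skill + weight_luck)

print(posterior_skill(1))   # ~0.009: one lucky hit barely moves you off the prior
print(posterior_skill(10))  # essentially 1: ten in a row make pure luck a very poor explanation
```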
Substantial? The tickets of all lotteries I’m familiar with cost less than a movie ticket.
Yes. Households earning less than $13,000 a year spend a shocking 9% of their money on lottery tickets.
Someone else follows the citation trail and claims the source thinks the actual number is lower:
Upvoted for checking claims :-)
The link actually says that he cannot find the original source for the 9% number, but in the process found a 3% number.
I’ll dig around for better numbers if I have time, but we can also look at significance from the other end:
(Wikipedia)
P.S. An interesting paper. Notable quotes:
And also
Okay, now I can see where all the people giving financial reasons why lotteries are bad are coming from.
$300/year (unless someone is a bored millionaire) is still shocking to me.
Assume a flat distribution from 0 to 10000 and it’s $150 a year, or about a lottery ticket and a half per week at $2 a ticket. Not too unreasonable. But on the other hand, you’ve got to figure lottery spending’s unevenly distributed, probably following something along the lines of the 80/20 rule, and that brings us back to a ticket a day or higher.
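A quick check of the arithmetic above, using the figures quoted in this sub-thread ($2 tickets, the 3% spending share, incomes spread flat from $0 to $10,000):

```python
# Back-of-the-envelope check of the lottery-spending figures discussed above.
mean_income = (0 + 10_000) / 2      # flat distribution from $0 to $10,000
spending_share = 0.03               # the 3% figure found by following the citation trail
annual_spend = mean_income * spending_share
print(annual_spend)                              # 150.0 dollars per year
print(annual_spend / 2.0 / 52)                   # ~1.44 tickets per week at $2 each

# If 20% of households account for 80% of that spending (the 80/20 guess above),
# the heavy spenders average 0.8 * 150 / 0.2 = $600/year, i.e. close to a ticket a day.
print(0.8 * annual_spend / 0.2 / 2.0 / 365)      # ~0.82 tickets per day
```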
Seems plenty unreasonable to me. If your income is somewhere on “a flat distribution from 0 to $10000” then you are probably just barely getting by, and perpetually one minor financial difficulty away from disaster. If you were able to save $150/year, that could make a really substantial difference to your financial resilience.
(Though I don’t much like pronouncing from my quite comfortable position on how those in poverty should spend their money. It’s liable to sound like a claim of superiority, but in fact I do plenty of stupid and counterproductive things and it’s entirely possible that if I were suddenly thrown into poverty I’d manage much worse than those people; I doubt I’d be buying lottery tickets, but I’d probably be making other mistakes that they don’t.)
[EDITED to fix a bit of incredibly clunky writing style.]
It still breaks my formerly favourite analogy, movie tickets—I don’t think the average household making <$10k/year spends $150/year on movie tickets. (Some such households probably do, but I strongly doubt the average one does.)
But more on booze, probably, otherwise how could they bear it.
A family of four can probably blow $50 seeing one movie.
Substantial as a fraction of what you spend on lottery tickets. Obviously if you don’t spend much you can’t lose much.
I think you’ve boiled away Elster’s message there. He’s vivifying a general observation about (meta-)motivated cognition (the first quoted sentence) with an embarrassing personal example (the following sentences).
Jane Elmer, Explaining Anti-Social Behavior: More Amps and Volts for the Social Sciences
EDIT: In case it wasn’t clear, I disagree that “it is often easy to detect the operation of motivated belief formation in others”. Also, when your opponents strongly believe that they are right and are trying to prevent a great harm (whether they have good arguments or not), this often feels from the inside like they are “manifestly hysterical”.
Or just:
How is having a paragraph that applies to [$POLITICAL_BELIEF] not the same as making a fully general argument?
Or are you just saying that the original statement about Communism was a fully general argument?
I’m saying that I think the original quote (which I did think was good) would have been improved qua Rationality Quote by removing the specific political content from it. (Much like the “Is Nixon a pacifist?” problem would have been improved by coming up with an example that didn’t involve Republicans.)
I think the problems associated with providing concrete political examples are in this case mitigated by the author’s decision to criticize people on opposite sides of the political debate (Soviet communists and hysterical anti-communists), and by the author’s admission that his former political beliefs were mistaken to a certain degree.
True.
I do appreciate the correct classification of global warming as a political belief :-D
I was substituting “[$POLITICAL_BELIEF]” for “Communism”, which is what Pablo_Stafforini’s quote referred to.
But I could also use it for “global warming” without making a statement against anthropogenic climate change, considering that even people who believe the science on climate change is mostly settled can also believe that
climate change is political in the “Politics is the Mind-Killer” sense
how we should respond to climate change is in large part a political question
Emanuel Lasker
It’s also not always good advice. Sometimes you should just satisfice. Chess is often one of these times, as you have a clock. If you see something that wins a rook, and spend the rest of your time trying to win a queen, you’re not going to win the game.
Of course it isn’t. But I don’t think that’s a very good standard to be holding most forms of advice to. Very little advice is always good advice; nearly all sayings have exceptions. The fact is, however, that Lasker’s (sort of Lasker’s, anyway) quotation is useful most of the time, both in chess and out of chess (since unless you’re playing a blitz game, you’re likely to have plenty of time to think), and for a rationality quote, that suffices.
I don’t think that’s the case. Of LW I would expect that more people suffer from perfectionism than there are people who optimize for satisfaction too much.
On LW, certainly. In general, no.
This raises an interesting question—what should I do with Rationality Quotes entries which I think are preaching to the choir, i.e. they are good advice for most of the general population, but most of the people who will actually read them here would do better to reverse them? Should I upvote them or downvote them?
Would you rather see more quotes like that?
Or fewer?
Or are you not sure?
It’s not at all obvious to me that the failure mode of not looking for a better move when you’ve found a good one is more common than the failure mode of spending too long looking for a better move when you’ve found a good one—in general, I think the consensus is that people who are willing to satisfice actually end up happier with their final decisions than people who spend too long maximising, but I agree that this doesn’t apply in all areas, and that there are likely times when this would be useful advice.
In the particular example I gave, if you’ve already found a move that wins a rook, then it’s all-but irrelevant if you’re missing a better move that wins a queen, as winning a rook is already equivalent to winning the game, but there are obviously degrees of this (it’s obviously not irrelevant if you settle for winning a pawn and miss checkmate). This suggests you should be careful how you define a “satisficing” solution, but not necessarily that satisficing is a bad strategy (in the extreme, if your “good move” is a forced checkmate, then it’s obviously a waste of time to look for a “better move”, whatever that might mean).
Hm… I’m not sure you’re interpreting me all that charitably. You keep on mentioning a dichotomy between satisficing and maximizing, for instance, as if you think I’m advocating maximizing as the better option, but really, that’s not what I’m saying at all! I’m saying that regardless of whether you have a policy of satisficing or maximizing, both methods benefit from additional time spent thinking. Good satisficing =/= stopping at the first solution you see. This is especially common in programming, I find, where you generally aren’t under a time limit (or at least, not a “sensitive” time limit in the sense that fifteen extra minutes will be significant), and yet people are often willing to settle for the first “working” solution they see, even though a little extra effort could have bought them a moderate-to-large increase in efficiency. You can consciously decide “I want to satisfice here, not maximize,” but if you have a policy of stopping at the first “acceptable” solution, you’ll miss a lot of stuff. I’m not saying satisficing is bad, or even that satisficing isn’t as good an option as maximizing; I’m saying that even when satisficing, you should still extend your search depth by a small amount to ensure you aren’t missing anything. (And I’m speaking from real life experience here when I say that yes, that is a common failure mode.)
In terms of the chess analogy (which incidentally I feel is getting somewhat stretched, but whatever), I note that you only mention options that are very extreme—things like losing rooks, queens, or getting checkmated, etc. Often, chess is more complicated than that. Should you move your knight to an outpost in the center of the board, or develop your bishop to a more active square? Should you castle, moving your king to safety, or should you try and recoup a lost pawn first? These are situations in which the “right” move isn’t at all obvious, and if you spot a single “good” move, you have no easy way of knowing if there’s not a better move lurking somewhere out there. Contrast the situation you presented involving winning a pawn versus checkmating your opponent; the correct move is easy to see there. In short, I feel your chess examples are a bit contrived, almost cherry-picked to support your position. (I’m not saying you actually did cherry-pick them, by the way; I’m just saying that’s how it sort of feels to me.)
So basically, to summarize my position: when you’re stuck dealing with a complicated situation, in chess and in life, halting your search at the first “acceptable” option is not a good idea. That’s my claim. Not “maximizing is better than satisficing”.
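As a toy illustration of this position (not anyone’s actual decision procedure), compare stopping at the very first “acceptable” option against spending a small, bounded amount of extra search after finding one:

```python
import random

random.seed(0)
options = [random.random() for _ in range(50)]  # hypothetical solution qualities in [0, 1]
ACCEPTABLE = 0.7                                # satisficing threshold
EXTRA_LOOKS = 5                                 # small, bounded additional search budget

def stop_at_first_acceptable(xs, threshold):
    return next((x for x in xs if x >= threshold), max(xs))

def satisfice_with_a_second_look(xs, threshold, budget):
    i = next((i for i, x in enumerate(xs) if x >= threshold), len(xs) - 1)
    return max(xs[i:i + 1 + budget])            # keep looking a little past the first hit

print(stop_at_first_acceptable(options, ACCEPTABLE))
print(satisfice_with_a_second_look(options, ACCEPTABLE, EXTRA_LOOKS))
# The second policy can only do as well or better, at a cost bounded by EXTRA_LOOKS.
```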
Taken literally, this is obviously and trivially true. You get more resources, your solution is likely to improve.
But in the context, the benefit is not costless. Time (in particular in a chess game) is a precious resource—to justify spending it you need cost-benefit analysis.
Your position offers no criteria and no way to figure out when you’ve spent enough resources (time) and should stop—and that is the real issue at hand.
Position is also a precious resource in chess. You need to structure your play so that the trade-off between time and position is optimal, and cutting off your search the moment you think of a playable move is not that trade-off. Evidence in favor:
I’ve personally competed in several mid-to-high-level chess tournaments and have an ELO rating of 1853. Every time I’ve ever blundered, it’s been because of a failure to give the position a second look. Furthermore, I can’t recall a single time the act of giving the position a second look has ever led me to time trouble, except in the (trivial) sense that every second you use is precious.
I have personally interacted with a great deal of other high-rated players, all of whom agree that you should in general think through moves carefully and not just play the first good-looking move that you see.
Lasker, a world-champion-level player, was the one quoted as giving this advice, and according to Wikipedia (thanks, bentarm), the saying actually predates him. If the saying has survived this long, that’s evidence in favor of it being true.
Nor am I claiming to offer such a way. I agree that the optimal configuration is difficult to identify, and furthermore that if it weren’t so, a great deal of economics would be vastly simpler. My claim is a far weaker one: that whatever the optimal configuration is, stopping after the first solution is not it. This may sound trivial, and to a regular LW reader, it very well may be, but based on my observations, very few regular (as in not explicitly interested in self-improvement) people actually apply this advice, so it does seem important enough to merit a rationality quote dedicated to it.
By the way, in certain situations it’s analytically solvable—see e.g. here.
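One classic analytically solvable stopping problem—which may or may not be the one behind that link—is the secretary problem: look at roughly the first n/e candidates without committing, then take the first candidate better than everything seen so far. A quick simulation of that rule:

```python
import math
import random

def secretary_rule(values):
    """Skip the first n/e candidates, then take the first one better than all of them."""
    n = len(values)
    cutoff = int(n / math.e)
    best_seen = max(values[:cutoff]) if cutoff else float("-inf")
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]

random.seed(1)
n, trials, wins = 100, 10_000, 0
for _ in range(trials):
    vals = list(range(n))
    random.shuffle(vals)
    if secretary_rule(vals) == n - 1:   # did we end up with the single best candidate?
        wins += 1
print(wins / trials)                     # close to the theoretical optimum of 1/e ≈ 0.37
```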
That’s really interesting. Thanks for the link!
You’re successfully demolishing a strawman. Is anyone claiming what you are arguing against?
Perhaps lesson is that all such sayings mere wisdom-facets, not whole diamond. Appreciate the facet for its beauty, yes, but understand that there are others, including the one most opposite on the other side...perhaps should be something generally understood in thread such as this.
Do not sense real disagreement in this conversation. Thinking has benefits, all agree, and thinking has costs, all agree...doubt Lasker himself waited to move until he knew he had the most perfect move, and yet he no doubt lost and observed others losing because of a move played too rashly....
That’s the optimal situation :-) Sometimes such sayings are a body part of an elephant. And occasionally—of a donkey X-D
No, which is why I feel Lasker’s quote is a good rationality quote. If people are constantly expressing disagreement, that’s evidence that something’s wrong. (A decent level of disagreement is healthy, I feel, but not too much.) What happened is this: bentarm interpreted my position differently from what I intended and disagreed with his/her interpretation of my position, so I clarified said position and (hopefully) resolved the disagreement. If there’s no longer anyone arguing against me, then that means I accomplished what I aimed to do.
It is worth noting that Lasker often played the opponent, not the board (e.g. he was known to pick a move he knew was not optimal, but which his opponent found most uncomfortable). He would go for tactics vs positional players, and for slow positional play vs tactical players. He was very annoying to play against, apparently. Also was the champion for 27 years, while having an academic career.
See also “nettlesomeness”:
http://marginalrevolution.com/marginalrevolution/2013/11/nettlesomeness-and-the-first-half-of-the-carlsen-anand-match.html
See also: “The Perfect/Great is the enemy of the Good”
Without the Perfect, the Good would have no standard for measurement. This is especially important when making popcorn or building airplanes.
--Sam Vimes, Jingo, Terry Pratchett
George Savile, 1st Marquess of Halifax, Political, Moral and Miscellaneous Reflections
Savielly Tartakower, on the starting position in chess. Source.
I don’t play chess, or know how to play at all well, nor am I interested in learning. But are there any books by or about chess masters that I might find interesting, for teaching good habits of thought? Or even just a list of famous chess quotations?
“Willy Hendriks, Move First, Think Later: Sense and Nonsense in Improving Your Chess. To me, more interesting as behavioral economics and as epistemology than as a chess book. The author claims that most chess advice is bad, and that we figure out positional strategies only by trying concrete moves, not by applying general principles. You do need chess knowledge to profit from the book, but if you can manage it, it is one of the best books on how to think that I know.” — http://marginalrevolution.com/marginalrevolution/2013/04/what-ive-been-reading-24.html
Chess fundamentals by Capablanca. Still the best book on learning positional chess, and in general “good taste” in position evaluation. There is a certain clarity of thought in this book. I am not sure how useful it is or whether it can “rub off.”
Available for free.
I think there are some vaguely autobiographical things by Botvinnik on preparing for matches, but it’s more about discipline than thought habits.
The Art of Learning: A Journey in the Pursuit of Excellence by Josh Waitzkin is the memoir of a chess child prodigy who later became a Tai Chi Chuan world champion. It’s organized around his advice on developing the good habits of thought that he discovered when he was training for chess. But they are applicable to many domains: he makes the argument that the habits that made him excel at chess were also what made him a world-class competitor in Tai Chi Chuan.
There is something in Nate Silver’s The Signal and the Noise.
Luckily you only have to make fewer mistakes than your opponent to win.
Describing good play as “making few mistakes” seems like the wrong terminology to me. A mistake is not a thing, in and of itself, it’s just the entire space of possible games outside the very narrow subset that lead to victory. If you give me a list of 100 chess mistakes, you’ve actually told me a lot less about the game than if you’ve given me a list of 50 good strategies—identifying a point in the larger space of losing strategies encodes far less information than picking one in the smaller space of winning.
And the real reason I’m nitpicking here is because my advisor has always proceeded mostly by pointing out mistakes, but rarely by identifying helpful, effective strategies, and so I feel like I’ve failed to learn much from him for very solid information-theoretic reasons.
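A toy version of the information-theoretic point two paragraphs up; the counts are illustrative only:

```python
import math

# Suppose there are 1,000 candidate strategies, most of them mistakes,
# and what you want is to end up with a good one.
total = 1000

# Being handed one specific good strategy resolves the whole search:
bits_from_one_good_strategy = math.log2(total)                 # ~10 bits: "play exactly this one"

# Being told that one specific strategy is a mistake only shrinks the pool by one:
bits_from_one_named_mistake = math.log2(total / (total - 1))   # ~0.0014 bits

print(bits_from_one_good_strategy, bits_from_one_named_mistake)
```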
Have you discussed this with him? Perhaps he hasn’t noticed this and would be delighted to talk strategies. Perhaps he has a reason (good or bad) for doing as he does. (E.g., he may think that you’ll learn more effectively by finding effective strategies for yourself, and that pointing them out explicitly will stunt your development in the longer run.) Perhaps his understanding of effective strategies is all implicit and he can’t communicate it to you explicitly.
I’ve tried talking to him about it: he really does seem to possess only implicit understanding of what works and what doesn’t. Well, that, and it just doesn’t seem to occur to him, even upon my repeated requests, to lay out guidelines ahead of time.
Actually, most chess players define a mistake as a move that falls outside the subset of moves that either maintains equality OR leads to victory. This classification significantly reduces the size of mistake-space in chess.
True, but it still leaves mistake-space the much larger space.
My first downvote, yay! Didn’t feel that bad :)
Anyway, my comment was merely an attempt to allay the philosophical worries expressed in the parent quote and so I used the same terms; it wasn’t meant as pedagogy.
Minor nitpick, surely you mean possible moves, rather than possible games? The set of games that lead to defeat is necessarily symmetrical with the set that lead to victory, aside from the differences between black and white.
--Patrick J. MacDonald
An interesting quote, but isn’t it basically the definition of identity? The part that remains the same while changing all the while?
The specific context of that is in changing bad habits; the only way to improve is to do something different. Typically people would rather keep doing the same thing, but with better consequences.
Changing while remaining the same is what Algebra is all about. Identify the quality you wish to hold invariant, then find the transformations that do so. Changing things while leaving them the same in important ways is how problems are solved.
-Niccolo Machiavelli, The Discourses
--R. Erik Plata and Daniel A. Singleton, A Case Study of the Mechanism of Alcohol-Mediated Morita Baylis-Hillman Reactions. The Importance of Experimental Observations.
“The danger in trying to do good is that the mind comes to confuse the intent of goodness with the act of doing things well.”
Ursula k. Le Guin, Tales from Earthsea
Mary Catelli
“Politics selects for people who see the world in black and white, then rage at all the darkness”—Megan McArdle
Which people do you mean by that?
Politicians might talk in terms of black and white to appeal to voters but most of them don’t think that way.
When you talk in terms of black and white all the time, it is very easy to forget that you don’t think that way.
This looks like a result of mind kill.
The fact that you let yourself be blinded by someone’s strategy means that you fail at reasoning. It doesn’t help to moralize.
Follow the link and it will become clearer.
Rob Lyman, in a discussion of why so many politicians have an angry persona.
The what? Rob never stubbed a toe in the dark and then launched an angry tirade on the offending piece of furniture?
The number of times I told my first, very bad car to eat a bag o’ penises is, well, high.
And there is the saying that programmers know the language of swearing best—many bugs make one angry, not angry at anything clear, just angry. Angry at the situation in general. Like, why the eff did this have to happen to me when I need to run this script before I can go home? Aaargh. That kind of thing.
The furniture was put there by a thoughtless person not following the social rules. The bad car was built by shoddy engineers not living up to your expectations. The bug in the code was put there by a careless programmer not following agreed practice. And so on.
Your examples merely serve to reinforce the notion that what makes us angry is people breaking the (possibly unwritten) rules or violating social cohesion.
That clashes with my introspection, unlike DeVliegendeHollander’s account. When I stub my toe in the dark and start swearing, my thoughts are not anything to do with social rules or their violation (at least not at a conscious level); typically no one else is around, no other person enters my mind, and I’m just annoyed that I’m unnecessarily experiencing pain, and that annoyance doesn’t feel like it has a moral element to it. It feels like a straightforward reaction to unexpected, benefit-free pain.
Sounds rather forced to me. How about a simpler hypothesis that anger is frustration, the expression of the bad feelings coming from expectations not being fulfilled?
So would you get angry if a sabre-toothed tiger charged at you when you weren’t expecting it? Do you get angry when a clear day gives way to rain? Do you get angry when a short story has a twist ending?
Expectations not being fulfilled doesn’t necessarily cause anger. It may lead to sadness, or laughter, or fear, or disappointment, or any number of emotions. But it normally only leads to anger when the frustrated expectation is about social rules.
FWIW, Salemicus::anger (“how dare you!”) and annoyance feel slightly but not very different to my System 1, much more similar to each other than, say, the various feelings that English labels as “love”, and I don’t normally feel the need of using different words for the two unless I want to be pedantic.
I realize that anger is supposed to be what “They offered me a lousy offer in this Ultimatum game so I’d better turn it down, even if I CDT::will be worse off, otherwise people TDT::would continue to make me similarly lousy offers” feels like from the inside, but my System 1 has only a vague understanding of that, let alone of the fact that inanimate objects aren’t actually playing Ultimatum with me (and I can’t be alone on this last point, otherwise no-one would have ever hypothesised that lightning came from Zeus), but YMMV.
BTW, are you two native English speakers? (FTR I’m not.) This might be a case of languages labeling feeling-space differently, rather than or as well as people’s feeling-spaces being different.
I am not, but I got convinced by Salemicus’s argument. I realized that what I translate as “anger at the weather” is better translated as “being mad at the weather” or “being pissed at the weather” and anger here is not something like a short fuck-you feeling but more like the urge to launch a long rant or dressing-down.
I am a native speaker, yes.
I find it interesting that our intuitions clash so. I immediately found RL’s account compelling on that basis, whereas others did not. This could be a case of different labelling, or even different emotional experience.
The weirdest thing is that I do have the intuition “corresponding” (FLOABW) to the fact that if deterring someone from doing something can work in principle it might be a good idea to try, but if it cannot possibly work it makes no sense to try (the “Sympathy or Condemnation” section of the “Diseased thinking” post makes perfect sense to me); when Mencius Moldbug pointed out that people react to the threat of anthropogenic global warming differently from the way they’d react to hypothetical global warming due to the Sun, I knew exactly what he was talking about. But, Rob Lyman’s example is a very poor choice of a pointer to that intuition for me, exactly because it points me to stuff like stubbing a toe in the dark instead.
That’s perfectly rational behavior. The two causes give different predictions about likely future warming.
He explicitly specified that the predicted increase of radiative forcing due to solar activity in his hypothetical would equal the predicted increase of radiative forcing due to greenhouse gases in the real world.
Sure, there is still a difference between the two situations akin to that described in the Diseased Thinking post I linked upthread, in that shaming people into not emitting as much CO2 might in principle work whereas shaming the Sun into not shining as much cannot possibly work (though Moldbug still has a point as the cost-effectiveness of the former is probably orders of magnitude less than most people would guess). I know you can’t shame a saber-toothed tiger into not charging you either, but still Moldbug’s example worked for me and Lyman’s didn’t for whatever reason.
EDIT: Might be because I’d think of an increase in the solar constant in Far Mode but I’d think of a saber-toothed tiger in Near Mode.
For me at least, the answers are no, yes, and no respectively. We can further refine the prior hypothesis by stipulating that the bad feelings arise from expectations not being fulfilled in an unpleasurable way, which would stop it from generating the third situation as an example. As for the first, perhaps one might experience anger if it were not being overridden by the more pressing reaction of fear. Or perhaps the hypothesis is off base, but it seems to generate some correct predictions of anger which the hypothesis that anger only arises from frustrated expectations about social rules fails to generate.
My intuitive answer would be yes, but now I am realizing that for me sadness or fear is probably much closer to anger than for you. In my mind they all are “feel bad, be unhappy and express it too”.
I suppose if we define anger in a very granular and precise way—not just as a general bad feeling or “being mad at”, but more like giving a long rant—then it can only apply to humans, because I will swear at the rain, but only briefly, to let off steam; I will not give a long angry rant to it. I will be “mad at it”, but not angry in that clear social sense.
Halfway conceded: anger in the very granular sense only applies to humans.
But. Can you think of a counter-example where 1) humans violate our expectations 2) but it is not a social rule or cohesion violation, and do we get angry or not?
This is very tricky, because our expectations are, of course, based on social rules! Usually. Now I am searching for a case when not.
I already did give such an example—a short story with a “twist” ending. Such an ending violates our expectations (that’s what makes it a “twist”) but it doesn’t break any social rule, so people often find these amusing, clever, etc. On the other hand, a “twist” ending in a context where there is a social rule against such endings might well make people angry—for example, if the recent movie Exodus: Gods and Kings had ended with the Israelites being drowned in the Red Sea and the Pharaoh triumphant, that would no doubt have upset many viewers.
Hmmm… most social rules generally want people to behave in a predictable ways, for various reasons, so they avoid surprises. It seems almost like surprises are only allowed in special cases…
I almost accept your point now, but one objection. A good and a weak soccer team play a match. Surprisingly, the weaker one wins. It was fair play. Nobody violated a rule. Still, the fans of the losing team are angry—at their own team, because how could they let a much weaker team win? Is that a social rule violation—that if you are generally better you are never allowed to lose? Or just an expectation violation? Is it more of a bias on the side of the fans: their team must have violated the rule to try hard and not be lazy, because they cannot imagine any other explanation?
If you generally agree, I accept your point with a modification: anger is about perceived social rule violation, but people are not perfect judges of social rule violations; there are mistakes made both ways, including tendentious, bias-driven mistakes.
Thus, as in my soccer example, sometimes all you see at first is a violated expectation. You see no rule violation. Then you need to figure out why exactly other people may think it is a rule violation. This is not always easy and we don’t do it that often, and thus often we just see a violated expectation and do not see how others perceive it as a rule violation.
I just want to say I am glad to have lost this debate, because it is working for me. I mean, yesterday I was able to manage my anger better by asking myself questions like “what social rule do I think is broken here? Is that a real one or just my wish? If real, a reasonable one?” Even when the answer was yes/yes, just being conscious of it worked.
I think I will shamelessly steal and apply this idea in discussions where it can be useful. Thanks a lot.
I think according to common usage the two terms refer to different emotions.
Anger is a state of energy. Frustration is a rather passive state.
Anger doesn’t get triggered by every unfulfilled expectation. It gets triggered if things aren’t as they “should” be. If you think you don’t deserve what you are expecting, you get frustrated upon not getting it, but not angry.
And since the concept of “should” evolved as a primarily social mechanism, it makes sense that anger would be triggered by (perceived) social affronts.
“Things are not as they seem. They are what they are.” ― Terry Pratchett, Thief of Time
-- Arthur Eddington, “The Future of International Science”, as quoted in An Expedition to Heal the Wounds of War: the 1919 Eclipse Expedition and Eddington as Quaker Adventurer
-- Richard McKenzie, quoted on Econlog
Related engineer joke: “anybody can build a bridge that won’t collapse–but it takes a real engineer to build a bridge that just barely avoids collapse.”
Frank Knight, “The Role of Principles in Economics and Politics” p.11
Probably not found anywhere online: my favorite college professor, Ernest N. Roots, used to say, “Things that are simply remarkable become remarkably simple once they are understood.” This has been my personal defense against arguments from ignorance ever since.
Welcome to LW! We post a new Rationality Quotes thread every month; the current one is October 2015 for a few more days, but you can find a link to the most recent one on the right sidebar if you’re looking at Main (the header “Latest Rationality Quote” is a link to the page, above a link to the latest quote).
Nassim Taleb
True for scientists. But for most people science is indeed a set of answers
I am a scientist, albeit the most junior kind of scientist, and I reckon “science” can legitimately refer to a set of answers or a methodology or an institution.
I doubt anyone in this thread would object if I called a textbook compiling scientific discoveries a “science textbook”. I’m not sure even Taleb would blink at that (if it were in a low-stakes context, not in the midst of a heated argument).
The information in a science textbook is (or should be) considered scientific because of the processes used to vet it. Absent this process, it’s just conjecture.
I often wonder if this position is unpopular because of its implications for economics and climatology.
http://xkcd.com/397/
Well, this is a problem you have if your culture is so egalitarian that common people think they are entitled to their own opinions instead of quoting an authority: hopefully one that uses the scientific method properly.
Eric S. Raymond: “Interesting human behavior tends to be overdetermined.”
Example sources:
http://esr.ibiblio.org/?p=4213
http://esr.ibiblio.org/?m=20020525
http://esr.ibiblio.org/?p=6599
I didn’t understand this quote out of context so I followed one of the links and he explains it in this comment:
Richard Feynman, What is Science?
This description fits philosophy much better than science.
-Alexander Pope, An Essay on Criticism
Neil Postman, Amusing Ourselves to Death, p. 70
I’m not sure I understand. Can you expand on what the point is?
Postman said this in the context of television and other new media, where even “news” and other relevant information is shown for its entertainment value and not because it can help us make better decisions.
-- C. S. Lewis
This seems obviously false. Am I missing something?
I think that C.S. Lewis means that when a person puts forth an assertion, you should ascertain the truth or falsity of the assertion by examining the assertion alone; the mental state of the person making the assertion is irrelevant.
Presumably Lewis is arguing against the genetic fallacy, or more specifically, Bulverism.
Edit: Why the downvote? My comment was fairly non-controversial (I thought).
Whether a belief is wishful thinking is inherently an assertion about the mental state of a person. It is meaningless to say that you should examine the assertion instead of the mental state, since the assertion is an assertion about the mental state.
I don’t know about that. Merriam Webster defines wishful thinking as:
So if my calculations are accurate, per Merriam Webster’s definition, I have not engaged in wishful thinking.
Something can be wishful thinking and true at the same time. Doing the sum wouldn’t prove that it’s not wishful thinking.
Of course having the sum be correct is a necessary condition for non-wishful thinking, but it does not determine the existence of non-wishful-thinking all by itself.
No it’s not. You can be wrong for reasons other than wishful thinking.
When A is being correct and B is wishful thinking, what I said is that A implies B, which reduces to (B || ~A). What you’re saying is that ~A does not imply ~B, which reduces to (B && ~A). Of course, these two statements are compatible.
I think you messed up there. Being correct certainly doesn’t imply wishful thinking. You were saying that non-wishful thinking implies being correct. That is ~B implies A. Or ~A implies B, which is equivalent.
If I checked my balance and due to some bank error was told that I had a large balance, I would probably have the sum be incorrect but still be using non-wishful thinking. The sum being correct is not a necessary condition for non-wishful thinking. All the other combinations are possible as well, though I don’t feel like going through all the examples.
You’re right, I meant to say that B implies A, not to say that A implies B. However, that is still equivalent to (B || ~A) so the rest, and the conclusion, still follow.
B implies A would be wishful thinking implies that you are correct. This is obviously false. You clearly intended to have a not in there somewhere. Double check your definitions.
I was giving an example of (~A && ~B). If you want an example of (A && B), it would be that I don’t even look at my statements and just assume that I have tons of money because that would be awesome, but I also just happen to have lots of money.
It being a law of the Internet that corrections usually contain at least one error, that applies to my own corrections too. In this case the error is the definitions of A and B.
A=being correct, B=non-wishful-thinking.
“Having the sum be correct is a necessary condition for non-wishful thinking” means B implies A, which in turn is equivalent to (B || ~A).
“You can be wrong for reasons other than wishful thinking” means ~(~B implies ~A), which is equivalent to ~(~B || A), which is equivalent to B && ~A.
Same conclusions as before, and they’re still not inconsistent.
Now that we have that out of the way, we can start communicating.
A counterexample to (B || ~A) would be (~B && A), so wishful thinking while still being correct. As I said in my last post, you just assume you have a lot of money because it would be awesome, and by complete coincidence, you actually do have a lot of money.
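For concreteness, here is a minimal truth-table check of that last claim (just a Python sketch; A and B stand for the two propositions as defined above, being correct and non-wishful-thinking):

```python
from itertools import product

# Enumerate all four truth assignments and print the ones that
# falsify (B or not A); there is exactly one: A true, B false.
for A, B in product([True, False], repeat=2):
    if not (B or not A):
        print(f"A={A}, B={B}")  # prints: A=True, B=False
```

So the only assignment that falsifies (B || ~A) is indeed (A && ~B): correct, but via wishful thinking.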
Now that we have established the language correctly and I looked through my first post again, you are correct and I misread it. I tried to go back and count through all the mistakes that led to our mutual confusion, and I just couldn’t do it. We have layers of mistakes explaining each other’s mistakes.
History teaches us, gentlemen, that great generals remain generals by never underestimating their opposition.
Gen. Antonio Lopez de Santa Anna: The Alamo: Thirteen Days to Glory (1987) (TV)
Overestimating can be costly too. That’s why bluffing can work, in poker as in war.
Examples/articles:
Empty Fort Strategy
100 horsemen and the empty city (gated). Here are two articles summarizing the original paper: Miami SBA and ScienceDaily
The most important decisions are made before starting a war, and there the mistakes have very different costs. Overestimating your enemy results in peace (or cold war), which basically means that you just lose out on some opportunistic conquests, but underestimating your enemy results in a bloody, unexpectedly long war that can disrupt you for a decade or more—there are many nice examples of that in 20th-century history.
Peace or cold war are not the only possible outcomes. Surrender is another. An example is the conquest of the Aztecs by Cortez, discussed here, here, and here. Surrender can (but need not) have disastrous consequences too.
Generals are not the people who decide whether or not a war gets fought; they are the ones who decide individual battles.
If you’re unbiased then you should be underestimating your opposition about half the time.
If your loss function is severely skewed, you do NOT want to be unbiased.
What you want is to have a distribution. You will expect your opposition to be about as strong as it is. You will prepare for the possibility that it is stronger or weaker.
A distribution is nice but often you have to commit to a choice. In such cases you generally want to minimize your expected loss (or maximize the gain) and if the loss function is lopsided, the forecast implied by the choice can be very biased indeed.
Even with a very skewed loss function you want to have an accurate estimate of your opposition, which will be an underestimate about half of the time, and then take excessive precautions. Your loss function does not influence your beliefs, only your actions.
Yes, but actions are what you should care about—if these are determined, your beliefs (which in this case do not pay rent) don’t matter much.
So why would I want to bias myself after I’ve decided to take excessive precautions?
I think we’re in agreement btw, we care about actions, and if you have a very skewed loss function then it is rational to spend a lot of effort on improbable scenarios in which you lose heavily, which from the outside looks similar to a person with a less skewed loss function thinking that those scenarios are actually plausible. I was just trying to point out that DanielLC’s reply was correct and your previous one is not—even with a skewed loss function this should not produce feedback to the actual beliefs, only to your actions. So no, you DO want to be unbiased, it’s just that an unbiased estimate/posterior distribution can still lead to asymmetric behaviour (by which I mean spending an amount of time/effort to prepare for a possible future disproportionate to the actual probability to this future occurring).
Well, let me unroll what I had in mind.
Imagine that you need to estimate a single value, a real number, and your loss function is highly skewed. For me this would work as follows:
Get a rough unbiased estimate
Realize that I don’t care about the unbiased estimate because of my loss function
Construct a known-biased estimate that takes into account my loss function
Take this known-biased estimate as the estimate that I’ll use from now on
Formulate a course of action on the basis of the biased estimate
The point is that on the road to deciding on the course of action it’s very convenient to have a biased estimate that you will take as your working hypothesis.
Yes. My point is that this new biased estimate is not your ‘real estimate’ - this is simply not your best guess/posterior distribution given your information. But as I remarked above your rational actions given a skewed loss function resemble the actions of a rational agent with a less risk-averse loss function with a different estimate, so in order to determine your actions you can compute what [an agent with a less skewed loss function and your (deliberately) biased estimate] would do, and then just copy those actions.
But despite all of this, you still want to be unbiased. It’s fine to use the computational shortcut mentioned above to deal with skewed loss functions, but you need your beliefs to stay as accurate as possible to not get strange future behaviour. A small, simplified example:
Suppose you are in possession of $1001 total (all your assets included), and it costs $1000 to buy a cure for a fatal disease you happen to have/a ticket to heaven/insurance for cryonics. You most definitely don’t want to lose more than one dollar. Then a guy walks up to you and offers a bet—you pay $2, after which you are given a box which contains between $0 and $10, with uniform probability (this strange guy is losing money, yes). Clearly you don’t take the bet—since you don’t actually care much whether you have $1000 or $1001 or $1009, but you would be terribly sad if you had only $999. But instead of doing the utility calculation you can also absorb this into your probability distribution of the box—you only care about scenarios where the box contains less than a dollar, so you focus most of your attention on this, and estimate that the box will contain less than a dollar. The problem now arises if you happen to find a dollar on the street—it is now a good idea to buy a box, although the agents who have started to believe the box contains at most a dollar will not buy it.
To summarise: absorbing sharp effects of your utility function into biased estimates can be a decent temporary computational hack, but it is dangerous to call the partial results you work with in the process ‘estimates’, since they in no way represent your beliefs.
P.S.: The example above isn’t all that great, it was the best I could come up with right now. If it is unclear, or unclear how the example is (supposedly) related to the discussion above, I can try to find a better example.
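For what it’s worth, the arithmetic in the example can be checked directly. Here is a minimal Python sketch; it assumes a step utility (1 if you can still afford the $1000 cure, 0 otherwise) plus a negligibly small linear term for leftover cash, which is my own simplification rather than anything stated in the example:

```python
import random

COST_OF_CURE = 1000   # the thing you absolutely must still afford
BET_PRICE = 2         # the box costs $2 and pays out $0-$10 uniformly

def utility(wealth):
    # Step utility: affording the cure is all that really matters;
    # the tiny linear term just breaks ties in favour of more cash.
    return (1.0 if wealth >= COST_OF_CURE else 0.0) + 1e-6 * wealth

def expected_utility_if_betting(start_wealth, samples=100_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        box = rng.uniform(0, 10)
        total += utility(start_wealth - BET_PRICE + box)
    return total / samples

for wealth in (1001, 1002):
    bet = expected_utility_if_betting(wealth)
    decline = utility(wealth)
    print(f"${wealth}: {'take the bet' if bet > decline else 'decline'} "
          f"(bet {bet:.6f} vs decline {decline:.6f})")
```

The point survives the simplification: at $1001 the bet risks the cure, so you decline; at $1002 it cannot, so you take it. An agent who has instead rewritten its belief to “the box contains at most a dollar” keeps declining.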
It seems to me that it’s best to use “your beliefs” to refer to the entire underlying distribution. Yes, you should not bias your beliefs—but the point of estimates is to compress the entire underlying distribution into “the useful part,” and what is the useful part will depend primarily on your application’s loss function, not a generalized unbiased loss function.
Sure it is my “real” estimate—because I take real action on its basis.
Let me make a few observations.
First, any “best” estimate narrower than a complete probability distribution implies some loss function which you are minimizing in order to figure out which estimate is “best”. Let’s take the plain-vanilla case of estimating the central point of a distribution which produced some sample of real numbers. The usual estimate for that is the average of the sample numbers (the sample mean) and it is indeed optimal (“the best”) for a particular, quadratic, loss function. But, for example, change the loss function to absolute deviation (L1) and now the median becomes “the best estimate”.
The point is that to prefer any estimate over some other estimate, you must have a loss function already. If you are calling some estimate “best”, this implies a particular loss function.
Second, the usefulness of any estimate is determined by the use you intend for it. “Suitability for a purpose” is an overriding criterion for estimates you produce. Different purposes (“produce an unbiased estimate” and “select a course of action” are different purposes) often require different estimates.
Third, “unbiased” is not an unalloyed blessing. In many situations you face the bias-variance tradeoff and sometimes you do want to have some bias.
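To put a number on the first observation (and on how a lopsided loss function moves the “best” estimate), here is a minimal Python sketch; the exponential sample and the 9:1 penalty ratio are arbitrary choices of mine, not anything from the comment above:

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=10_000)  # a skewed sample

candidates = np.linspace(0.0, 5.0, 2001)

def best_estimate(loss):
    # The candidate point estimate with the smallest average loss.
    avg_losses = [loss(sample, c).mean() for c in candidates]
    return candidates[int(np.argmin(avg_losses))]

squared  = lambda x, c: (x - c) ** 2
absolute = lambda x, c: np.abs(x - c)
# "Lopsided" loss: underestimating costs 9 times more than overestimating.
skewed   = lambda x, c: np.where(x > c, 9.0 * (x - c), c - x)

print("sample mean  :", sample.mean(), "best under squared loss :", best_estimate(squared))
print("sample median:", np.median(sample), "best under absolute loss:", best_estimate(absolute))
print("0.9 quantile :", np.quantile(sample, 0.9), "best under skewed loss  :", best_estimate(skewed))
```

The mean wins under quadratic loss, the median under absolute loss, and the skewed loss pushes the “best” single number out to roughly the 0.9 quantile, which is exactly the kind of deliberately biased working estimate being discussed.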
This is a good point. A helpful discussion of asymmetric loss functions is here.
Only if you have no margin within which you can be considered to be “correctly estimating.”
--Isaac Asimov, “How Do People Get New Ideas?”
Frank Knight, “The Role of Principles in Economics and Politics” p.19
His claim about the insolubility of social problems is not a note of hopeless despair; it should be understood in the context of his argument that free association and cooperation are the best, and really the only, way to solve social problems, but that pushed to their limit the result would be intolerable.
I think “Tommy” Adams refers to Thomas Sewall Adams.
I never was good at learning things. I did just enough work to pass. In my opinion it would have been wrong to do more than was just sufficient, so I worked as little as possible.
Manfred von Richthofen
Scott Adams posted his “My best tweets” collection. About half of them are examples of instrumental rationality in action, and most are worth a laugh. Some of my favorites from the Arguing with Idiots section are in the replies.
Depends on whether your goal is to convince the person you’re talking to, or convince outside observers.
I just hope this is sufficiently selected that people who really do have problems with attacking people don’t read this.
If you actually are attacking them, you should still run away. Just for a different reason.
I cannot construct a coherent argument for intelligent design, depending on what you mean by “coherent”. I could construct an argument which is grammatically correct and uses lies, but I don’t think you meant to count that as “coherent”.
If you have at your disposal an intelligent being who gets to decide the laws of physics and gets to set the initial conditions, then intelligent design is an easy consequence: “God set up the universe in such a way that allowed life to evolve according to His predetermined laws”.
If we ever get enough computing power to simulate intelligent life, then those simulations will have been intelligently designed and an argument very similar to the above will be true (an intelligent person wrote a program and set the initial parameters in such a way that intelligence was simulated).
You can write a number of refutations of this argument (life sucks, problem of evil, Occam’s razor, etc.), but I’d still say it’s coherent.
The quote basically describes the principle of charity 2.0: you seek to understand the logic of a position foreign to you not just to refute it, to convince the other person, or to construct a compromise. You do it to better understand your own side and any potential fallacies you ordinarily do not see in your own logic.
What if your understanding is “it has no valid logic”?
You probably can if you start with a different set of axioms.
Note that, for example, “God exists” is not a lie but a non-falsifiable proposition.
According to supporters of intelligent design, “intelligent design” implies not using any religious premises. So if you started with that axiom, then you’re not really talking about intelligent design after all.
I don’t think this is quite right. I think they claim that intelligent design doesn’t imply using any religious premises.
~□(x)(Ix⊃Ux) rather than □(x)(Ix⊃~Ux)
In other words, there is nothing inconsistent with a theist (using religious premises) and a directed panspermia proponent (not using any religious premises) both being supporters of intelligent design.
Okay, change it to “their version of intelligent design doesn’t use any religious premises” and change my original statement to “I can’t construct a coherent argument for their version of intelligent design”.
I don’t think so, though it’s possible to quibble about the definition of “religious premises”. Intelligent design necessarily implies an intelligent designer who is, basically, a god, regardless of whether it’s politically convenient to identify him as such.
Supporters of intelligent design may end up basically having a god as their conclusion, but they won’t have it as one of their premises.
And they have to do it that way. If God was one of their premises, teaching it in government schools would be illegal.
I think you’re confusing the idea of intelligent design with the culture wars in the US.
The question was whether you can construct “a coherent argument for intelligent design”, not whether you would be willing to play political games with your congresscritters and school boards.
No, the question was whether the “rationality quote” makes sense. I offered intelligent design as a counterexample, a case where it doesn’t. Telling me that you don’t think that what I described is intelligent design is a matter of semantics; its usefulness as a counterexample is not changed depending on whether it’s called “intelligent design” or “American politically expedient intelligent-design-flavored product”.
And I disagree, I think it does perfectly well.
The quote applies to actual positions, not to politically-based posturing.
That dilutes the quote to the point of uselessness. Probably most positions that people take involve posturing.
But if you really want a different example, how about homeopathy? I can’t construct an argument for that which is coherent in the sense that was probably intended, although I could construct an argument for that which is grammatically correct but based on falsehoods or on obviously bad reasoning.
What lies are those? What evidence convinced you that they are in fact lies?
(That’s how I would start.)
I said that I could construct such an argument. I think you’ll agree that I am capable of constructing an argument that uses lies. It does not follow that I think all intelligent design proponents are liars, just that I could not reproduce their arguments without saying things that are (with my own level of knowledge) lies.
(If you really want an irrelevant example of intelligent design proponents lying, http://en.wikipedia.org/wiki/Wedge_strategy )
It’s a heuristic, not an automatic rule. Excluding religion and aesthetics, I can’t think of any cases where it doesn’t work. There are probably some which I just haven’t thought of, but there certainly aren’t very many.
I mentioned homeopathy above.
You don’t have a small natural intuition in your brain saying that homeopathy makes sense? I do, although of course I ignore it.
I don’t think that’s the same thing as being able to construct a coherent argument.
Better tell that to every book on negotiation ever, I guess.
The human concept of justice is fickle, but nonetheless real. Appeals to it, if done skillfully, can be very advantageous.
Just letting you know that I dislike your repetitive snark.
This seems anti-rational, like a boo-light.
This is Hari’s business. She takes innocuous ingredients and makes you afraid of them by pulling them out of context.… Hari’s rule? “If a third grader can’t pronounce it, don’t eat it.” My rule? Don’t base your diet on the pronunciation skills of an eight-year-old.
From http://gawker.com/the-food-babe-blogger-is-full-of-shit-1694902226
Stephen King, 11/22/63
“My gripe is not with lovers of the truth but with truth herself. What succor, what consolation is there in truth, compared to a story? What good is truth, at midnight, in the dark, when the wind is roaring like a bear in the chimney? When the lightning strikes shadows on the bedroom wall and the rain taps at the window with its long fingernails? No. When fear and cold make a statue of you in your bed, don’t expect hard-boned and fleshless truth to come running to your aid. What you need are the plump comforts of a story. The soothing, rocking safety of a lie.” ― Diane Setterfield
The context for this quote is a Hansonian post emphasizing that rationality has costs, and someone who wishes to seek truth must be prepared to accept them: http://lesswrong.com/lw/j/the_costs_of_rationality/
The particular example chosen in the quote is not the best, since the non-existence of ghosts is not a lie. Nonetheless, the point is well taken. As a short-term comforting strategy (say, to comfort a five-year-old), it is preferable to say that ghosts were destroyed by Zeus or something than to say that it is highly unlikely that ghosts ever existed because no ghost stories have had a reasonably credible source, etc.
I admit that I have no children, but even that last part seems almost wholly false to me.
Now, I might tell my hypothetical child that I’m a high Bayesian adept in the Conspiracy (passing actuarial exams/ordeals of initiation counts), that if spirits existed I’d be a mighty ceremonial magician (also probable) and therefore no ghost would dare harm my child.
The Bible says that God made the world in six days. Great Uncle Charles thinks it took longer: but we need not worry about it, for it is equally wonderful either way
-- Margaret Vaughan Williams
https://en.wikipedia.org/wiki/Vaughan_Williams
-- Paul Churchland, chapter 2 of Plato’s Camera
-Montaigne, On Pedantry
Mike Hearn, Replace by Fee, a Counter Argument
Drug is not a habit, habit could be a drug (Maahir) http://here4share.com/inspirational-quotes
“Mai La Dreapta,” commenting at SSC.