Rationality Quotes: January 2011
Post quotes.
Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself.
Do not quote comments/posts from LW. (If you want to exclude OB too create your own quotes thread! OB is entertaining and insightful and all but it is no rationality blog!)
No more than 5 quotes per person per monthly thread, please.
-- Benjamin Franklin
(To provide some context: at the time, the smallpox vaccine used a live virus, and carried a non-trivial risk of death for the recipient. However, it was still safer on the whole than not being immunized.)
I used this quote to help convince a friend to vaccinate her child this past year. It worked.
I assume you first looked at the statistics of specific modern vaccines, then reached a conclusion, then used the quote to persuade your friend about a specific vaccine.
So far as I’m aware, there are currently no publicly available vaccines that lack overwhelming evidence in support of their use. Researching every issue one has even the slightest doubts about is also a failure mode.
For example, the necessity of chickenpox vaccines is not quite clear-cut because an infection in childhood—that is usually mild—confers greater immunity benefits than a vaccine. See also flu shots.
Chicken pox vaccination seems to offer small but clear advantages. I wasn’t thinking about flu shots earlier, but agree that they are of little value; they don’t seem like “real” vaccines to my brain because they’re relatively temporary.
I loathe the anti-vaccination movement and am probably over-sensitive on this issue.
For what it’s worth, I don’t think there is such a thing as over-sensitivity on this issue. Here in Australia, there is an organisation called the Australian Vaccination Network, who attempt to convince parents not to vaccinate their children (due to unfounded fears of autism, mercury poisoning, and even wilder ‘Big Pharma’ conspiracies).
Children who are too young to be immunised against pertussis (whooping cough) rely on all transmission vectors (ie people they come into contact with) being immune: herd immunity is the term, I think. Pertussis isn’t dangerous for a more developed human being, but babies can die of it. One did—warning, sad. So yeah, that justifies loathing.
(Thankfully, the AVN has been all but disbanded thanks to the efforts of skeptic groups)
Yeah, the U.S. situation is similar. Worse, actually, since our idiot antivaxxers seem to be immune to skepticism. (The most prominent example is probably bat-shit insane Indigo mom Jenny McCarthy, who will soon be hosting a talk show on Oprah’s new network.)
I concur, your situation is worse.
When I looked into the infant Hepatitis B vaccine (late 2009) for a girl in an affluent, physically active household in southern Australia, the benefit looked marginal. There are many environments in which the girl could have lived that would have made vaccination clearly beneficial.
It would be surprising if all publicly available vaccines had “overwhelming” evidence in support of their use. That would seem to imply that our public health system hadn’t yet approached diminishing returns in searching for things against which to vaccinate, and that large gains were still to be had by vaccinating against more diseases.
(This is a short note on a complicated topic.)
(Vaccines have clearly done much good. Yay Science.)
What makes you think this (your second sentence) is not the case? Plenty of devastating diseases still cannot be prevented by vaccines; that’s why people continue to research and create new ones.
Whether all publicly available vaccines (or, more weakly but more relevantly, all recommended vaccines for a particular individual) are worth getting is a separate question, but the recommendations are evidence-based, and I personally believe they represent a guess as good as I can make.
Which doesn’t imply overwhelming evidence, though. Just enough evidence.
Agreed; I probably wouldn’t have said “overwhelming evidence”. But I do think there are still large gains to be made by vaccinating against more diseases, like, say, strep.
Revelation: The point of diminishing returns we would reach (in an ideal world) is where marginal health benefits are proportional to research dollars. Once a vaccine has been researched, its health benefits should be positive. However, variation, like the variation between regions, means that there are cases where an established vaccine has no benefits.
-- Salman Khan, Khan Academy
Upvoted. I have undergraduate commerce friends who want their degrees already so they can start on their mortgage. I asked them if they’d done a comparison with renting. They repeated the cached wisdom of “renting bad, mortgage good”, and “look everyone else is doing it”. I wish I had had this quote on hand—as it was I said something like “is everyone else mostly made up of commerce majors?” and didn’t really get my point across.
Could you give a specific video? This looks like an interesting site.
It’s from Renting vs. Buying a home (the quote might be in the part 2 or “detailed analysis” followup videos).
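For anyone who wants the gist without watching: the comparison is roughly the following back-of-the-envelope calculation. (This is a sketch in the spirit of the video, not a transcription of it; every number below is a hypothetical placeholder.)

```python
# Rough rent-vs-buy comparison: compare the true monthly cost of owning
# (money that leaves your pocket and never comes back, net of expected
# appreciation) against comparable rent. All figures are made up.

def monthly_cost_of_owning(price, down_payment, mortgage_rate,
                           property_tax_rate, maintenance_rate,
                           appreciation_rate, opportunity_rate):
    loan = price - down_payment
    interest = loan * mortgage_rate            # interest is an expense; principal is not
    tax = price * property_tax_rate
    maintenance = price * maintenance_rate
    appreciation = price * appreciation_rate   # offsets the costs
    forgone = down_payment * opportunity_rate  # what the down payment could earn elsewhere
    return (interest + tax + maintenance + forgone - appreciation) / 12

own = monthly_cost_of_owning(price=300_000, down_payment=60_000,
                             mortgage_rate=0.05, property_tax_rate=0.01,
                             maintenance_rate=0.01, appreciation_rate=0.02,
                             opportunity_rate=0.04)
rent = 1_500  # hypothetical comparable rent
print(f"owning: ${own:,.0f}/mo vs renting: ${rent:,.0f}/mo")
```

The point the cached wisdom misses is that the comparison is between rent and the *unrecoverable* costs of owning, not between rent and the mortgage payment.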
The videos there are on par with a first-rate college lecture. I believe Khan Academy is at the forefront of the growing anti-college revolution.
This is both insightful and highly quotable.
-Fred Mosteller
-- Hayao Miyazaki
I’ve always been impressed with how so many of his movies reflect this view, without being preachy about it. Look at Princess Mononoke, for example: there are several violently conflicting sides, and most of them can be described as good, even heroic.
Indeed, Princess Mononoke is one of the least preachy eco-movies ever made, although I have a feeling that its main focus is actually not on environmentalism but on conflict resolution. To quote Miyazaki (from memory, from an awesome documentary/backstage series about Mononoke), the film is to “illustrate adult ways of thinking about issues”.
The impetus for posting these Miyazaki quotes was the movie watching streak I went on recently. I’ve covered all of his movies except Castle of Cagliostro. I also read the Nausicaa manga, and its ending significantly upset me, to such an extent that I think I will write a gratuitous Fix Fic that alters the ending to my liking. It upset me because nearing the ending Miyazaki constructs a pretty coherent and sensible transhumanist stance for dealing with the in-universe world and its problems, and then utterly demolishes that stance in the finale. Without going into specifics, the protagonist chooses an option that significantly increases the chance that humanity goes extinct in order to a) suspend other-optimizing by (most likely benign, maybe malicious) external forces and b) eliminate medium-term technological risks of moderate severity.
I think Miyazaki did it to sound deep and because of some underlying deathism. The tragedy of it is that Miyazaki is not a bit stupid, probably an atheist, averts romanticized environmentalism and conservatism all the time and espouses the “uncaring universe” viewpoint. Also, he is a genuinely good-willed guy and a masterclass craftsman and artist. His films reliably make me tear up. Still, he undeniably is tangled up in the head to some extent. In the Nausicaa manga he constructs the transhumanist viewpoint a lot more coherently and logically than the viewpoint of the heroine; poor Nausicaa actually sounds there like a foil. Which is a pity because Nausicaa is a rare example of an extremely idealized main character who manages to avoid being bland and Mary Sue-ish. Because of the ending she goes from “awesome beacon of light and hope” to “she who screwed up the future”.
I hope you excuse my rant about a manga that is probably read by few people; I think it has some relevance to LW as a failure-of-rationality case study. Aside from the ending it is also an excellent piece of art that I wholeheartedly recommend.
Sauce: http://www.nytimes.com/2005/06/12/movies/12scot.html?pagewanted=2
Whatever elaborate, and grotesquely counter-intuitive, underpinnings there might be to familiar reality, it stubbornly continues to be familiar. When Rutherford showed that atoms were mostly empty space, did the ground become any less solid? The truth itself changes nothing.
-- Greg Egan, Quarantine
Also known as Egan’s law. (Personally I think it should be called Stavrianos’ law after (I assume) the character, but I wasn’t asked.)
I like this. Similar vein to the litany of Gendlin.
-- Larry Wall (Programming Perl, 2nd edition), quote somewhat abridged
Richard Dawkins, God’s Utility Function
One problem I have often seen in “rationalist” and atheist literature is assuming the meaning of a particular phrase and then attacking it, whether or not it was the intended meaning. “Why” is asked as often about something’s causes as about its purpose. I agree that purpose-why is illegitimate to ask about natural objects, but Mt Everest has a completely legitimate cause-why, depending mostly on plate tectonics. There is nothing that makes the purpose-why being attacked more likely to be the intended meaning of a question than the cause-why; if the cause-why was meant, the attack is off target and likely to do nothing but engender resentment.
I see your point, but I also think it’s problematic when people say “why (implication: cause-why)” instead of just saying “how”.
When I hear people saying “Why did Mt. Everest form?”, I can substitute “How did...” in my head, but it also makes me wonder why they used “why” in the first place. No biggie, but that’s only because we know a fair bit about geology and how mountains form.
When it comes to broader questions like “Why does the universe exist?”, the equivocation problem becomes much more severe. I think in that particular case there’s a good chance the questioner genuinely means to ask “purpose-and-cause-why”, because the concepts of “purpose-why” and “cause-why” are equivocated (since, unlike with Mt. Everest, there’s no clear answer for the cause-why, only a blank spot where the purpose-why would go).
To me it seems a proper use of ‘why’. It means: consider the world as it was at some time in the past before Everest existed. Had we been alive then, we could imagine a future where Everest would form, or a future where it (counterfactually) would not form. The correct prediction would have been to say that it would form; we know that in our own present. But of a person reasoning only from what existed in the past, we can ask, why do you predict that Everest will form?
That is, to me, the meaning of the word “why” used about objects: it asks why the past evolved into our present rather than into a counterfactually different one.
My point, though, was that by assuming the other, rationalists are unnecessarily antagonizing people. Assume the person meant the “how” and answer that. If they meant the other, they will say so, then you can complain that it is an illegitimate question.
To stay young requires unceasing cultivation of the ability to unlearn old falsehoods.
-- Robert A Heinlein, Notebooks of Lazarus Long
cough
But keep it here, it got twice as much karma this time round.
-- Benjamin Franklin
-- Edward Luttwak, “Give War a Chance”
war, n.: a challenge to a contest that cannot be refused.
A good reason to accept such challenges, but not a good reason to issue them...
-- Hayao Miyazaki
-Nassim Nicholas Taleb
Bernard le Bovier de Fontenelle
Recently quoted on the web in relation to acupuncture studies.
This is very good advice—especially since postulating a cause probably increases your credence for a purported fact. Quote filed away and advice taken to heart.
Also, this ties in well with Your Strength as a Rationalist.
-John F. Kennedy
There’s a certain irony in that, coming from a politician as adept at making and using myths as JFK was.
The enemy of his enemy was his friend.
Dirge without Music
Edna St. Vincent Millay
I am not resigned to the shutting away of loving hearts in the hard ground.
So it is, and so it will be, for so it has been, time out of mind:
Into the darkness they go, the wise and the lovely. Crowned
With lilies and with laurel they go; but I am not resigned.
Lovers and thinkers, into the earth with you.
Be one with the dull, the indiscriminate dust.
A fragment of what you felt, of what you knew,
A formula, a phrase remains, --- but the best is lost.
The answers quick & keen, the honest look, the laughter, the love,
They are gone. They have gone to feed the roses. Elegant and curled
Is the blossom. Fragrant is the blossom. I know. But I do not approve.
More precious was the light in your eyes than all the roses in the world.
Down, down, down into the darkness of the grave
Gently they go, the beautiful, the tender, the kind;
Quietly they go, the intelligent, the witty, the brave.
I know. But I do not approve. And I am not resigned.
When somebody makes a statement you don’t understand, don’t tell him he’s crazy. Ask him what he means.
-- H Beam Piper, Space Viking
You must really like that quote.
I didn’t realize I had used it before. I hadn’t done that much commenting; but starting with last set of “Rationality Quotes” I decided to just start working down my Quotes and Aphorisms file; it has been growing for about 14 years now, and I thought I would share some of them.
Now there was a book that was not like what the title would lead you to expect.
In a similar spirit:
“To understand what another person is saying, you must assume that it is true and try to imagine what it could be true of.”
George Miller
Tim Minchin, Storm
Dammit, how do you get line-breaks? It’s a poem, but the stanzas get flowed into paragraphs.
That one seemed a little preachy and “rah-rah science” to me. I much preferred his “Fuck the Poor”:
I like to think of it as another post that’s about not just about the quote itself but how a Less Wrong context completely changes its meaning.
For those who haven’t heard the whole thing: http://www.youtube.com/watch?v=ujUQn0HhGEk
Two spaces at the end of a line.
thanks!
please correct “it’s memory” to “its memory” too.
Nice poem, btw :-)
OK
-- A Softer World #626
Possibly related: Cached Selves and some of its outbound links, and Violent Acres’ idea of self-brainwashing (bottom of post).
-- Jonathan Henderson
--Immanuel Kant, Critique of Pure Reason (A824/B852); seen on http://kenfeinstein.blogspot.com/2011/01/kant-on-betting-and-prediction-markets.html as linked by Marginal Revolution
Wow that’s interesting...but really weird.
What if you have a firm conviction that betting is immoral?
Then, you prove your belief by NOT betting.
I think the “betting proof” is a cultural thing. Of course...I wouldn’t bet much on that.
The new XKCD is highly relevant.
Wait, the mouseover says:
It just occurred to me that my museum visits as a child deceived me. That hundred year old glass didn’t flow! Lies! They’ve found panes upside down and sideways (with respect to thickness differentials) too.
Reading through the misconceptions page I discover that meteorites are not hot when they hit the earth! And after all this time thinking I could use them to finish off trolls.
My first year Geology lecturer said that apart from wanting to avoid contaminating the sample, the best reason to avoid touching fresh meteorites with your bare hands is the risk of freezer burn.
-- Bjarne Stroustrup
They should teach this in college!
I don’t recall my professors ever making the point that the way we wrote short programs for class would not always work on large programs.
Interesting, it seems like best practices are easy to teach (just follow these simple rules!), and the dysfunctional thing I’d expect would be for professors to tell you to follow them but not tell you why.
Updated.
That’s the failure mode that most of my profs fell into. When I was in school, there was a strong emphasis on correct style—in the extreme case, for example, some professors would fail an assignment if it didn’t have a high enough fraction of comments to functional code—but very little to suggest a coherent theory of software engineering.
From what I remember, most of my peers approached it with the attitude of being just another hoop to jump through.
One dysfunctional thing I’d expect would be the existence or perceived existence of a contrary movement, using the word “dogma” and saying things like “worse is better” and “if it’s stupid but it works, it isn’t stupid” and “those ivory-tower academics never have to deal with real-world problems” and “when theory and practice clash, theory loses”.
For example, this article makes a specific point about a specific situation (mixed in with some crazy), but might still leave one with an impression of “boo carefully planned programs, yay big hairy messes where you don’t know what half the API calls are for”.
Also, general hyperbolic discounting and programmers just not caring.
Also, sort of implied by that: methodologies that don’t actually work.
Unexpectedly semi-relevant: the latest xkcd, Good Code.
Maybe they do, now. When I was in CS, we had no classes on software engineering. But that was a long time ago, in CS terms. So you should not believe that I know what I’m talking about.
Ironic to hear that coming from Stroustrup. The language he has created, C++, is notorious for allowing the programmer to make a wide variety of very subtle errors that are impossible in most other languages.
Yeah, to someone unfamiliar with the topic it would seem that I make very strong statements. “Very subtle”? “Impossible in most other languages”? But the crazy truth is, my words are completely warranted. The most well-known example: after the addition of exceptions and templates to the language, it took several years for people to realize that writing exception-safe template code is a minefield (see Tom Cargill’s 1994 article), and three more years till anyone proposed a valid solution (Herb Sutter in 1997). Note that generic containers were one of the major motivations for templates in the first place! So we see seven years passing from introduction of a feature into a widely-used language, to the first time someone manages to correctly use the feature for its intended purpose.
The good news is, the language is still growing. I think I can confidently predict that when (if) the new standard comes out, people will be finding weird new interactions between features for years to come. I mean, just read that Wikipedia article from beginning to end, and try to tell me you disagree :-)
-Saying of investors
I’ve heard a similar aeronautical saying: Of course pigs can fly, they just need sufficient thrust.
Homer Simpson (referring to a roast pig experiencing a series of diverting events)
I actually like the next statement more:
At least Homer Simpson accepts that the pig is gone.
Technically off-topic...but I’ve never understood why people think turkeys can’t fly. I’ve even seen an ornithologist quoted in the NYTimes saying it (when a live turkey was found on an upper level balcony). Maybe it’s just domesticated turkeys...but I’ve definitely seen wild turkeys fly (and no, it’s got nothing to do with the whiskey).
Which brings me to an interesting (to me) question: why do people base a piece of “wisdom” on a reference that is untrue to begin with?
[and in closing: You don’t win friends with salad.]
That’s a good question. If I had to guess, I would say that most people used to be familiar with the domestic turkey that is being fattened for thanksgiving dinner (or whatever), and those probably can’t fly very well, if at all.
-- Thucydides
If I may, I prefer the fuller version:
Also, dupe: http://lesswrong.com/lw/2ev/rationality_quotes_july_2010/28gb?c=1
Ha—that post refers to Diax’s Rake, which is what happened to spur me to find the Thucydides quote in the first place!
In other news, I’ve invented this incredible device I call a “wheel”.
--Leo Rosten, “An Infuriating Man,” People I Have Known, Loved, or Admired.
-Walter Lippmann
The fact that the above comment got a lot of upvotes (i.e, widespread approval) is ironic.
There is a distinction to be made between “thinking alike things” and “thinking in alike ways”. Because the world is crazy and most people don’t even know there are ways of thinking (and those who do most often profess that everything is subjective), any statement about “thinking alike” is commonly interpreted to mean “thinking alike things”, which truly is a good indicator of scarcity of thought in the common case.
(LessWrong is a near-complete inversion: here, most thinking alike is strong evidence that a huge amount of thought has been happening.)
True and well thought out.
Olivier Morin
Wolfgang Langewiesche, ″Stick and Rudder: An Explanation of the Art of Flying″, Part I, “Wings”. (via)
I don’t think this is really true (but have not been able to downvote anything for quite some time). You can have a functional understanding of how something works (if you do A to it it makes B happen) without having a model of how it works internally. This sort of modeling is what the “theory” practicalists disdain concerns itself with, and they may do well to ignore it.
Because we have limited computational abilities, we will often do better on non-novel problems by learning a few useful patterns than by deriving everything from the underlying model. There is a reason why in elementary-school math classes we do not just give the children the Peano axioms and say “have at it”.
Next paragraph in the book:
The synthesis here is roughly: practical experience works in a sort of Giant Lookup Table fashion, but it has bugs and fails in certain situations. Theory may have limits, but its main flaw is that it includes many useless things. To help those with practical experience, you need an awareness of theory and an awareness of the bugs in practical experience.
Anecdotal evidence: Most of driving, I learned through practice and instruction. I learned to brake smoothly only after my dad told me the underlying physics.
Thinking it over, it’s also a matter of extrapolation. From practice, you can effectively fit a curve to the behavior, but you don’t learn what happens outside the domain where that curve fits—and so, when you stall the wing or lose grip on the rear tires, your reactions will be exactly wrong, because you’re playing by rules that don’t apply any more. And yes, you can learn to fit the point of switchover and learn to fit the behavior in the new regime, in time … but if you crash first, it will be very expensive.
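The curve-fitting point can be made concrete with a toy sketch (nothing to do with actual aerodynamics; the response function below is invented purely for illustration): fit a line to samples from one regime, and the fit extrapolates exactly wrong past the switchover.

```python
# Toy illustration: "practice" fits a curve to observed behavior, but the
# fit says nothing about what happens past the regime boundary.

def response(x):
    # Linear regime up to x = 10, then the behavior reverses (a "stall").
    return x if x <= 10 else 20 - x

# "Practical experience": samples drawn only from the normal regime.
xs = list(range(0, 10))
ys = [response(x) for x in xs]

# Ordinary least-squares line fit, done by hand (no libraries needed).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

def predict(x):
    return slope * x + intercept

print(predict(5), response(5))    # inside the fitted regime: agrees
print(predict(15), response(15))  # past the "stall": fit says 15, reality says 5
```

Within the sampled regime the fit is perfect; one step past the switchover, it confidently predicts the opposite of what happens.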
Agreed, both are advantages of theory.
I would like to belatedly apologize for the terseness of my response—I realize now that I was basically punishing you for not hearing what I didn’t say, which was wrong of me.
In point of fact, I think Langewiesche was not quite correct—you can do things well without theory. Look at control systems. What theory lets you do is predict which practices will do well. We don’t give children the Peano axioms, but we try to make sure what we teach them accords with those axioms.
—Terry Pratchett, Making Money
Although thought by a madman in the book, there seems to be truth in this quote. People often seem to think of the future as a coherent, specific story not unlike the one woven by the brain from the past events. Unpleasant surprises happen when the real events inevitably deviate from those imagined.
That’s how I play chess.
Even on the Discworld, they have Perceptual Control Theory!
Terry Pratchett has an unusual art of presenting fantasy worlds full of nails.
Lewis Hyde, Alcohol and Poetry.
Via David Foster Wallace, A Supposedly Fun Thing I’ll Never Do Again.
To explain—I’m finding lately that the occurrence of irony is a useful warning that something is wrong; some current, important contradiction is being papered over. Sometimes the contradiction is obvious, yes, but among people with the habit of irony, sometimes that contradiction is buried deep enough that the ironist doesn’t know where the contradiction lies.
″...natural selection built the brain to survive in the world and only incidentally to understand it at a depth greater than is needed to survive. The proper task of scientists is to diagnose and correct the misalignment.” —E. O. Wilson
“The incredibly powerful and the incredibly stupid have one thing in common. Instead of altering their views to fit the facts, they alter the facts to fit their views. This can be rather uncomfortable if you happen to be one of the facts that needs altering.”
--Dr. Who
-Robert B. Cialdini, Influence: The psychology of Persuasion, p.59
George Orwell
Or a mirror.
Makes me wonder if a good way to deal with rationality or akrasia or self-improvement would be the kind of support group where everyone tries to find fault with everyone else. It’s so easy to see flaws in others compared to flaws in ourselves, why not use that to our advantage?
Finding the right people to do this who could both handle it and keep it from turning into an insult trading group might be difficult.
I think it’s not just faults. People don’t always appreciate their good points.
The group should be for identifying blind spots in general.
I tend to find focussing on developing strengths to be better than focussing on weaknesses. Mind you, there is a place for constructive criticism. But there are relatively few sources from whom such criticism is valuable.
I don’t follow. If you never focus on things you can’t do well, you’ll never do anything different or build any new abilities.
Piano teacher: You’re not keeping time very well, you could benefit from practising playing to a metronome.
wedrifid: I prefer to focus on developing strengths, and I’m really good at playing loudly so I’ll just do that, thanks.
?
The most important part in that comment:
Followed closely by:
Definitely not:
I was thinking of a group more like “you said your piano teacher suggested practising with a metronome—have you actually done so this week?”
“you’ve said a priority is learning the piano, yet you aren’t keeping track of your practise or recording yourself or making any way to check your progress and get feedback. Have you noticed that is inconsistent with your stated desire?”
“Do you realise how much you are talking about your commute to work compared to its real impact on your life?”
not
“you really suck at the piano”
“and have you noticed how stupid you are?”
“and how you talk forever about boring things?”
Isn’t that exactly what we do here (and on other forums)?
A lot of failings people have are things that are hard to notice online.
Good point.
That is difficult. I prefer the Toastmasters approach of focussing on what was done well along with a few improvable points at once. Such a critique group can segue into an everyone-holds-everyone-down one very fast.
Perhaps the right way to do this is to focus on a particular topic rather than just a general-purpose fault-finding group. That would help people to evaluate others even if they weren’t very close, and also help to keep the criticism about something external and less personal.
For example, a group of people might work together on a project and criticize each others’ anti-akrasia skills purely in terms of work output on that project.
Georg Christoph Lichtenberg
-- Isuna Hasikura, Spice & Wolf [tr. Paul Starr]
I’d probably understand that better if I knew the context.
Lawrence is a traveling merchant in Spice & Wolf who’s received a proposition from someone to buy information on upcoming changes in the precious metal content of a type of silver coin; the cost is a relatively small fixed fee plus a cut of the profits if the information is accurate. Holo, the eponymous “wisewolf,” is essentially telling Lawrence that he has a dominant strategy.
Agreed. I have no clue what it means. I saw Spice & Wolf on the manga shelf in Borders… is it worth reading?
It’s nowhere near as good as MoR in a LW sense, not as good as “The Cambist and Lord Iron” at teaching economics through fiction, and you would not find it an intellectual challenge in any sense, but in the general context of Japanese light novels, it’s good, I think.
(At $8 and what I remember of your comments, I would give 60% odds you would not regret the purchase, and 5-10% you’d like it ‘quite a bit’ or something along those lines. If you do buy it, please tell me before you finish so I can register this as a prediction on PredictionBook.com.)
— Mark Twain (in Pudd’nhead Wilson)
Mark Twain is an uncommonly bad source for business advice, IMO. A watched stock never grows.
Georg Christoph Lichtenberg
Sophocles, Antigone
Wall Street Journal
That’s actually a pretty witty reply on Giffords’s part. I think better of her now.
-- Japanese Proverb
Never understood the math behind that one. Do I start off lying down?
You could end up double-standing. Transcend to a new level of up? Walk up some stairs, perhaps?
Indeed. What is to standing, as standing is to sitting? What is to walking, as walking is to crawling? What is to humans, as humans are to their pets? What is to 3D movement, as 3D movement is to 2D?
Or something.
Jumping.
Running.
Dragons.
4D movement.
Hopping. Each time you halve the number of limbs involved.
Standing like a chicken, with your knee and hip joints bent the wrong way.
I always figured one of the times you get up is supposed to be symbolic, but I’m not sure what of.
That’s how I usually start the day.
Or, you know… crawling.
Ah, I’m glad I’m not the only one who’s noticed that.
you start standing, you end standing.
Then it would be “Fall down seven times, stand up seven.”
Okay, let’s get super technical. May as well, it is LW after all.
You start off as a baby who can’t even crawl. Eventually, after much effort and encouragement from loving voices you get your feet below you and you stand up.
Now that you’re standing (1) you face your first challenge: Walking. You take one leg and put most of it in front of the other, you fall. (1) Why? Because you forgot to move your foot. So you stand up again, (2) and you get your leg AND your foot in front of the other. You crash down on the dog. (2) Gotta get that balance in check, babe! Alright, so we’re up again. (3) You kick that leg forward, you stick an arm out the other way to spare the dog further discomfort and splash, (3) there goes the jube jubes on the coffee table. You’re in heaven!

Your mom perks up from the news to see what’s going on (OF COURSE she notices as soon as the candies are involved). She grabs you, yells at you for stealing candies and wonders how you got yourself in so much trouble. While she steals away your candies, you decide it’s time to find more adventures. In a flash you’re up on your feet (3) this time you’re using the coffee table to stabilize. Your mom takes a glance over and she’s shocked! “ooo my god my baby is wal..” she’s cut off when your ignorant older brother comes in and whisks you off your feet just before your first successful step.

Some commotion ensues between the older people. Eventually you’re placed back down on your bum to start again at your leisure… (4) or is it? Now there’s a crowd. They’re all wanting something from you. You have to think about this of course. You can’t take the pressure, you gotta get out of there and fast. Bam! (5) you’re back up and trying desperately to get away from these ogling weirdos. 1 step, 2 steps, 3 steps, you panic, you fall! (5) They’re still on your tail, back up again, (6) “just head for the door” you think to yourself. “The crowd’s only getting uglier!”

Finally you make it, the people behind you are going nuts, can’t look back now.. The doorway is right there, and it’s open cause dad was doing the lawn, you trip just before the threshold. (6) You roll out the door, back flip up to your feet, and you make a run for it!
The crowd goes wild! You get smoked by an oncoming vehicle (7) (your sister’s tricycle) But you dust it off and get back up (8) now you’re free and running! A clean escape!
What on earth?
HonoreDB was pointing out that if you start standing, then the number of times you’ve stood up can never exceed the number of times you’ve fallen down (unless you can stand up while you’re already standing).
You could sit down without falling.
maybe, as ninjacolin describes, you have to stand up once BEFORE you fall down. So, in fact, to end up standing, you MUST stand up one more time than you fall down (unless you assume that everyone starts out standing, which they don’t).
Is that what the proverb means? Not necessarily… but the math isn’t wrong.
If you interpret “stand up” as “stand back up” it makes more sense.
It’s a ratio of standing vs falling.
Stand first. Fall. Stand. No matter how many times you fall, stand up once more. That always keeps the count of standings higher than fallings. This is straightforward, no?
-- Agnes de Mille
Source: De Mille, Dance to the Piper 77; according to http://books.google.com/books?id=ihFTOcU8kAUC
Errare humanum est, sed perseverare diabolicum.
Rough translation: To err is human, but to persist in error diabolical.
(Saw the quote in William Langewiesche’s Fly By Wire; it is often attributed to Seneca on the Webs, but I can find no citation.)
A pagan-raised Stoic like Seneca was fairly unlikely to use infernal metaphors.
There are aphorisms similar to the first half among classical authors, but the current formulation originated with St. Augustine.
“Do you know, in 900 years of time and space, I’ve never met anyone who wasn’t important.”
Doctor Who (written by Steven Moffat)
-- Eric Laithwaite, Invitation to Engineering
I think that’s a bit of an overstatement, but it is definitely the key.
I definitely like this statement...but I am not sure I agree with it.
Much learning is passive and not a result of wanting or even trying. And a skillful teacher can cause learning (and by extension education) to happen without the student wanting to learn, or knowing that he/she is learning.
I suppose there is a specific distinction between “education” and “learning”, although I am not sure if it functionally boils down to this.
--Hayao Miyazaki
Cf. the Peter de Blanc tweet
This should be the motto of CS folk and programmers everywhere.
It’s the motto of any sysadmin who doesn’t want his brain to fall out.
--Chip Heath & Dan Heath, Switch: How To Change Things When Change Is Hard, pg 181
I like this and agree with the sentiment, but I suspect it’s not quite true as stated.
At least, I can enjoy watching someone build a structure out of a pile of wood, even though I don’t attribute any kind of fundamental pile-nature to the wood and am not shocked by that nature being subverted… I just enjoy watching someone exercise skill. I can imagine enjoying watching a skilled behavior-modification expert construct cooperation out of conflict in the same way.
“Fanatics may suppose, that dominion is founded on grace, and that saints alone inherit the earth; but the civil magistrate very justly puts these sublime theorists on the same footing with common robbers, and teaches them by the severest discipline, that a rule, which, in speculation, may seem the most advantageous to society, may yet be found, in practice, totally pernicious and destructive.” —David Hume
More of an anti-fanaticism quotation, but it seems to belong.
There is no harm in being sometimes wrong — especially if one is promptly found out.
John Maynard Keynes
William Feller, An Introduction to Probability Theory and its Applications
I can’t remember the source of the quote I’m thinking of, but it goes something like this:
“People always remark that I know so much about science and so little about celebrities, but they fail to see that the two are related.”
Does anyone know the original quote?
There’s this:
Maybe that was it and I spruced it up in my head! Thanks.
Related, from The Onion.
Where did you find that quote originally?
He posted it quite a while ago: http://lesswrong.com/lw/n8/rationality_quotes_9/
I was unable to track it any further—all the Google hits seem to trace back to this or the original posting.
Here’s one of a few mailing list postings where he has it as his signature.
del
A large part of education is learning to use your own judgement.
-- Ardath Mayhar, Khi to Freedom
— Kyoshi Antonio Fournier
The post that he’s responding to is also interesting.
Although the very question “Are We Stubborn or Manipulable?” invites a post on how to manipulate people by harnessing their stubbornness. I’ve won plenty of games that way. :)
-Jeremy Grantham, about the stock market/economy.
-- Peter McWilliams, Do It!: Let’s Get Off Our Buts
Evolution has been optimising humans to learn to walk as babies; it hasn’t selected (directly, or anywhere near as strongly) for ability to do Topology.
That’s true, but an adult will still learn topology a lot faster than a baby :).
I guess whether “adults learn faster” depends on how you look at it. Adults can learn any given thing faster than babies, but babies are getting the low-hanging fruit.
All the empirical evidence I’ve ever seen on the subject indicates that this is the precise opposite of the truth. Could you provide evidence to support this, please?
But an adult will never learn second languages faster than a child, and in fact will never learn a first language at all if not during childhood.
The same is true with sports. I imagine that if an adult has never learned to walk (somehow) that it would take a lot longer than a few months to learn to walk (a newborn doesn’t take years to learn to walk...he/she takes years to build muscle strength, and then typically a short time to learn to walk and then immediately run).
I think we all wish McWilliams was correct...I just don’t think he is.
Pretty much true.
There’s probably some fading of plasticity, but there’s lots of other explanations too. Children learning languages are surrounded by it, and spend all their time learning it. Almost all adult language learning is part time, rather than full time immersion. Fluency in either case requires several years, but “can get by in” is plausibly shorter (weeks to months, say) for an adult learning secondary languages as compared to a child. An adult also often has the advantage of being able to discuss the structure and vocabulary of the target language in already acquired language.
What is nearly universally true is that people’s ability to make and distinguish between phones not present in their early environment is very weak. I will probably always have a difficult time distinguishing between “cot” and “caught” because they contain allophones of the same phoneme in my dialect. Same for “merry”, “mary” and “marry”.
When I took a college phonetics course, I and a classful of other students more than doubled the number of distinct sounds we could distinguish and produce. So it can certainly be done. I think adults normally don’t because they don’t need to, since mapping to their native language’s phonology is so much easier. Also, when I took a foreign language in high school, there was no IPA and none of the teachers had the linguistics training necessary to explain the cause of an accent even if they took the time to do so.
But it’s certainly true that language learning is automatic for children, and not so automatic for adults.
Child → several years to fluency.
Adult → http://fluentin3months.com/ he started learning other languages in his late twenties.
There is substantial literature that suggests that language acquisition is more difficult, slower, and ultimately less successful in adults than in younger people. I believe it’s often mentioned that it’s at about 12-14 yrs old that the neural plasticity for language acquisition fades. http://pandora.cii.wwu.edu/vajda/ling201/test4materials/secondlangacquisition.htm http://serendip.brynmawr.edu/biology/b103/f03/web2/mtucker.html
I never said that it’s impossible for an adult to learn a foreign language, just not as fast or as effectively. That man’s blog seems interesting...but I’m quite skeptical that he becomes “fluent” in a language in 2-3 months. I’d be interested to see how fluent he actually is (never mind the similarities between several of those languages, which aids in acquisition...I’ve experienced that first hand). Then again, maybe he’s an unusual case.
simplyeric:
I think the confusion here is about what exactly is meant by “learning” a language.
If the goal is to quickly build some rudimentary skill for finding one’s way around in a foreign language (which basically boils down to memorizing a lot of words and stock phrases, along with a few very basic syntactic patterns), an intelligent adult will likely be able to achieve it faster because of better work ethic and superior general experience in tackling intellectual problems.
On the other hand, if the goal is to become indistinguishable from a native speaker, then the kid clearly has an advantage no matter how long it takes, because the task is impossible for the overwhelming majority of adults (which for this purpose means anyone over 12 or so). You may become a fluent speaker and perhaps even a good writer, but you’re stuck with a foreign accent, and even if you manage to get rid of it with special training, you’ll still make occasional subtle but noticeable syntactic and semantic errors.
If the goal is something in-between, the winner will depend on the exact benchmarks of success.
For the record he says 3 months to fluency is his personal goal, not what he guarantees/claims is always possible.
There are videos of him speaking on his various blogs, but since I don’t speak any non-English languages I can’t judge his fluency, but I expect he is conversationally competent and not indistinguishable from a native, and yes he has to work hard at it so it’s not the same kind of easy learning as children have, which seems to be what your links support, so no argument from me there.
However, he still seems to learn a useful conversational ability in a language in months whereas children take years to do that, so I still think your statement “but an adult will never learn second languages faster than a child” is a strange claim, unless you specifically mean “like a native”, which seems a much stricter test than necessary.
From your links:
Yet the results of child language learning are not equal, nor always total fluency. Plenty of adults barely seem to know what they are saying, cannot express themselves clearly, do not finish sentences ‘properly’, and don’t notice the difference between similar words and sentences with different meanings. People cannot orate without learning to be orators, cannot write without learning to write (I mean author well-written texts instead of drivel), cannot tell stories captivatingly without practice, and cannot follow official formal documents: all linguistic skills you might or might not lump in with fluency, leading to potentially very different expectations of a fluent person.
The second link makes several comments including adult lack of opportunity (limited classroom time) which is interestingly mentioned here: http://www.fluentin3months.com/hours-not-years/
Mario Pei, who knew an astonishing number of languages (to what degree I don’t know), quipped that the first ten are the hardest.
Eating solid food does not fit in the list of learning tasks! It is a physiological adaption.
(But the general point is good.)
-- John Locke, An Essay Concerning Human Understanding, book 4, chapter 16
scientia potentia est
Knowledge is power.
--This quote is attributed to Sir Francis Bacon, but we don’t really know.
---The Flaming Lips, “All We Have is Now” (relevance: anthropic doomsday argument)
-- Émile Coué
This sounds like extremely naive optimism. A vast majority of games in all team sports, for instance, probably end in one team failing to do a possible thing they thought they could do.
Yes, I agree. The author also promoted therapeutic hypnosis. I like the quote in spite of its hyperbole.
Point one: I think you and he are using different definitions of “possible”.
Point two: “Win the game” is not a well-specified outcome in hypnosis or NLP, since it relies upon matters outside your control.
What do you think the different definitions of possible they are using are?
Definition 1: possible as in “I can imagine winning, therefore it’s possible”
Definition 2: possible as in “actually possible for me to do in reality, independent of whether I imagine it to be so”
The quote was using definition 2: that is, “if you persuade yourself that you can do a certain thing, provided this thing is possible [in the real world when you attempt it], you will do it, however difficult it may be.” Desrtopa’s argument from team sports is using the first.
IOW, just because a given team imagines it possible to win does not mean they can win, because winning is not under their control. They can, however, imagine it possible to execute various skills at a high level of proficiency, and do this, whether they win or not.
In fact, it is generally reputed that the “winningest” teams tend to follow this philosophy: i.e., to practice the execution of basic skills to a near-exclusion of any consideration of “winning”. This is quite in keeping with the spirit of the original quote, which is regarding that which is actually possible given a particular set of circumstances (such as the state of the other team) which are not actually under your control.
“IOW, just because a given team imagines it possible to win does not mean they can win, because winning is not under their control”
But just because a team does not win, does not mean it was not possible.
I mean, think of all the things that a person does multiple times but doesn’t do every time. Hit a golf ball x yards, run a 7-minute mile, sing on key. The “imagining” has nothing to do with it.
In a deterministic context, things that are “possible to do in reality” and things that are necessarily going to happen have complete overlap. In this context, saying that if it’s possible then you will do it is vacuous.
In any case where we can’t predict future events with certainty, this definition is fairly useless. The colloquial, and more generally functional definition of possible, is that we cannot discount the potential that a thing might happen prior to the fact. Just because we can imagine something happening does not mean that it cannot be discounted as a possibility, and just because something is necessarily going to happen does not mean we can know that ahead of time and discount the possibility of events that would be exclusive with it.
By the practical definition of “possible,” the quote is not true, and by the strict deterministic definition the quote is still not true, because one can in fact do things one believes oneself to be incapable of.
If you interpret the quote to use the colloquial definition of “possible,” but assume that it only applies to things where no elements of the activity are outside your control, then it’s deceptively lacking in meaning, because beyond a trivial scale there is very little one can accomplish where this applies.
Fallacy of the grey. An athletic competition involves a great deal of elements where one’s control is neither complete nor absent.
If the author had intended deception, he’d have seen no need to include the disclaimer regarding possibility. He effectively said, “if you believe you can, then you can do things that otherwise would be very difficult—you won’t do the truly impossible, of course, just the seemingly impossible.”
Since the beginning, not one impossible thing has ever happened. If it happened, it was possible, after all.
He said that if you believe you can do a thing, and it is possible, you will do it, which is quite different from saying that you can do it. If you add as many qualifications as are necessary for it to be accurate, it is no longer an interesting, or, I would think, particularly inspirational statement.
This is more depressing than inspiring, but the final sentence is worth contemplating. It’s from a review of a short book by the 19th-century economist Francis Edgeworth, showing how to begin a mathematical (utilitarian) treatment of morality.
WS Jevons (1881), “Review of Edgeworth’s Mathematical Psychics”, Mind, Vol. 6, p.581-583.
If you just have a single problem to solve, then fine, go ahead and use a neural network. But if you want to do science and understand how to choose architectures, or how to go to a new problem, you have to understand what different architectures can and cannot do. -- Marvin Minsky
Reminds me of this story.
-- Lao Tzu
Things would be so different if they were not as they are.
I was very torn about where to post this, as it includes an image. Not only is it an image, it’s an animated GIF, which can be considered obnoxious for various bandwidth and aesthetic reasons. However, I felt the humour value was worth the risk, and this seems like the right thread. So here’s the quote:
That quote is from “Think Like Reality”, and therefore a violation of Rule 3.
Ah, completely missed that.
Clearly the wrong thread, then. Should I delete the comment, and can you recommend somewhere else to post it?
I’d probably post it in the latest Open Thread.
Thanks
-- muflax (on his blog, not on LW, so it’s cool, right?)
---Summerspeaker, “The joys of solidarity with the technophobic”
Bjarne Stroustrup
--W.H. Auden
Dr. Manhattan, Watchmen
Hey, no quoting yourself.
I’m still on Mars, Laurie.
“Simultaneous” is a word that you use from within time, to refer to relations described by time. I don’t think you’d use the word that way if you were really looking at the universe at the level of timeless physics, really seeing the whole design in every facet. (Though it is the word you’d probably use if you were a human author trying to write a character who sees the deeper reality beyond time, if you yourself don’t quite see it. :P) Probably the intuition behind that is imagining looking at spacetime as something like a film reel laid out in front of you, and seeing that it’s all already there, no matter what the people in any given frame seem to think. But that puts your perspective outside this universe’s apparent time dimension, but inside an imagined outer timeline against which you can continue using words like “simultaneous” or “already”. And that’s no way to really reduce time; it’s a mistake similar to trying to reduce consciousness by putting a little homunculus inside your head that watches your sensory input on a projector screen. It’s reducing a black box to some visible machinery interacting with… another copy of the same black box.
I don’t think there’s any perspective from which “Time is simultaneous” makes sense, unless our universe is actually a static block of already-computed data on the hard drive of some computer in a different reality with its own timelike dimension.
(Edit: Oh, and Dr. Manhattan is being a bit uncharitable by claiming that “humans insist” on seeing things in this limited way. Sure, I consider my lack of omniscience to be a moral failing on my part, but that doesn’t mean I’m not trying to do better.)
-- Kay Hanley
This reminds me of hyperbolic discounting and doesn’t seem to have other redeeming qualities.
I was thinking along the lines of doing one’s best with earthly life rather than waiting for a promised afterlife. Mind you the song’s not really about either of those things, but I first heard the quote out of context and decided to keep it that way. :p
Bother. I had a quote to post, created the thread but forgot the quote. Probably but fortunately because it was only moderately good.
René Descartes walks into a bar …
He’s a regular, so the bartender says, “Hey, René! How ya been? You have your usual?”
The philosopher pauses and says, “I think not.”
He ceases to exist.
Jokes about “I think, therefore I am” are always amusing to students of logic, because people twist themselves in knots trying to make it seem weird, when the only thing you can do with “I think → I am” is “I am not → I think not”, and “Rene Descartes died, therefore he stopped thinking” isn’t funny.
An eight year old Rene Descartes had been spending hours at study, trying to work out a particularly tricky geometry problem. His Jesuit teacher says “I think it’s time for a break. You’re clearly exhausted.” “I am not!” the boy insists, as he falls unconscious.
I’m sure someone will point out if I am incorrect here, but:
Isn’t it somewhat irrational to analyze why a joke isn’t funny? Isn’t it sortof like trying to use a rational analysis to demonstrate that you should or should not like the sound of birds?
I mean, hey now, c’mon.
( In any case, the joke wasn’t “Descarte died thus stopped thinking”. )
Nope. Our best guess on humour is that it is unexpected pattern-breaking. This is a function of our extremely complex pattern-matching muscle, the brain. My rational analysis of what happened in Costanza’s post is that he attempted to break the pattern of “cogito ergo sum is an axiom / truism or important / obvious / wise statement” by presenting a situation where the statement’s conclusion is absurd.
My post pointed out that the joke was funny to students of logic because it breaks the pattern of “logic”.
I am not sure how this is like demonstrating that you should or should not like the sound of birds. That equivocates “is” and “should”—a joke is funny or a joke isn’t funny, and you can analyse why this is so quite rationally, whereas “a joke should be funny” is definitionally true. Your statement is further scuttled by talking about what a person should do, instead of a concept like ‘joke’ or ‘bird’. There’s no agreed-upon shouldness for people.
The joke was definitely not “Descartes died thus stopped thinking”; nobody would make a joke about that, because that’s not funny. (This is a filter on jokes about Descartes’ cogito: all correct interpretations are not funny. Jokes are—at least an attempt at—funny. Therefore all jokes about the cogito are not correct interpretations.)
Critical to a joke is that it fits some other pattern, which is what I think you’re getting at.
Here is a not-funny joke:
“1, 2, 3, 4, bananas!”
It seems like you’re expecting it to follow a logical-consequences pattern. It’s really following a more pun-like pattern. I found it amusing.
Yeah, it should be “1, 2, 3, Kumquat!”
I laughed.
Maybe you were thinking of this.
Never saw that before. I may have been thinking of this somewhat.
This is basically the entire absurdist humour approach, in one joke.
Only if the “Here is a not-funny joke:” is considered to be part of the joke. It wouldn’t have been as funny without that preface (and it would actually have been unfunny if it had said “Here is an absurdist joke:”).
“Nope”
Isn’t it not entirely rational to declare something so unequivocally, based on:
“Our best guess...”?
A. I’m not sure I agree that humor is simply unexpected pattern breaking. I’m sure there’s a link that elaborates that theory? There are many aspects of humor, and I think breaking it down to such a blanket statement is probably too simplistic. (I do agree that’s certainly part of it).
B. Even if your statement is correct, you are then also implying that it is purely objective what is funny, and thus purely objective what the patterns are and how they are to be broken. Patterns to you might not be patterns to me. Or we might see the same pattern, and you might not find breaking it funny, but I do. Or I might find breaking the pattern offensive while you find it neutral. Or you might expect it and I might not. Or you might understand it while I don’t.
Because of A and B, I think that “funny” is subjective. People can rationally discuss why they think something is funny...but I don’t think they can discuss it in the context of “that is objectively unfunny”. You are implying that you know all the patterns, and can comment on whether the pattern was successfully broken in that instance.
Or rather: you can rationally analyze why that joke wasn’t funny TO YOU, but you can’t rationally analyze why it WAS NOT funny.
If I rephrase my bird issue: it’s like rationally analyzing that bird sounds are or are not pleasing. (you’re right that I phrased it sloppily).
As for
“The joke was definitely not “Descartes died thus stopped thinking”
I might misunderstand, but I thought that’s what you said:
“Rene Descartes died, therefore he stopped thinking” isn’t funny.
(although I’m not sure this aspect of it is important to the bigger issue of “funny”)
In the right context, I find it to be. It’s so obvious and uninsightful that it causes an unexpected pattern break if you expect any sort of twist or clever input.
And there is why it seems not entirely rational to discuss what is and isn’t funny...Shokwave believes that he has rationally shown that “Rene Descartes died, therefore he stopped thinking” isn’t funny. You have used his same logic to demonstrate that it IS funny.
I personally don’t think it’s funny, as delivered. But, with the right delivery, it could be hilarious, I suppose.
I’m fascinated by the rational discussion of “the nature of funny”, by the way. I was opining on the discussion of “the objective funniness of a particular joke”.
Nope. That we can only guess means we don’t understand; what we don’t understand, we ought to analyse, rationally if at all possible.
This section in quotes is the only conclusion you can draw from Descartes’ “I think, therefore I am”. Humans are familiar with the concept that death stops brain function, so the section outside the quotes is invariant over all subjective viewpoints. Therefore, someone trying to make a joke about Descartes’ “I think, therefore I am” is almost certainly going to commit some form of logical fallacy, because the only non-fallacious route isn’t a joke. That is, “joke” strongly implies “fallacy”, because “correct” strongly implies “not funny” (implicit assumption that “funny” is a necessary condition for “joke”).
“What we don’t understand, we ought to rationally analyze....”
Absolutely. And what “our best guess” implies to me is that we don’t fully understand “funny” or “joke” or “comedy”. So we ought to rationally analyze that issue. What I feel you did there was you took your interpretation of “our best guess” as good enough and moved forward with unequivocal confidence to apply it to a joke that someone wrote. I feel like there is a procedural lapse there. You were analyzing The Joke At Hand, while admitting that we do not really understand “jokes in the abstract”.
Thus: we don’t understand what makes certain bird sounds pleasing to people, but I am going to make an unequivocal statement that this bird sound is objectively not pleasing, based on our best guess.
anyway...
“Rene Descartes died, therefore he stopped thinking” … is the only conclusion you can draw from Descartes’ “I think, therefore I am”.
You are assuming a causality that the “being” creates the “thinking”. One could also assume that the thinking creates the being, which is where the joke forms.
I personally think it’s neither: “thinking” is evidence of “being”, the causality being ambiguous.
That’s not even rational, it’s affirming the consequent.
Re-writing Descartes’s “I think, therefore I am” as (“If I think, then I am” and “I think”); therefore “I am”. Then the joke’s “I think not” would be denying the antecedent, which is still a fallacy, of course.
I seem to represent P → Q and ~Q → ~P the same way in my mind, but giving the resulting fallacies different names reduces ambiguity, so I guess this is a useful distinction.
P→Q is not logically equivalent to ~Q→P. Perhaps you meant P→Q and ~Q→~P.
Fixed, thanks.
Affirming the consequent is a totally different fallacy -
“If P, then Q and Q is true; therefore P”.
Denying the antecedent with P and Q:
P → Q
~P
Therefore ~Q
Affirming the consequent with ~Q and ~P
~Q → ~P
~P
Therefore ~Q
Wow, I feel kind of bad just writing those chains of “deduction”. Anyways, the same result was concluded from the same minor premise, the only difference is the major premise, and P → Q and ~Q → ~P are equivalent.
edit: formatting
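The equivalence claimed above, and the invalidity of both fallacy forms, can be checked mechanically by enumerating every truth assignment. A minimal sketch in Python (the `implies` helper is just material implication, introduced here for illustration):

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b is false only when a is true and b is false.
    return (not a) or b

# Contrapositive equivalence: P -> Q and ~Q -> ~P agree on every assignment.
for p, q in product([True, False], repeat=2):
    assert implies(p, q) == implies(not q, not p)

# Denying the antecedent: premises (P -> Q) and ~P, conclusion ~Q.
# The inference is invalid because there is an assignment where both
# premises are true but the conclusion is false.
counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                   if implies(p, q) and not p and q]
print(counterexamples)  # [(False, True)]
```

Since denying the antecedent with P, Q and affirming the consequent with ~Q, ~P have equivalent major premises and identical minor premises, the same counterexample refutes both, which is the point being made.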
By the principle of explosion, all fallacies are the same.
(∀P,Q ((P->Q) ^ Q) → P) <-> (∀P,Q ((P->Q) ^ ~P) → ~Q)
That depends on the definition of same. All fallacies imply each other, but the premises and conclusions in these two should be represented identically by a computer.
No. This is not the case. Just because something is a fallacy doesn’t make its negation true. Thus for example (P→Q) → (Q→P) is a fallacy. But ~((P→Q) → (Q→P)) is not a theorem of first order logic. So even if I throw (P→Q) → (Q→P) in as an additional axiom I can’t get a general explosion in first order logic. Contradictions lead to explosion, but fallacies do not necessarily do so.
Sure you can.
R v ~R (axiom)
R → (R v ~R) (by 1)
(R v ~R) → R (by the new axiom)
R (by 1 and 3)
Edit:
(P->Q) → (Q->P) is not a fallacy. ∀P,Q: (P->Q) → (Q->P) is a fallacy, and its negation is ∃P,Q: (P->Q)^~(Q->P) which is indeed a theorem in first order logic.
Huh?? If you allow quantification over propositions, you are no longer using first order logic.
I think you were closer to being on track before your edit. The first thing to realize is that a fallacy is not a false statement. It is an invalid inference scheme or rule of inference.
So, with P and Q taken to be schematic variables (to be instantiated as propositions), the following is a fallacy (affirming the consequent):
P → Q |- Q → P
Or, you could have simply corrected the words “additional axiom” in the quoted claim to “additional axiom scheme”.
Er, sorry. Meant propositional calculus not first order logic. I think my statement works in that context.
What’s specifically going on here is that (P → Q) → (Q → P) is false whenever P is false and Q is true.
Adding it as an axiom schema to propositional calculus results in a contradiction. It cannot be added as a single axiom to first-order logic.
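That falsifying assignment can be confirmed by brute force. A quick sketch (again with an illustrative `implies` helper for material implication):

```python
def implies(a, b):
    # Material implication: a -> b is false only when a is true and b is false.
    return (not a) or b

# Find every assignment that falsifies (P -> Q) -> (Q -> P).
falsifying = [(p, q)
              for p in (True, False)
              for q in (True, False)
              if not implies(implies(p, q), implies(q, p))]
print(falsifying)  # [(False, True)]
```

Only P = False, Q = True falsifies the schema, which is why instantiating it as a single closed formula does not yield a contradiction, while adopting it as a schema (instantiable at every P, Q) does.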
Yes, you are correct. I was confused in a very stupid way.
It’s a joke about rationality. Why should all rationality quotes need to be direct and inspiring?
I would in general approve of a joke about rationality. I just don’t think that this was particularly related to rationality or particularly funny, and it contained a logical fallacy without the fallacy being the butt of the joke, so it was not very rational.
However, on the assumption that at least once in René’s life he did indeed walk into a bar and refuse his regular drink in such manner (it seems possible), the premises and the conclusion are all true.
I think the implication that the cessation of existence was immediate was at least as strongly implied as that the exchange took place in French.
Of course. One of the things that learning logic from philosophy teaches you is to be nitpicky about deriving causation from conditionals. A truth table, for better or worse, contains no field for “strong implication contradicted”.
I don’t think I derived this implication from the ‘I think, therefore I am’; I think I got it from how it happened right after, though I can’t be sure about that specific instance of causation in my brain.
Best summary of the justification for Bayesian AI I’ve ever heard.
How many Less Wrongers does it take to ruin a joke?
None. Any general intelligence should be able to do it.
Two, judging by the number of people between your comment and the top one, with the top comment being excluded because the ruining of a joke is defined not to include the initial statement of the joke and, due to the way in which you mentioned the ruining of the joke, you presumably were commenting on an existing situation, rather than one which you had just completed ;)
Hmm, it was only one that time.
Really? What if they totally mess up the punchline? Or accidentally use a synonym of the word that was being set up for a pun?
Good point, I did assume that the joke was told correctly.
AUGH