Rationality quotes January 2012
Here’s the new thread for posting quotes, with the usual rules:
Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself.
Do not quote comments/posts on LW/OB.
No more than 5 quotes per person per monthly thread, please.
“if we offer too much silent assent about mysticism and superstition – even when it seems to be doing a little good – we abet a general climate in which skepticism is considered impolite, science tiresome, and rigorous thinking somehow stuffy and inappropriate. Figuring out a prudent balance takes wisdom.”
– Carl Sagan
Everyday words are inherently imprecise. They work well enough in everyday life that you don’t notice. Words seem to work, just as Newtonian physics seems to. But you can always make them break if you push them far enough.
--Paul Graham, How to Do Philosophy
[surprisingly not a duplicate]
-- H. L. Mencken, describing halo bias before it was named
I like the pithy description of halo bias. I don’t like or agree with Mencken’s non-nuanced view of idealists. It’s sarcastically funny, like “a liberal is one who believes you can pick up a dog turd by the clean end”, but being funny doesn’t make it more true.
The point is that idealists suffer from a halo bias around their chosen ideal.
Do roses make for good soup? They make for good chocolate.
Rose water is used for flavoring, sometimes. Roses have essentially no nutritional value, though, and cabbages are widely held to taste better than they smell.
I’ve had rosewater flavoured ice cream.
I bet cabbage ice cream does not taste as nice.
-Saint Thomas Aquinas
I wish I had memorized this quote before attending university.
*This comment was inspired by Will_Newsome’s attempt to find rationality quotes in Summa Theologica.
Summa Theologica is a good example of what happens when you have an excellent deductive system (Aquinas was great at syllogisms) and flawed axioms (a literal interpretation of the Bible).
Aquinas probably meant something different by “literal interpretation” than you think. For instance, I’m pretty sure he agreed with Augustine that the six days of creation were not literally six periods of 24 hours.
Out of curiosity, where did Augustine say that? It’s interesting that anyone bothered doubting that the six days were literal before the literal interpretation became embarrassingly inconsistent with established science.
The first three “days” happened before the sun and moon were created, so a literal interpretation was problematic even then.
Eh, there’s an easy hack around that: God already knew what the length of a day was before it created the sun and the moon.
The literalness or otherwise of the description wasn’t really an issue of major debate one way or the other until there was a strong alternative hypothesis. There’s no political or signalling benefit to supporting a bizarre position when you have nothing to compare it to.
Yes. So, the question is, Which alternative hypotheses were on the table before Darwin, and why were they considered compelling?
If I was copying over rationality quotes from the Summa I’d have gone for way different stuff, Aquinas was a fucking beast of a rationalist. I was just testing LW. Karma is not nearly as useful as accurate beliefs.
I don’t know about a beast, but in general philosophers from the Middle Ages are far underrated compared to, say, philosophers from the “Enlightenment”.
I think that’s a product of people being evaluated by the ‘rightness’ of their conclusions rather than the validity of their arguments, so someone who rationally derived a wrong conclusion from bad data is less respected than someone who found a conclusion similar to our present ones by bad reasoning or sheer chance (e.g. certain ancient philosophers).
Maybe, but that doesn’t explain why there is so much misinformation about medieval philosophy in popular sources. For instance, as Will Newsome tried to point out, Saint Thomas Aquinas was arguably a compatibilist with respect to the problem of free will, but I was taught in university that the “right” solution to the problem of free will (compatibilism) had to wait for a cognitive scientist (specifically, Daniel Dennett). There are numerous issues where thinkers from the Middle Ages did come to roughly the “right” answer, yet moderns teach that they didn’t. There has got to be more to the story.
I am surprised by this. The proto-compatibilism of Aquinas might be little-known, but I thought it was common knowledge that compatibilism has a long pedigree before the late 20th century, including most logical positivists like Ayer and earlier British empiricists like Hume (I would include Spinoza as well). What Dennett gives is a version informed by modern cognitive science, but not especially novel in its basic features.
--Piet Hein
Lesswrong!
This has been quoted by Yvain before, but not here.
I was very surprised to see this was not a dupe; checking, the copy in my Mnemosyne was simply taken straight from a collection of his grooks. A missed opportunity.
Do you mean you have a deck for quotes? As I’m just getting into trying out spaced repetition and trying to come up with things to memorize, I’m wondering about your reasons for memorizing quotes (if that is indeed what you’re doing). Do you have some sort of system of question/answer pairs that help you remember quotes that are applicable to certain situations? Or are you trying to memorize quote authors? Or what?
I add quotes because it’s a handy sort of quotes file (many people keep them) and because I like being able to reel off quotes or just have them handy in my memory for writing.
There’s nothing fancy about them: the question is the quote, and the answer is all the sourcing and bibliographic information. I grade them based on whether I feel I could paraphrase them in a relevant context. (“Ah yes, good old Box’s quote about how ‘all models are wrong but some are useful’. Good to remember for statistical discussions. Mark that one a 4.”)
Are the decks you personally use available anywhere?
http://www.gwern.net/Spaced%20repetition#see-also
Thanks a bunch :)
Do not accept any of my words on faith,
Believing them just because I said them.
Be like an analyst buying gold, who cuts, burns,
And critically examines his product for authenticity.
Only accept what passes the test
By proving useful and beneficial in your life.
-- The Buddha, Jnanasara-samuccaya Sutra
Good instrumental rationality quote; not so good for epistemic rationality.
Why do you say that?
“Proving useful in your life” (but not necessarily “proving beneficial”) is the core of instrumental rationality, but what’s useful is not necessarily what’s true, so it’s important to refrain from using that metric in epistemic rationality.
Example: cognitive behavioral therapy is often useful “to solve problems concerning dysfunctional emotions”, but not useful for pursuing truth. There’s also mindfulness-based cognitive therapy for an example more relevant to Buddhism.
I suppose that is a tension between epistemic and instrumental rationality.
Put in terms of a microeconomic trade-off: The marginal value of having correct beliefs diminishes beyond a certain threshold. Eventually, the marginal value of increasing one’s epistemic accuracy dips below the marginal value that comes from retaining one’s mistaken belief. At that point, an instrumentally rational agent may stop increasing accuracy.
On the other hand, it may be a problem of local-versus-global optima: The marginal value of accuracy may creep up again. Or maybe those who see it as a problem can fix it with the right augmentation.
There is no tension. Epistemic rationality is merely instrumental, while instrumental rationality is not. They are different kinds of things. Means to an end don’t compete with what the end is.
Upvoted for this
It is useful for pursuing truth to the extent that it can correct actually false beliefs when they happen to tend in one direction.
This sometimes comes at the expense of other truths, just as pursuing evidence for your preferred conclusion turns up real evidence but a less accurate map.
Related quote from Epictetus.
“A casual stroll through the lunatic asylum shows that faith does not prove anything.”
Friedrich Nietzsche
That would seem to be an odd notion of “faith”; is the translation untrue to the original or is Nietzsche just being typically provocative? (I also personally don’t see how the quote is at all profound or interesting but that’s a separate issue and more a matter of taste.)
I apologize for practicing inferior epistemic hygiene. Thank you for indirectly bringing this to my attention. I knew that the quote was commonly attributed to Nietzsche, but I had never seen the original source. It would seem to be a rephrasing of this quote from The Antichrist:
Ah, that sounds a bit more like the Nietzsche I know and kinda like! Thanks for digging up the more accurate quote.
I’d parse the quote as meaning “Believing in something doesn’t make it true”, in which case it’s something that pretty much everyone on this site takes for granted, but that the average person hasn’t necessarily fully internalized. Yudkowsky felt the need to make a similar point near the end of this article, and philosophers as diverse as St. Anselm and William James have built entire epistemologies around the notion that faith is sufficient to justify belief, so obviously it’s a point that needs to be made.
I dunno about St. Anselm but I found James’s “The Will to Believe” essay reasonable as a matter of practical rationality. The sort of Bayesian epistemology that is Eliezer’s hallmark isn’t exactly fundamental, and the map-territory distinction isn’t either, so I don’t find it too surprising that e.g. Kantian epistemology looks a lot more like modern decision theory than it does Bayesian probability theory. I suspect a lot of “faith”-like behaviors don’t look nearly as insane when seen from this deeper perspective. So on one level we have day-to-day instrumental rationality where faith tends to make sense for the reasons James cites, and on a much deeper level there’s uncertainty about what beliefs really are except as the parts of your utility function that are meant for cooperation with other agents (ETA: similar to Kant’s categorical imperative). On top of that there are situations where you have to have something like faith, e.g. if you happen upon a Turing oracle and thus can’t verify if it’s telling you the truth or not but still want to do hypercomputation. Things like this make me hesitant to judge the merits of epistemological ideas like faith which I don’t yet understand very well.
This sort of taxonomy seems to deserve a more thorough treatment in a separate post.
-Anon http://www.quora.com/What-is-it-like-to-have-an-understanding-of-very-advanced-mathematics#ans873950
(emphasis mine)
Teaching, for me and several other people I know, serves the purpose of reveling in your mastery. In fact, Feynman said it best:
Teaching helps me a lot in this respect, because I become very insecure sometimes when I do my research.
I can’t tell if he presents this as a good thing or a bad thing.
At the very edge it’s also useful to be able to work while in a state of sheer existential dread.
In my experience, if you find yourself in “a state of sheer existential dread”, that probably means you’ve done something wrong, most likely made a category error somewhere along the way.
“Never interrupt your enemy while he is making a mistake.” —Napoleon Bonaparte
(This has been mentioned before on LW but not in a quote thread. I figured it was fair game.)
Just make sure to only apply this one to your actual enemies, and not to people who generally wish you well but disagree on some key point.
Interrupting even neutral associates when they are making a mistake does not necessarily have good outcomes for you either. Being the messenger has a reputation...
It’s apparently a misattribution, sadly.
It looks like it’s in the attribution section to me, not the misattribution section.
Sister Juana Inés de la Cruz, 1691 (tr. Pamela Kirk Rappaport)
--Frank Adamek
While you’re there, enjoy the laddergoat.
Now in live action
.
Fabius actually seems a little irrational in this quote. At first he objects to Augustus’s interpretation because Augustus is not an expert on the interpretation of signs, which is reasonable. But then when Augustus does have an interpretation that’s coming from an augur, Fabius still continues to question it, pitting his view against expert opinion like it was still just the opinion of Augustus. Since it is not established that Fabius would be an augur himself, this seems like motivated cognition / not properly updating on evidence.
Alternatively, it could be that Fabius doesn’t actually believe in omens, but in that case first appealing to the need to get an expert opinion is pretty dishonest.
Of course, Alejandro’s comment below does clarify that Livia is probably lying about the augur’s testimony, but I’m going by the quote as it was posted (and as most people probably read/voted it).
Fabius does not want to argue with a fool more than is necessary. He engages the heavy guns only when he needs to, this time at the end of the dialogue.
My kind of a (dishonest you say) guy.
Because days is the Schelling point interpretation, and if gods are communicating with you they’ll probably go for the Schelling point. Lightning implies Zeus-Jupiter, so Augustus should look into historical examples of Zeus talking to people to see if Zeus tends to be misleading in ways similar to those Fabius warns of; in fact the augur had probably already considered things like this before speaking with Livia. And Fabius should trust the augur, who is a specialist in the interpretation of signs and probably has more details of the case than he does. I mean seriously, what are the chances that the letter C would get struck by lightning? We are beyond the point of arbitrary skepticism. Deny the data or trust the professionals. (I’m not familiar with the series in question, I’m just filling in details in the most likely way I can think of.)
ETA: Wait, maybe Fabius is trolling Augustus/me? …Nice one Fabius! I approve of your trolling. Downvote retracted. (Oh yeah and this is an excuse to link to the Wiki article on assassination markets.)
For everyone who knows that Livia is the Magnificent Bastard of the series (which is made clear from the first episode, so no spoiler there), the highest probability mass goes to the hypothesis that she was lying about having spoken to an augur or about what he told her, and that she wanted Augustus to question her and only feigned to resist. And “everyone who knows” at this stage probably includes Fabius, and every other character but Augustus.
So the leader of the relevant transhumanly intelligent entities is on the side of the Magnificent Bastard? If I was Augustus I’d seriously consider being nice to the Jews and asking YHWH for guidance.
(Rationality: it works even better in magical universes! (Like, ahem, the one we’re in.))
.
That my comment is at +2 while its parent is at +17 is a pretty clear demonstration of the lack of local sanity, no?
My new voting policy is to downvote every single comment that makes baseless inferences from raw karma, starting with this one. It’s much less informative than you seem to think it is. At minimum, in general, most comments don’t even receive enough votes for the karma balance to be statistically significant. (Otherwise, the variance in karma would be significantly higher, assuming reddit-like distributions.)
If by “lack of local sanity” you are referring solely to people who have voted on your previous comment, then do notice that you have very little information from which to inform a prior on how many people have read your comment, how many people have voted upon it, and the distribution of votes thereof. Whatever alien calculus you have that converts raw net karma to a measure of sanity seems horrifically flawed and horrendously underinformed.
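To put a rough number on the “not enough votes” claim, here is a minimal sketch (my own illustration; the vote splits are invented, since only net karma is visible) of an exact binomial test on a handful of ±1 votes:

```python
# Minimal sketch (my own illustration; vote splits are invented, since only
# net karma is visible): an exact one-sided binomial test of whether a small
# number of votes says much about readers in general.
# H0: a random voter is equally likely to upvote or downvote.
from math import comb

def one_sided_p(upvotes, total_votes):
    """P(at least this many upvotes out of total_votes fair-coin votes)."""
    return sum(comb(total_votes, k) for k in range(upvotes, total_votes + 1)) / 2 ** total_votes

# A comment sitting at +2 could be, say, 4 upvotes and 2 downvotes:
print(one_sided_p(4, 6))    # ~0.34 -- nowhere near any conventional threshold
# +17 from 20 up and 3 down would be strong evidence, but few comments get that many votes:
print(one_sided_p(20, 23))  # ~0.0002
```

Whether this is even the right model of voting is debatable (see the reply below), but it shows why a small net score by itself is weak evidence.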
This seems to be an incorrect application of the concept of statistical significance.
You should thank the people who explained your comment for you, when you should have explained it yourself. Fixed… perhaps?
paper-machine is currently winning by a large margin, but the competition will continue for another day or so!
Your comment was hard to read, asserts-but-does-not-argue that days are the “natural” interpretation (“Schelling point”), tells us to “trust the augur” without any real-world or in-universe data showing that augurs are better than horoscopes, contains “I approve of your trolling” and goes against local verities and tastes. Lightning hitting the C is surprising, but there is a simpler explanation for that (see Alejandro1’s reply). There are quite a few reasons not to upvote you.
(IDNDV)
No. You got two votes despite the comment being rambling and hard to follow. That is an indication that you couldn’t be bothered taking the time to express whatever your point was clearly, so that the reader doesn’t have to try to piece it together.
(You should also try to explain the +17 if you want to win the competition! ETA: I meant, explain how the +17 is or isn’t evidence of lack of sanity, but an explanation of the phenomenon itself would of course be instrumental in attempting such a larger explanation.)
People like witty remarks and don’t like to see them deflated, even when the deflation is totally warranted. This is a big problem in debates, since someone can make a totally specious but well-timed criticism or remark, provoke laughter, and get the audience on their side even if the criticism is later rebutted.
(Tangent, moderate spoiler warning:) I noticed that in myself while watching the movie “Silent Hill”. During the whole ‘witch hunts and self-justifying faith are bad mmmkay’ climactic rant it’s easy to forget that even the demon admits that the only reason the entire town hasn’t been mauled/raped to death by symbolic guilt-constructs is because of their “blind faith”. (And “God is not here”: so they could have had “blind faith” in toaster ovens and that’d have protected them from the demon too? Possible but rather unlikely.) But I guess if one is already applauding a biasedly-informed woman (she’s literally getting her scant info from the deceiver himself because apparently no one taught her that dealing with transhuman intelligences with unknown preferences is an insanely stupid idea) for helping deliver vengeance to an entire dismal, desperate town’s worth of people then it doesn’t cost one’s conscience much to let one’s epistemic standards slip a little as well.
(ETA: And as long as we’re being charitable, I can’t really blame Christabella for having the cop lady burned to death either. Whether Sybyll knew it or not she was clearly in league with the demon/devil. Burning her is even mildly charitable, giving her a chance to repent, much better than pronouncing her anathema and casting her out into guilt-hell to have her skin ripped off by Pyramid Head. And considering that in-universe the devil clearly exists it’s mildly hard to blame them for a premature demon-spawn burning now and then. (Come to think of it didn’t Dahlia implicitly admit that her daughter was demon spawn, by consenting to the burning? Wasn’t Christabella all ‘just name the father already’ and Dahlia was all ‘lol no thanks I’d rather the little girl burned’? Maybe Alessa really was a witch; she certainly was in league with the devil at a later point, and had a nasty need for vengeance to boot, and the evidence of her innocence that we have we got from the freaking devil herself.) But now we’re getting into politics.)
ETA2: And while I’m at it, Dagoth Ur was the good guy (not even just relatively good, actually good), Azura is clearly a manipulative bastard of epic proportions and “the Nerevarine” is just one of her pawns. Has anyone else compiled all the evidence for this already?
There isn’t a competition.
No idea one way or the other. I didn’t read it. The script in the context didn’t pattern match to the kind of thing that would interest me − 17 votes or no. Something about Cs getting struck by lightning.
...
Just read it. It was ok. Somewhat of an insight which I can at least imagine some other people finding interesting. It’s a quote and people’s standards seem to go out the window when it comes to quotes. I’m only mildly surprised that it got 17.
Has anyone collected the top-rated comments dealing with math, decision theory, and other technical topics? I’d like to see a site-wide “bikeshedding index” based on the ratio between them and the top-rated quotes.
.
“100 days to live” is only the sensible, natural interpretation for “lightning struck the C in Caesar” after you’ve already heard someone suggest it. There’s this thing called priming, you see...
--Mencius Moldbug
Remember sources please; “How Dawkins got pwned (part 7)”, 8 November 2007
You have a thing for Moldbug too, don’t you? ^_^
This sounds like bad advice. In Moldbug’s application of it, for example, making things “obvious” corresponds to making bad arguments—arguments that, in some alternate reality, possibly made of straw, would correspond to some possibly straw person who found the argument very obvious. And then you say “well, obvious argument #1 is awful, so by process of elimination let’s go with obvious argument #2! Q.E.D.”
Human knowledge and human power meet in one; for where the cause is not known the effect cannot be produced. Nature to be commanded must be obeyed; and that which in contemplation is as the cause is in operation as the rule.
Francis Bacon
Wow, I’m surprised this had not been posted before. Good catch.
I was very surprised, too. I’d found a similar quote—one that that I’ll put in a top-level comment—and checked for the Bacon quote.
--Steven Kaas
-- Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance
Use only that which works, and take it from any place you can find it.
--Bruce Lee
That seems rather applause-lighty. The reversal is abnormal; who would say “Use some things that don’t work”? Maybe in some traditionalist cultures “Resist the appeal of using things that work but come from unworthy places” would sound wise, but on LessWrong it would likely get stares.
Bruce Lee was a martial artist, and martial arts is a field where a lot of people go by tradition rather than checking on what works.
awesomely relevant video: Joe Rogan on MMA and Kung Fu
I think many cited quotations sound applause-lighty. They are meant to be pithy encapsulations of LW themes, after all. And I don’t think that’s necessarily a problem; applause lights are a problem for things that might be taken as reasoning, like posts.
“Use only that which works” is obvious enough to be unhelpful, but “take it from any place you can find it” was pretty novel in the context in which he proposed it, and still is to a lot of people in a lot of domains.
The existence of the Traditional branch of Jeet Kune Do (as opposed to the Concepts branch,) which exclusively teaches the martial art as Bruce Lee practiced it at the time of his death, is testament to the strength of humans’ tendency to behave counter to this advice.
--Bruce Schneier
--Steve Sailer
Good chop, bro.
In short, they made unrealistic demands on reality and reality did not oblige them.
Cory Doctorow talking about DRM, but I think there are some wider applications.
Reminiscent of one of my favorite Bruce Schneier quotes.
Second slide of this powerpoint by Stanford’s Persuasive Tech Lab.
--Peanuts (Nov. 23, 1981) by Charles Schulz
--Mencius Moldbug
Please remember sources; this is from “How I Stopped Believing in Democracy”, 31 January 2008.
Is it conventional to add sources when the quote comes from an online source? Sorry, I didn’t know that was expected, since it wasn’t in the posting rule set. Will remember to add sources in the future.
BTW gwern sometimes your attention to detail is as unnerving as it is helpful and impressive.
I thought it was, but then, I may be interested only because it makes it easier in the future to track down citations if there is a title and URL (and because if I click on a URL, it goes into my archive bot).
It’s just time-wasting… Heck, I time-waste on my time-wasting; I’m supposed to be adding citations on how people are biased against spaced repetition even when their scores are better with SR to the relevant article of mine.
--1 Corinthians 15:26
(I wonder what Eliezer would’ve made of it—as far as I know, he never read Deathly Hallows and so never read about the tombstone.)
Well, he knows about the Hallows themselves via wiki-readings. I think he would have written the story the way it is whether he knew about the tombstone or not, but I put fairly high probability that he does know about the tombstone and how fantastically awesome an endcap it’s going to be on the story.
Mm. Maybe: http://predictionbook.com/predictions/5122
I think there’s a close to 100% chance that the tombstone will be alluded to, because even if Eliezer DIDN’T know about it before, he will by the time the story ends (because I will have questioned and informed him about this), and after that I just can’t imagine him making such a terrible mistake as to NOT include the tombstone’s quote.
I do think a simple bet of “did he already plan this?” is feasible. We can just ask him. (I put odds at 75%).
(By “close to 100%” I mean maybe 95. I can think of scenarios where he hadn’t originally planned for the tombstone and where it would be hard to integrate it)
Oh fine: http://predictionbook.com/predictions/5124 But you’d better ask him now!
I was already aware of the quote. It’s on James and Lily’s tombstone (in canon).
I see; but the predictions/questions weren’t whether you were aware of it at all, but whether you planned to incorporate it ex ante, and whether you did ex post.
If it’s incorporated it will have been planned beforehand.
You and your silly hatred of spoilers. (The recent experimental evidence, BTW, suggests spoilers are not harmful but helpful for enjoyment.) But I guess that statement works.
For what it’s worth, there are stories where I’ve appreciated going in with no knowledge except for some reason to think I’d like it (the movie Hugo 3D is a recent example, for Mieville’s Un Lun Dun I just had a reasonable guess about genre).
I think I lost some of the impact of A Deepness in the Sky because I knew what Focus was before I started reading.
I think whether spoilers are harmful varies among works and among readers. (For example, ‘finding out how it ends’ was the only reason why I finished reading Digital Fortress by Dan Brown rather than throwing it in the garbage bin right after the first couple chapters; if I had already known the ending I would likely not have enjoyed it at all (except possibly for laughing at it).)
This is an example of when spoilers are good, right? Every person saved from reading Dan Brown...
I’m confused by what this is an example of.
Had you known how it ended, would you have finished reading the book? If so, why? If not, how would that have been harmful?
1) Probably not; 2) that would have taken away from me the enjoyment of reading the book to find out the ending. (I was quite bored that day, and I didn’t have my computer or my music player or anything else to do with me.)
(nods) OK, sure… if the most enjoyable thing I can do right now is read a book that isn’t enjoyable to read, in order to get the enjoyment of reading the book and being surprised by its ending, then telling me the ending is harmful.
Agreed.
I have trouble imagining actually being in that state personally, but of course people vary.
The existence of bookshops in train stations and airports selling badly-written suspense novels suggests this is a common state.
Well, I too have bought a number of books in airports and train stations over the years, and I don’t see how the fact that airports and train stations sell the books they sell provides evidence to choose between the theory that army1987’s state is common, and the theory that my state is common. (Of course, the reality could also be both, or neither.)
If I had been thinking better I would have specified “did he know” rather than “did he plan” so that we could resolve the issue. (I think there is at least a 30% chance one (if not both) of us will have forgotten this wager by the time the reveal happens)
That’s what PredictionBook is for. So far I have a good record for long-term use of it...
–Elizabeth Anscombe, [An Introduction To Wittgenstein’s Tractatus](http://www.archive.org/details/introductiontowi009827mbp) (1959); apropos of a recent Scott Sumner blog post
Another great quote by Sumner in that same post:
Science isn’t just a job, it’s a means of determining truth. Methods of determining truth that aren’t trustworthy in the laboratory don’t become trustworthy when you leave it. There is no doctrine of applying scientific methodology to every aspect of one’s life, you either follow trustworthy methods of investigation or you don’t, and “follow trustworthy methods of investigation” is the core of science.
~Desertopa, TVTropes Forum
There are types of valid evidence that aren’t scientific. In particular science is also partially a social process, whereas you trying to find the truth for yourself is not.
Slavoj Žižek, Violence, emphasis added. Admittedly not the most clear elucidation of the subject of how urgency (fabricated or otherwise) should affect ethical deliberation, but see also his essay “Jack Bauer and the Ethics of Urgency”—if you’re into that sort of thing.
Lance Parkin, Above us only sky
This is less a rationality quote than a “yay science” quote, but I find that impressive beyond words. For millennia that was a huge and frightening question, and then we went and answered it, and now it’s too trivial to point out. We found out where the sun goes at night. I want to carve a primer on cosmology in gold letters on a mountain, entitled something in all caps along the lines of “HERE IS THE GLORY OF HUMANKIND”.
Is it excessive nitpicking to point out that the daily disappearance and reappearance of the Sun has to do with the Earth’s rotation on its axis, not its rotation about the Sun? (Probably not, as the first comment on Parkin’s blog posting points out the same.)
Is it excessive nitpicking to note that not only did he misuse the word “ultimate”, he used it to mean basically the opposite of what it actually means?
No. Thank you for inspiring me to look up the word and learn its true meaning.
Do you mean cosmology or astronomy?
Both. Cosmological content: “stuff goes around other stuff”; astronomical content: “this applies to the stuff we sit on”; philosophical content: “finding this out proves we are awesome”; gastronomical content: “here’s a recipe for cake to celebrate”.
lolwut
The truth is common property. You can’t distinguish your group by doing things that are rational, and believing things that are true.
Paul Graham, Lies We Tell Kids
It would seem that if no other humans are behaving rationally and your group is behaving rationally, then even Sesame St could tell you which of these things is not the same.
If no other groups of humans are behaving as rationally as yours is, then it’s likely no other humans are capable of easily identifying that your group is the one with the high level of uniquely rational behavior. To the extent that other groups can identify rational behaviors of yours, they will have already adopted them and will not consider you unique for having adopted them too.
You can signal the uniqueness of your group by believing and doing things that are both rational and unpopular, but to most outsiders this only signals uniqueness, not rationality, because the reason such things are unpopular is that most people don’t find them to be obviously rational. And the outsiders are usually right: even though they’re wrong in your particular actually-is-rational case, that’s outnumbered by the other cases which, from the outside, all appear to be similar arational group-identifying behaviors and rationalizations thereof. E.g. at first glance there’s not a huge difference between “I’m going to get frozen after I die”, “I don’t eat pork”, “I avoid caffeine and hot drinks”, etc.
Not actually true. I’d like it to be!
Damn skippy.
I’d even settle for the above being true of my group with respect to other groups.
Depends on how immediate and/or dramatic the benefits of the rational behavior are.
then you’re probably insanely wrong.
Why do you say that? That doesn’t sound true. Humans are monkeys—I should be surprised if a group of monkeys acted perfectly rationally. I suggest that, however insane I may be, this issue is straightforward.
My original comment was meant to be a mildly elaborate adianoeta that is more than the sum of its parts (except that the addition of “insanely” was a regrettable and meaningless rhetorical flourish). So if I seem straightforwardly wrong then maybe something was lost in interpretation or I just didn’t do it right.
Trust in me, just in me. Dude people are still doing karmassassination! Even without voting buttons on profile pages. Crazy.
Assuming infinite cognitive resources or something? What’s your standard?
No? You don’t even try to be trustworthy here!
Of course I do. I barely ever lie here in the morally relevant sense of the word lie. I’m not even sure if I’ve ever purposefully lied here. That would be pretty out-of-character for me.
The evaluation of whether it is sensible to “trust in you, only you” isn’t based only on whether you are lying. When you aren’t even trying to communicate on the object level the interpretation of your words consists of creating a probability distribution over possible meanings vaguely related to the words that could correspond to what you are thinking. I can’t trust noisy data, even if it is sincere noisy data. I mean, given the sentence “Trust in me, just in me” I only had 60% confidence that you meant “I attest that the next sentence is veritable” (more now that you are talking about how you never lie).
Trustworthiness isn’t just a moral question. Choosing what to trust is a practical question.
For what it is worth of course I believe that you are likely experiencing karmassassination. I noticed that some of your non-downvote-worthy comments are taking a hit.
It takes the assassin a few more clicks. But if they want to assassinate I don’t expect that it would stop them. Actually that feature removal is just damn annoying. I often read through the comments of users that I like/respect/find-interesting. Naturally I’m even more likely to want to vote up comments from such a stream than I am when reading the general recent comments stream. So now I have to go and open up each comment specifically and vote it up.
Upvoted, good point re noise and trust.
I’m so glad that “re” is a word.
Does it matter? If the standard chosen is such that humans behave perfectly rationally according to it then they are completely free of bias and ‘rational’ has taken on a bizarre redefinition to equal to whatever humans are already achieving. The time to be particular about whether rational means ‘optimal use of cognitive resources’ or ‘assuming infinite cognitive resources’ is when the behavior in question is anywhere remotely near either.
This idea of rationality is somewhat broken because we lack baselines except those we get from intuitive feelings of indignation or at best expected utility calculations about how manipulable others’ belief states are. We have no idea what ‘optimal use of cognitive resources’ would look like and our intuitions about it are likely to be tinged with insane unreflected-upon moral judgments.
Um I don’t think we significantly disagree about anything truly important and this conversation topic is kinda boring. My fault.
Apparently they stopped after downvoting about 30 comments. Maybe it was too much work.
The role of laziness in preventing bad acts rarely gets enough credit.
Words to model one’s life around. Well, I did anyway. Laziness and fear.
It’s been a while since I read that essay. I can’t tell whether that quotation’s meant to be an example of a lie we tell kids, or one of Paul Graham’s own beliefs! (An invertible fact?)
It is Graham’s own belief.
Yes, a look at it in context in the essay confirms that — but isn’t it a strange belief for someone like Paul Graham to have? It looks false to me (although “truth is common property” is ambiguous). I think a group could make itself very distinct by believing certain truths and doing certain rationally justified things.
I don’t know whether it’s strange for Graham to think this; I haven’t read much of his stuff.
I found the phrase “common property” odd too. I associate the phrase with “commons,” as in tragedy of the commons.
I think LessWrong is distinctive, and part of its distinctiveness comes from its members’ attempts to do the above.
Most groups of weapon developers probably hope to keep their knowledge distinct from that of other groups for as long as they can...
What? I don’t get this. Also, why should weapons developers care whether their products are distinctive? Having better weapons helps, and being better is being distinctive, but so is being worse.
I apologize. I should have been clearer. I mean that if a group of weapons developers, such as, for instance, the Manhattan Project, discovers certain critical technical data necessary to their weapons, such as, for instance, the critical mass of Pu-239, they will often prefer that these truths not spread to other groups. For as long as they are able to keep this knowledge secret, it is indeed a set of truths that makes this set of weapons designers distinct from other groups.
Oh, I see now. Thanks for clarifying.
But if other developers are incorrect, then you’d want to be correct; and if other developers are correct, you’d still want to be correct. Put game-theoretically, accuracy strictly dominates inaccuracy. By contrast, isn’t distinctiveness only good when it doesn’t compromise accuracy?
--Jean de la Bruyère
But we all die, so that makes death alright?
That is one source of acceptance of death.
Prompted by Maniakes’, but sufficiently different to post separately:
Daniel Dennett, “Get Real” (emphasis added).
Eliezer Yudkowsky
(Some discussions here, such as those involving such numbers as 3^^^3, give me the same feeling.)
I don’t understand that quote. A good Bayesian should still pick the a posteriori most probable explanation for an improbable event, even if that explanation has very low prior probability before the event.
I suspect the point is that it’s not worthwhile to look for potential explanations for improbable events until they actually happen.
I think it’s more than that—he’s saying that if you have a plausible explanation for an event, the event itself is plausible, explanations being models of the world. It’s a warning against setting up excuses for why your model fails to predict the future in advance—you shouldn’t expect your model to fail, so when it does you don’t say, “Oh, here’s how this extremely surprising event fits my model anyway.” Instead, you say “damn, looks like I was wrong.”
I don’t, however, think it’s meant to be a warning against contrived thought experiments.
Absolutely: I strongly recommend you not try to explain how 3^^^3 people might all get a dustspeck in their eye without anything else happening as a consequence, for example.
It’s Yudkowsky. Sorry, pet peeve.
Fixed.
Is Eliezer claiming that we aren’t living in a simulation, claiming that if we are living in a simulation, it’s extremely unlikely to generate wild anomalies, or claiming that anything other than those two is vanishingly unlikely?
Sorry to be so ignorant but what is 3^^^3? Google yielded no satisfactory results…
http://en.wikipedia.org/wiki/Knuth_arrow
TheOtherDave’s other comment summed up what it means practically. Also, see http://lesswrong.com/lw/kn/torture_vs_dust_specks/.
Ah thank you, that clarifies things greatly! Up-voted for the technical explanation.
A number so ridiculously big that 3^^^3 * X can be assumed to be bigger than Y for pretty much any values of X and Y.
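For anyone who wants the notation spelled out rather than just linked, here is a minimal runnable sketch of Knuth’s up-arrow notation (my own illustration; the `up` helper is hypothetical, not from the linked pages):

```python
# Minimal sketch of Knuth's up-arrow notation, "a (n arrows) b" (my own illustration).
def up(a, b, arrows=1):
    if arrows == 1:
        return a ** b          # one arrow is ordinary exponentiation
    if b == 0:
        return 1               # base case of the recursion
    return up(a, up(a, b - 1, arrows), arrows - 1)

print(up(3, 3, 1))  # 3^3  = 27
print(up(3, 3, 2))  # 3^^3 = 3^(3^3) = 3^27 = 7,625,597,484,987
# 3^^^3 = 3^^(3^^3): a power tower of 3s about 7.6 trillion levels tall.
# It is far too large to compute, which is exactly why it is used in the
# torture-vs-dust-specks thought experiment.
```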
Bloody p-zombies. Argh. Yes.
“A Confucian has stolen my hairbrush! Down with Confucianism!”
-GK Chesterton (on ad hominems)
Lucio Russo, The Forgotten Revolution: How Science Was Born in 300 BC and Why it Had to Be Reborn
Some people will always have to take most of natural science on authority. Sure you can make that sound bad, but to me it sounds like “children take 9*9=81 on authority! spoooooky.”
Ye gots to wiggle yer fingers when ye say it.
A “preview” electronic version of this book is available through the translator’s website here: http://www.msri.org/~levy/files/russo/
I enjoyed the book a lot. It’s true that the author reads Hellenic scientists in the most favorable possible light while reading Renaissance scientists in the least favorable possible light. But he gives extensive quotations from the available sources, so that you can judge for yourself whether his interpretations are stretched.
--(The Science of Discworld, Ebury Press edition, quotes from pp 41-42)
— Bertrand Russell, History of Western Philosophy (from the introduction)
A stoic sage is one who turns fear into prudence, pain into information, mistakes into initiation, and desire into undertaking.
Nassim Taleb
Terry Pratchett
--Aristotle
--B.F. Skinner
Steven Pinker, Words and Rules
Invertible fact alert: I can’t tell if Pinker means that as (mostly) a good or a bad thing!
I take it as ha ha only serious. Pinker knows that people are generally appallingly inaccurate and believe untruthful things, and that psychology is right to throw out every other belief and only depend on what it has rigorously verified; but he also knows the rigorous verification has been done on weird subjects and so psychology has thrown out a lot of correct beliefs as well. Accepting this tension is the mark of an educated man, as Aristotle says.
Given the history of psychology as a field, I’d assume he’s praising the merits of experimental evidence.
“When The War Came”, by The Decemberists
(from memory, will fix any errors later)
http://en.wikipedia.org/wiki/Nikolai_Vavilov
“While developing his theory on the centres of origin of cultivated plants, Vavilov organized a series of botanical-agronomic expeditions, collected seeds from every corner of the globe, and created in Leningrad the world’s largest collection of plant seeds. This seedbank was diligently preserved even throughout the 28-month Siege of Leningrad, despite starvation; one of Nikolai’s assistants starved to death surrounded by edible seeds.”
Thank you kind sir.
Can you elucidate the connection to rationality?
A few Google searches resolved this question for me, and proved very interesting besides. Vavilov was a Soviet botanist focused on the cultivation of efficient seeds to mitigate hunger. In World War Two, Vavilov’s Leningrad seedbank came under siege by the Nazis, who apparently wanted to steal/destroy the seeds. Considering the supplies vital to Russia’s long-term survival, several of the scientists swore oaths to protect the seedbank against German forces, starving foragers, and rats.
They succeeded in doing so. The scientist-guards were so loyal that many of them died of starvation despite being in a facility full of edible seeds, as well as potatoes, corn, rice, and wheat. The seedbank endured the siege and was replenished after the city was liberated.
Vavilov himself did not live to see the victory of his researchers, as he had been sent to a camp thanks to his disapproval of the scientific fraud of Lysenkoism and died (ironically, of malnutrition) before the war ended.
The Pavlovsk seed bank is at risk, but not yet doomed.
That is awesome. Thanks.
At first glance, it looks like a clear case of Bayesians vs. Barbarians to me.
Can? Of course. “Will?” Less likely.
Less Wrong has a lot of clever Russians, maybe they will.
-- Milton Friedman
One of these things is not like the others, one of these things does not belong.
There are valid quibbles and exceptions on both counts. Some breeds of cats make vocalizations that can reasonably be described as “barking”, and water will burn if there are sufficient concentrations of either an oxidizer much stronger than oxygen (such as chlorine trifluoride) or a reducing agent much stronger than hydrogen (such as elemental sodium).
In the general case, though, water will not burn under normal circumstances, and most cats are physiologically incapable of barking.
The point of the quote is that objects and systems do have innate qualities that shape and limit their behaviour, and that this effect is present in social systems studied by economists as well as in physical systems studied by chemists and biologists. In the original context (which I elided because politics is the mind killer, and because any particular application of the principle is subject to empirical debate as to its validity), Friedman was following up on an article about how political economy considerations incline regulatory agencies towards socially suboptimal decisions, addressing responses that assumed that the political economy pressures could easily be designed away by revising the agencies’ structures.
Relevant.
I was actually thinking in terms of ‘cats can deliberately meow in an annoying fashion (abstract) like human infants and this behavior seems perfectly modifiable, so a transhumanist could have a decent reason for preferring that cats bark rather than meow; and this is really stupid anyway, since we can change cats easily—we certainly can demand cats bark—but we can’t change physis easily and can’t demand water burn’.
pfsch. You can burn water if you add salt and radio waves. Or if you put it in an atmosphere containing a reactive fluorine compound. Etc etc etc.
That since their preference harms nobody (apart from unadopted cats) and the utility function is not up for grabs, I have no grounds to criticize them?
The preference alone is mostly harmless. When the preference is combined with the misapprehension that the preference can be fulfilled, it may harm the person asserting the preference if it leads them to make a bad choice between a meowing cat, a barking dog, or delaying the purchase of a pet.
If the preference order were (1. Barking Cat, 2. Barking Dog, 3. Meowing Cat, 4. No Pet), then the belief that a cat could be taught to bark could lead to the purchase/adoption of a meowing cat instead of the (preferred) barking dog.
Likewise, in the above preference order, or with 2 and 3 reversed, the belief in barking cats could also lead to the person delaying the selection of a pet due to the hope that a continued search would turn up a barking cat.
The problem is magnified, and more failure modes added, when we consider cases of group decision-making.
“I would like to have a cat, provided it barked” states that U(barking cat) > U(no cat) > U(nonbarking* cat). Preferring a meowing cat to no cat is a contradiction of what was stated. The issue you raise can still be seen with U(barking cat) > U(barking dog) > U(no pet) > U(nonbarking cat), however—a belief in the attainability of the barking cat may cause someone to delay the purchase of a barking dog that would make them happier.
*In common usage, I expect that we should restrict it from “any nonbarking cat” to “ordinary cat”, based on totally subjective intuitions. I would not be surprised by someone who said “I would like an X, provided it Y” for a seemingly unattainable Y, and would not have considered whether they would want an X that Z for some other seemingly unattainable Z. I think they just would have compared the unusual specimen to the typical specimen and concluded they want the former and don’t want the latter. This is mostly immaterial here, I think.
I stand corrected.
That’s strictly ruled out by the wording in the quote. While people often miscommunicate their preferences, I don’t see particular evidence of it there, or even that the hypothetical person is under a misapprehension.
To take it back to metaphor: the flip side of wishful thinking is the sour grapes fallacy, and while the quote doesn’t explicitly commit it, without context it’s close enough to put me moderately on guard.
Here is the full article from which the quote was taken: http://www.johnlatour.com/barking_cats.htm
Bertrand Russell
-- Dave Gottlieb
What about people who want to reject the claims of religion but still want warm fuzzies? Maybe atheism wouldn’t get such a bad rap in the public eye if it felt more welcoming for people who want truths but also want the sense of community provided by religion.
Paganism? It seems like one of the more accepting groups, and you don’t need to actually believe to celebrate/be in a community.
Interesting idea, but identifying as pagan will probably raise as many eyebrows as atheism, if not more. I think it would be better if there was more “The universe isn’t concerned about us, so it’s our job to be concerned about each other” among the atheist community, or something else that sounds welcoming and friendly.
Humanism?
So true, I totally think that way.
But warm fuzzies are bullshit.
.
I haven’t once in my life made a good decision based on feel-good thinking. Naturally I may be an outlier, but overall, models of the world that “feel good” are generally wrong models. I value having an accurate map even if it isn’t useful (yes, having a wrong map can be instrumentally valuable, and a positive outlook actually often is).
Also, warm fuzzies are one of the easiest ways to manipulate someone. When someone tries to shower me with them I nearly instinctively try to counterbalance them. Hm, now that I think of it, that pattern matches to being a cynic.
I would have expected things to go your way every now and then simply by chance.
.
Sure.
C.S. Lewis, Introduction to a translation of Athanasius: On the Incarnation
If I may continue it:
From http://www.worldinvisible.com/library/athanasius/incarnation/incarnation.p.htm
Not likely to be much help if the new outlook is built upon the old in such a way that the mistakes of the old outlook are addressed by the new, but the mistakes of the new were not raised to the point of being able to be addressed within the old.
True, on the other hand, I suspect people around here tend to massively overestimate how often that happens.
Or, you know, some new books with a fresh outlook. Just saying.
Not written yet.
--H.L. Mencken
-Winston Churchill
The rest of the story is interesting; from http://www.winstonchurchill.org/learn/speeches/quotations
An apt comparison would be Napoleon’s reconstruction of Paris with broad straight streets, I think. (Code is Law.)
“We are shaped and fashioned by what we love.” — Goethe
Songs can be Trojan horses, taking charged ideas and sneaking past the ego’s defenses and into the open mind.
John Mayer, Esquire (the magazine, not the social/occupational title)
-- Eric Raymond
Don’t shut up and do the impossible!
I’m not sure who originally said this but I vaguely remember the quotes from law school.
I like to say “there are such things as dawn and dusk, but the difference between night and day is like …”—and here I pause just long enough for the audience to mentally anticipate me—“the difference between night and day.”
-Stephen Crane
More accurate:
A man said to the universe: “Sir, I exist!”
The universe says nothing.
Right, because Eliezer Yudkowsky wasn’t addressing it.
groan
There’s no Universe; there’s only a set of things which Eliezer Yudkowsky allows to exist !
Note from a “sympathetic outsider”: I know you are joking, but the sorts of things like this subthread sometimes come across more creepy than funny.
There’s a whole page of them too!
I was making an allusion...
-Kvothe, The Name of the Wind
osewalrus
Thornton Wilder, The Ides of March.
Yvain, 2004, source
Edit: authorial instance specified on popular demand.
The next sentence is
Skeptics will tell you that yes, it did. Belief that the Sun needs human sacrifices to rise in the morning killed their beloved big brother, and they’ve had a terrible hatred of it ever since. And they must slay all of its allies, everything that keeps people from noticing that Newton’s laws have murder-free sunrise covered. Even belief in the Easter bunny, because the mistakes you make to believe in it are the same. That seems like a pretty good reason to be concerned with it.
Indeed. In fact there’s a website: What’s the Harm? that explains what damage these beliefs cause.
Victims of Moon Landing Denial
Marvellous.
That actually seems to be a victim of belief in moon landing by people who have landed on the moon.
I was impressed when a skeptic source (sorry no cite) admitted that most people who read astrology columns do it for entertainment rather than for guidance in how to live their lives.
I don’t know why some people and groups damp out most or all of the ill effects of their arbitrary beliefs, while others follow arbitrary beliefs to the point of serious damage or destruction. I don’t think I’ve seen this discussed anywhere.
It sounds like the line of reasoning of someone who is a horrible, horrible thinker and who should stop speaking forever and maybe avoid procreation. (ETA: the aforementioned “should” shouldn’t be taken too literally)
More accurately, Yvain-2004
Is it more accurate to put it thus because Yvain-2012 disagrees with Yvain-2004 on this issue?
I don’t know if there’s enough of a specific, meaningful claim there for me to disagree with, but Yvain-2012 probably would not have written those same words. Yvain-2012 would probably say he sometimes feels creeped out by the levels of signaling that go on in the skeptical community and thinks they sometimes snowball into the ridiculous, but that the result is prosocial and they are still performing a service.
(really I can only speak for Yvain-2011 at this point; my acquaintance with Yvain-2012 has been extremely brief)
ETA2: I really dislike skeptics. Do other people really dislike skeptics? If so maybe I like you! (This comment was previously a lot more aggressive in tone; I said I wanted to cast not-particularly-harmful magical fireballs at skeptics because I hated them. I probably don’t actually even know what hatred is so I decided to edit my comment to more accurately reflect my emotions.)
ETA: I’m guessing my testosterone levels are up ’cuz I just seriously kicked ass at ultimate frisbee, but still.
It’s a tragedy that you can’t prove people wrong when they’re right, isn’t it?
Yes! That sure is a tragedy. Cuz at that point an inquisition is the only option, ya know? Everybody loses.
*cough* Million Dollar Challenge *cough*
Dean Radin thinks there’s something real going on with Ganzfeld telepathy experiments, and runs the numbers on what it’d cost to set up a satisfactory demonstration based on the assumption that people are getting a 32% success rate instead of the statistically expected 25% without any known physical causation. He comes up with an experiment that takes 14 years to run and costs somewhat in excess of $1 million, and concludes that it’s not worth doing for a million dollar prize but would probably be for ten million dollars.
He could make additional money from bets. After getting preliminary results, he could ask for donations. He could just contact the X prize foundation and possibly some other skeptics’ organizations and ask for ten million rather than one, showing a slightly more detailed version of his post (especially how blinding is done). I expect they would agree.
Fun fact: “radin” is French for “penny-pincher”.
It’s like we are playing chess and I play 1.Nf3 and you play …d5 and you annotate it as 1.Nf3?? d5!! . Am I misinterpreting the point of your comment?
I think you are. Here it is without the funny business:
As I’m sure you already know, if you really think you have paranormal powers, you can use them to win a huge amount of money and fame. I’m sure you could come up with some good use for the money, and the skeptics would have to admit they were wrong.
(P.S. I was taking the comment I was responding to seriously. Did you mean it that way?)
The positive results in parapsychology tend to be like 35% hit rates where chance would give a 25% hit rate. It isn’t really financially feasible for researchers to gather enough data for a p value of 0.000001 (which, quite reasonably, is what Randi requires—who wouldn’t take a 5-in-1000 shot at a million bucks?).
Which isn’t to say I think psi is the best explanation for the p = 0.005 results out there, just that the lack of challengers for the Randi prize doesn’t say much about academic parapsychology.
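For concreteness, here is a rough back-of-envelope sketch (my own illustration, not Radin’s actual calculation) of how many trials a simple one-sample normal approximation suggests are needed for hit rates like these to reach Randi’s p < 0.000001 threshold:

```python
# Back-of-envelope sketch (my own illustration, not Radin's calculation):
# trials needed for a true hit rate above the 25% chance baseline to reach
# p < 1e-6 (one-sided) with 90% power, via a one-sample normal approximation.
from math import sqrt
from statistics import NormalDist

def trials_needed(p_true, p_chance=0.25, alpha=1e-6, power=0.90):
    z_a = NormalDist().inv_cdf(1 - alpha)  # significance threshold
    z_b = NormalDist().inv_cdf(power)      # power requirement
    num = z_a * sqrt(p_chance * (1 - p_chance)) + z_b * sqrt(p_true * (1 - p_true))
    return (num / (p_true - p_chance)) ** 2

print(round(trials_needed(0.35)))  # ~700 trials for a true 35% hit rate
print(round(trials_needed(0.32)))  # ~1,400 trials for a true 32% hit rate
```

A thousand-odd supervised laboratory sessions is the kind of volume that turns into the multi-year, seven-figure estimates mentioned above.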
I meant it seriously, though I’ll probably regret the strong language and it makes me kinda sad that skeptics are so good at inadvertently trolling me. (ETA: Yup, regretted it. Edited comment to be less in-your-face.)
And what I meant with my chess comment, without the funny business, is that I’m aware of the Randi prize, aware that no one has claimed it, and still assign a decent amount of probability mass (aghhhh that doesn’t actually work, probability isn’t fundamental like that, but whatever) to the magick hypothesis. So I guess maybe I was annoyed at the implication that I hadn’t thought these things through carefully already. I apologize for being brusque.
Inadvertent trolling is impossible; trolling is a matter of intent. If your own reaction is similar to being trolled, it is the genuine emotion trolls try to create through dishonest means... it also makes it seem like you are trying to paint yourself as the victim, which would make a real troll happy but a non-troll sad. Well, it would also make someone who dislikes you happy, and it would make someone who wants two-way communication, without signaling that you are wounded and deserve special reprimands, sad. I don't want to make you angry, I want to have a conversation.
The magick hypothesis can be tested, can’t it? I mean, at this point it seems to me either magick is false, or there is a conspiracy to prevent it from being proven true, like all the White Wolf World of Darkness games have. Some settings have several competing conspiracies to keep the masses ignorant of the nature of reality, some have a big monolithic one. It affects the availability of that hypothesis, for me, at least.
If bits of magic were already discovered and incorporated into science, would that count for the skeptics or for magick? In the way herbalism and alchemy have been eaten up by chemistry, skeptics kept pace with the abilities traditionally handled by wise old people and shared the knowledge with many. If, say, life auras were found, skeptics would want to use that knowledge too. If auras do not respond to machines we can build, we'll train animals: aura-sniffing dogs alongside the gunpowder- and drug-sniffing ones we already have. The fact that it looks like we are using familiars to find poisons does not deter us now.
What, exactly, is it that makes skeptics so infuriating? Is it mostly the way they post a link to someone saying something snarky and then walk away, instead of conversing? I know I find it infuriating when someone says something snarky and acts like the conversation is over. I can't sit through creationism movies sometimes without wanting to punch through the internet when they make a joke about apes having human chests and then leave the subject, implying that because it's funny, the point that humans and apes are related isn't valid, let's move on. rage rage rage...
Hello welcome to less wrong please don’t mind if we obsess and fail to notice you’re upset like aspies it is the culture here. :P
Well, even if Yvain-2012 does not disagree with Yvain-2004, it would be nice to have the year attached. I would like the year-attachment convention for attributing quotes and ideas to become more widespread. Right now, the default assumption everybody makes is that people are consistent over time. In reality, people almost surely change over time, and it is unreasonable to expect them to justify something which their earlier selves said. So it would be really nice if year-attachment were the default.
That would seem to have benefits relative to no further information (except the author’s name), but would the benefits be greater than those afforded by the current convention of citing the relevant work? Or maybe you think people don’t follow that convention enough and they would be more likely to cite something if the thing they had to cite was just a date?
Citing the original work would be the best I suppose. But in relatively informal contexts, like internet forums, it is probably easier for the reader to quickly have a sense of when the given quote was said if the year is attached.
I would say that, for instance, I don't believe most alt-med stuff works, but that is exactly the reason I care that others know this and how we know it. This attitude infuriates me.
The fact is that there are many battles worth fighting, and strong skeptics are fighting one (or perhaps a few) of them. (As I was disgusted to see recently, human sacrifice apparently still happens.) However, I also think it’s ok to say that battle is not the one that interests you. You don’t have the capacity to be a champion for all possible good causes, so it’s good that there is diversity of interest among people trying to improve the human condition.
I totally agree: if it's not your cup of tea, fine. What pisses me off is the line about "if you don't believe it exists, it seems like a good reason to not be concerned with it."
The previous quotation would seem to speak in favor of more strong skeptics.
They often do [scramble the reels] at art houses, and it would seem that the more sophisticated the audience, the less likely that the error will be discovered.
--Pauline Kael, Zeitgeist and Poltergeist; or, Are Movies Going to Pieces?
Related
Morton Blackwell
I might have upvoted the first sentence of this—it’s accurate, at least, if a little unproductive—but out of context the rest is difficult to parse and might imply some seriously problematic attitudes. I take it political technology means something along the lines of “rhetoric”?
Sure, it probably does, on the part of Blackwell. He is a fairly mindless conservative, not much of a libertarian, and he supports central bankers. But this part of his philosophy is worthy. He believes that if you're in a fight for your life, you should fight hard. It's similar to Penn Jillette's advocacy of evangelism, even evangelism that he personally disagrees with. If the stakes are high, then even those on the wrong side of the stakes should value their position enough to fight for it, or change their opinion.
Not necessarily so. Rhetoric is far from the only means of shifting a vote. It is one tool; there are many, many others. In fact, almost any vote can be shifted, given enough effort. Enough effort can be directed at nonvoters to mobilize them, etc...
So, if you’re trying to do something important (such as end slavery, release the victimless crime offenders from prison, etc...) you should learn how to win elections, since that’s easier than engaging in violence commensurate with the level of importance attached to the issue.
At some point, vital issues of life or death decay to violence (Civil War) if there is no political solution forthcoming. The victimized eventually refuse to stay victimized, or worse, the victimizers refuse to settle for too little victimization. (And then you have the Hutus outlawing Tutsi firearm possession, and hacking them apart with machetes.)
-Gene Ray, The Wisest Human
http://www.timecube.com/timecube2.html
--James Anthony Froude
“Hit ’em where they ain’t”. --Douglas MacArthur commenting on his island-hopping strategy in WW2.
Sun Tzu said it better; VI, ‘Weak Points and Strong’:
I think Willie Keeler said it first. (I think I saw Babe Ruth, played by John Goodman, say it in The Babe, but that was a long time ago.)
Attributed to Voltaire (referring of course to the Gregorian calendar reform) though evidence that Voltaire actually said or wrote any such thing seems scanty. Reversed stupidity is not intelligence.
Reversed perceived stupidity. (Sorry, it’s just that I think the Catholics are cool and think maybe we shouldn’t call popes stupid unless we have arguments for it. But yeah, I guess I’m in the minority on that subject.)
Here’s an argument from Tim Minchin that some of the guys at Berkeley showed me.
Also, popes (say that they) believe there is a god and that said god makes the pope infallible. What arrogant dumbasses (or brilliant machiavellian pricks).
Technically, only popes since 1870, when papal infallibility was defined at the First Vatican Council, have claimed that. (It was also the inspiration for Lord Acton's famous quote about absolute power.)
And even more technically, they’ve only claimed that they believe god makes the pope infallible when he’s doing certain things.
So… still no argument.
I see three. The second two refer specifically to intellectual capability (to the extent that they are not subverted by rejecting the premise “popes are not outright evil bastards”).
Do you need a diagram?
In high school when someone was being obtuse my friends would draw a diagram to explain the point; not only did it work, they also somehow always managed to make the diagram look like a penis. Stylin’ bonus.
Belief in God: Correct; since this is a non-obvious question about decision theory they get my props for answering it correctly. Belief that God makes pope infallible: I dunno ’cuz I haven’t read this yet; definitely not sure if that makes them stupid. Tim Minchin’s song: dude politics seriously?
Politics only to the extent that the pope is a politician and his actions are political. That is, of course he is, but I didn't bring up the pope advocacy based on the coolness of Catholics, so I deny responsibility.
I’m surrounded by Catholics and they aren’t cool, trust me.
I meant people like Aquinas and Chesterton and stuff. I suspect most popes would be more like them and less like the people you’re surrounded by.
Going against the evidence when there’s a massive amount of evidence is at best deeply irrational and counterproductive. I don’t know if “stupid” is a better term because it has connotations of issues with intelligence, but in the context of “reversed stupidity is not intelligence” it applies just as well in the form “reversed irrationality is not rationality”.
Either way works, but it’s irrelevant unless said “massive amounts of evidence” exist, which I am skeptical of.
John Hawks
--William F. Buckley
--Deng Xiaoping
-- Democritus
How are those things “convention”? Did all sentience have a pow-wow some time back and decide to experience such and such sensations when confronted with such and such physical things?
I read it as saying they’re conventional in the sense that the lines between categories of sensory experience are drawn by consensus, lacking direct access to the experiences of others.
We of course lack direct access to the atomic-scale world as well, but I imagine that’s the point—atomism was a lot more abstract to the likes of Democritus than atomic theory is to us. The underlying physical reality is in a certain sense abstracted away from us, and we reflect that by talking about physical experiences in a conventional way, but those experiences are still rooted in the reality of atoms and space—or distributions of probability density, if you prefer.
Including three kinds of light sensors, one of them tuned to electromagnetic waves around 700 nm (also called "red"), is a common design convention for humans (for reference, see "Further Random Experiments with Photoreceptors in Furry Legged Things", in Proceedings of the Council of Azathoth, 63 million y. BC)
Three types of cones, plus one type of rod....
woops I indeed forgot the “Enhancements for Night Vision” part...
Yes. (Democritus was at least essentially wrong about the atoms and space thing upon a somewhat naive interpretation, but he was right about convention. And I say this as someone who dislikes Democritus.)
--Aristotle
-William James
— James Clerk Maxwell
Never work against Mother Nature. You only succeed when you’re working with her. —Cesar Milan, quoting his grandfather in Cesar’s Way, a book about rehabilitating dogs
--William Ransom Johnson Pegram
Professor: So, the invalidation of the senses and cognition as a means of knowing reality is a common thread through eastern mysticism and platonic philosophy. We will study the resurgence of these ideas within secular western philosophies starting with the explanation of how it’s impossible to know things “as they are” versus things as they are within the bounds of our minds.
Phone: Beep Beep Beep ♪
Professor: See you on Monday.
(He answers)
Professor: Yes?
Wife: Honey, Angelica is having trouble with her vision. I’m going to use some of the rainy day account to take her to the optometrist.
Professor: Hahah! Actually, vision is merely a sense that supplies the mind with perceptions, interpreting with all biases and forming only-
Wife: Honey.
Professor: Oh. Yes dear. Go ahead.
~Jay Naylor, Original Life
‘withing’. Also, I don't entirely understand: is the point that the professor, contra his students, argues for the reliability and objectivity of vision and then turns around and argues the opposite against his wife?
I think the point is that the professor’s stated philosophical beliefs (that sense-perceptions are an invalid means of knowing reality) contradict his commonsense desire for his daughter to have good vision, and thus his elaborate arguments are shown to be disconnected from reality.
The professor's hypocrisy isn't (non-negligible) evidence for or against the connectedness of his arguments to reality. Instead, it is evidence that there is divergence between the professor's stated beliefs and his actual beliefs (assuming that he cares about his daughter's eyesight, believes an optometrist can help her eyesight, etc...).
True, good point.
“A professor from Columbia University had an offer from Harvard. He couldn’t make up his mind—whether he should accept or reject… So a colleague took him aside and said, ‘What is your problem? Just maximise your expected utility! You always tell your students to do so.’ Exasperated, the professor responded, ‘C’mon, this is serious.’”—Gigerenzer
Dupe and bad paraphrase of http://lesswrong.com/lw/890/rationality_quotes_november_2011/5aq7
Fixed, thanks!
The professor isn’t arguing a different point to his wife than he was lecturing to his students; he’s just responding to her from the viewpoint of the philosophy he is teaching. Interestingly, some of what he says isn’t that different from LW ideas. His problem is that he forgets that his view of reality should add up to normality. Just because people can’t see things directly but must instead look at copies of things within their own brain does not make vision “mere” or mean that fixing his daughter’s eyesight is somehow less important (as his wife amusingly reminds him).
-- Deng Xiaoping
Duplicate, but I like your translation better.
Lack of experience diminishes our power of taking a comprehensive view of the admitted facts. Hence those who dwell in intimate association with nature and its phenomena grow more and more able to formulate, as the foundations of their theories, principles such as to admit of a wide and coherent development: while those whom devotion to abstract discussions has rendered unobservant of the facts are too ready to dogmatize on the basis of a few observations.
-Aristotle, On Generation and Corruption
While this quote isn’t directly about rationality, it reminds me a good deal of Tsuyoku Naritai!.
~ Theodore Roosevelt, The Man in the Arena
(Edit: Just to clarify as some might misinterpret the posting of this to be a knock on rationality, the relevance of this quote is that what counts is trying to solve problem. While with hindsight it’s easy to say how (to pick a mundane example) one might work out the area under a curve once you already know calculus, it’s not so easy to do it without that knowledge.)
George Orwell
--Alfred Korzybski
It seems most common to mix those two modes as convenient.
A Latin proverb, and I think part of Roman law; it means no one should be a judge in their own cause.
-Cleverbot
http://cleverbot.com/cleverness
“A man’s gotta know his limitations.”—Dirty Harry
Former U.S. Presidential Candidate Herman Cain who was quoting from the movie Pokémon 2000.
A Pokémon quote Cain didn’t repeat:
.
Human behavior is predictable if sad. As much as we like to delude ourselves we are rational thinkers we usually tend to fall back on habit and mental shortcuts. You can easily train your brain to overcome this but it does take some work on your part. So it probably isn’t going to happen. But I’ll do my part trying to point out your many and varied shortcomings and you can go along, nodding wisely and congratulating me on my benevolent teachings while all the while planning to ignore me and do things the same way as before. [...]
The family house you grow up in is what you see as normal. That is the definition of shelter in your life. If you encounter a new product, that first price is what you use as a “normal” one. So everything can suffer from your first encounters ( or look better in comparison ). This is why most people won’t look for shelter. They look for a house. Or an apartment. Whatever they are used to. They are not used to finding a way to keep the elements out, they are used to finding a house or apartment. This is the way it is done and any suggestion otherwise is ignored. They might pretend to be open to new ideas but once they find fault with any way other than their own they can claim to be objective while remaining safely cocooned in their normal world.
People don’t look at how to get from one point to another. They don’t look at the need for transportation, they look at the need for a car. So by comparison shopping for cars they ignore scooters or bicycles or public transport or even carpooling. They are used to having a car and that is the only way to do it. People don’t look at how to become secure, they look at how to make money. To them money equals security and there is no other way. They ignore being out of debt, they ignore decreasing dependence on a paycheck ( note I said decrease, not eliminate ). They ignore all but getting money. This is how it was done before and it is how they are going to continue to do it.
~James Dakin, throwing the anchor overboard
Recomputing everything/random things/currently unsatisfying things is expensive and error-prone. The standard move for finding good new ideas may be to look at other cultures. For example, public transport was my first thought (I've lived in large cities in Western Europe). If nobody anywhere has implemented your awesome suggestion, maybe it's a rare problem so few solutions have been tried, maybe everyone got stuck in poor local optima, or maybe it sucks.
I agree that such looking ought to be one’s first recourse, for exactly the reasons you cite. I note, however, that one should look at subcultures for ideas as well, not just at the mainstream cultures of different geographical regions. For example, if I were to look at methods of solving the issue of shelter mentioned in the quote, I would not just look at how regular people lived in the cities of Japan or the countryside of North America, but also at how, say, people in the frugality movement or soldiers in the military dealt with it. Maybe some historical cultures, too, if I could easily find enough information about them.
There is something to be said for the wisdom of crowds. Information cascades are a thing, but the reason they happen is that it’s rational for each individual to go along with the crowd, and you’re not going to form a new equilibrium by yourself.
Following the crowd is often rational, but not so often that you can just state it universally. Sometimes the crowd is simply wrong, and you’re better off buying a bike. People, they crazy.
-- David Hume
If he didn’t use the word “merely,” this would be an even more amazing rationality quote than it already is.
I don’t think it’s very good either way. It’s just a flat statement—presumably it was the thesis or conclusion to some long chain of arguments proving it. But as a quote? It is not very memorable, or witty, or a novel argument or any of the usual criteria I judge our quotes on.
Agreed, it’s not particularly insightful, but I liked it because it’s an easy-to-understand and memorable example of the Mind Projection Fallacy.
John Philips, 1781
Dan Dennett
To explain: a “nominal essence” is just an abstract idea that humans have decided to use to pick out a particular type of thing. This is contrasted with a more Aristotelean view of essence.
Because I’m curious:
Is Dennett's position intended to be a response to the theory of scientific incommensurability, or some other aspect of philosophy of science?
His quote is about conceptual analysis and intuition’s role in philosophy in general, and about where to draw the boundary.
“People are stupid; given proper motivation, almost anyone will believe almost anything. Because people are stupid, they will believe a lie because they want to believe it’s true, or because they are afraid it might be true. People’s heads are full of knowledge, facts, and beliefs, and most of it is false, yet they think it all true. People are stupid; they can only rarely tell the difference between a lie and the truth, and yet they are confident they can, and so are all the easier to fool.” —Zeddicus Zu’l Zorander from the book “Wizard’s first rule” by Terry Goodkind.
In case this gives anyone the false impression that the Sword of Truth series is good, let me advise you: it isn’t. What starts out as a decent premise devolves into the most convoluted argument for Objectivism since Rand herself.
I didn’t notice the Objectivism, since the S&M and scat play drove me away first. The first book was enjoyable.
I’d have to confirm that. It started out decent but I tired of the series a few books in.
At least the first book is written nicely and tells a good traditional story, as long as you don't go any deeper into the meanings of it all, though it gets worse as the series goes on. I find the last book hilarious, as the main character defeats the communists by first beating them in a game of American football. All in all, it's actually decent, if a bit… grim at times, if you only want a fantasy novel.
It could certainly be better, and a little less transparent, but it has some good, useable quotes.
This is a useful quote when one remembers to apply it to oneself. “You know how transparently full of shit everyone else is? Guess how stupid you are yourself.”
See also http://lesswrong.com/lw/2ev/rationality_quotes_july_2010/28gb
I’m a little confused by why people are downvoting the quote. That the book has other problems is not a reason to downvote a genuinely accurate and succinct quote.
Wrong month.
Breaking Bad, Season One Episode Three
Learning proceeds for all in this way—through that which is less knowable by nature to that which is more knowable; and just as in conduct our task is to start from what is good for each and make what is without qualification good for each, so it is our task to start from what is more knowable to oneself and make what is knowable by nature knowable to oneself.
-Aristotle, Metaphysics
--Piet Hein
Lesswrong!
You can edit your own comments, for future reference. There’s an icon in the bottom right of the comment that looks like a pencil over some paper.
Edit: Wait, you crossed out this comment so you must already know that. I am confused!
Getting crossed out is what happens to comments when they’re retracted.
Ah, thank you for clarifying.
I knew that it was possible to edit comments. It was just that it didn’t occur to me at that particular point of time. I saw ‘Retract’ and thought it was my best bet.
Saint Thomas Aquinas, Summa Theologica, Question 83, Article 1
Is that just a theist version of compatibilist free will? Or an assertion that somehow you could create something without being responsible for its future actions, either by creating the policy that decided them or making them dependent on a source of randomness?
As Jack says, it's the "theist version" of compatibilist free will, but you can replace "God" with "the universe" and the point goes through; Aquinas uses God because he's trying to build up a coherent metaphysics. And quite successfully! He gave the "right answer" to the "free will problem" off-the-cuff as if it was no big deal. This raises my confidence that Aquinas is also insightful when he discusses things I don't yet understand, like faith.
Aquinas gives all his answers off-the-cuff as if they were no big deal.
As far as early compatibilists go I prefer Chrysippus.
The former.
Is this down voted because it has the word “God” in it?
It doesn’t really have the form of a “rationality quote”. It’s too long to be quotable, not directly bearing on rationality, and doesn’t give rationality-warm-fuzzies like “that which can be destroyed by the truth should be”.
That said, probably yes.
I think it is hard to dispute that several such statements have been upvoted in recent rationality quote threads.
Well, I downvoted it because it essentially replaces one ungrounded assumption (or rather, the answer to a wrong question,) with another ungrounded assumption. It’s an exercise in rationalization, not rationality.
It's easy enough to infer what he's getting at, but this reminds me of nothing quite so much as Timecube.
I think it would come across as less crazy if it didn’t use the word “fiction.” But then it probably wouldn’t have gotten into MoMA.
Bolesław Prus, “The Pharaoh” (translation mine)
I’m not sure what this is supposed to mean.
I’m guessing that the intent is that each person would like to be happy, but no one wants everyone to be happy.
This seems better supported by the text than my first thought, which was that people want to be happy, but are unwilling to do what is necessary to be happy.
That’s interesting, because I read it as saying that people will object to anyone imposing on them their own idea of what will make them happy. Or to clear up the pronouns, X will object to Y imposing a grand scheme of what Y thinks will make everyone including X happy.
What did the author intend, in context?
The Pharaoh is trying to reform the half-ruined country, and is running into entrenched opinions, vested interests and perverse incentives.
Yeah, I probably could have translated more carefully and with more context.
There's also an interesting passage where he speaks of himself as an Unincentivised Incentiviser:
(No, I don’t necessarily equate rationality with pro-monarchy sentiment. :) Still, a bunch of things in that book have new interest for me after exposure to Less Wrong and its environs).
-- William of Baskerville, Played by Sean Connery, Name of the Rose (1986)
Reverend Theo: Wow, you really do think you’ve become a God.
Petey: I’m just trying to do what I think a god would do if he were in my position.
Schlock Mercenary MONDAY JULY 31, 2006
Order babies dashed upon the rocks. Knock up married women then have your kid killed. Punish folks for not committing genocide.
— Bertrand Russell, History of Western Philosophy (from the introduction, again)
Generalization’d.
Sorry I’m new. I don’t understand. What do you mean?
Um, so the ” ’d ” suggests that something has been affected by a noun.
In this case, the statement “every disputant is partly right and partly wrong” is affected by generalization. In that it is, er, a false generalization.
What do you mean “the statement is affected by a generalisation”? What does it mean for something to be “affected by a generalisation”? What does it mean for a statement to be “affected”?
The claim is a general one. Are general claims always false? I highly doubt that. That said, this generalisation might be false, but it seems like establishing that would require more than just pointing out that the claim is general.
Right. So calling it a “false generalization” needed two words.
Anyhow: Where does the sun go at night? How big is the earth? Is it harmful to market cigarettes to teenagers? Is Fermat’s last theorem true? Can you square the circle? Will heathens burn in hell for all eternity?
Er. What? You can call it a false generalisation all you like; that isn't in itself enough to convince me it is false. (It may well be false; that's not what's at stake here.) You seem to be suggesting that merely calling it a generalisation is enough to impugn its status.
And in homage to your unconventional arguing style, here are some non sequiturs: How many angels can dance on the head of a pin? Did Thomas Aquinas prefer red wine or white wine? Was Stalin left-handed? What colour were Sherlock Holmes' eyes?
Suppose that I wanted to demonstrate conclusively that a generalization was false. I would have to provide one or more counterexamples. What sort of thing would be a counterexample to the claim “each party to all disputes that persist through long periods of time is partly right and partly wrong?” Well, it would have to be a dispute that persisted through long periods of time, but in which there was a party that was not partly right and partly wrong.
So in my above reply, I listed some disputes that persisted for long periods of time, but in which there was (or is) a party that was not partly right and partly wrong.
Ah I see now. Glad we cleared that up.
Still, I think there’s something to the idea that if there is a genuine debate about some claim that lasts a long time, then there might well be some truth on either side. So perhaps Russell was wrong to universally quantify over “debates” (as your counterexamples might show), but I think there is something to the claim.
- Ta-Nehisi Coates
(Edited to remove objectionable and politically biased commentary)
I think you would have done better quoting just the second sentence, which is indeed a sharp rationality quote and needs no context to be appreciated. To include the previous sentence is just inviting downvotes for politics-is-the-mindkiller reasons. (Both in the plain level that conservatives might confuse it for “rah liberals” quote, and in the meta level that some people dislike any mention of ordinary politics in LW regardless of sides.)
I originally had just the second sentence, but I thought that the “making yourself right” phrase was too ambiguous (it allowed for the interpretation “correcting yourself” as well as “justifying current beliefs despite evidence”). The first sentence provided context, but then excerpting the problematic political view felt too much like editing, especially considering the contentious allegation was already in the URL. My apologies.
Completely different topic, but is there a cheat-sheet for how to format comments for bold and italics? I would have liked to bring the focus (2nd sentence) of my quote to the forefront.
When you are writing a comment, there is a “Show help” button in the bottom right corner of the text box. It normally shows the codes for bold, italics, links, etc. Now it is not working for me for some reason.
Bold and italics are done by enclosing the text to be formatted with one asterisk or with two asterisks on each side, but I never remember without looking which uses one and which uses two. Let's see: one asterisk on each side, two on each side.
If you hit Show Help on the lower right of the comment box, it will have some brief primers. To answer your specific question, writing one asterisk at the beginning and end will make italics (no spaces). Two asterisks will make bold.
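For reference, a minimal illustration of the emphasis syntax described in the two replies above (standard Markdown, which is what the comment box here uses):

```
*one asterisk on each side*    renders as italics
**two asterisks on each side** renders as bold
```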
Writing the greater than sign
http://www.indiana.edu/~g105lab/images/gaia_chapter_13/vent_communities.htm
Is that even true? Are there no anaerobic pathways for deep sea-vent organisms?
They do use oxygen made by green plants. Had there been no plants, those vent communities would also die.
If there were no plants, the composition of those communities would certainly change, but whether they would also die is precisely what I am asking.
As they are now, they would die, just like any animal on land, for example. They need the oxygen, and the oxygen does not come from those vents but from the green plants in the sea and on the land. Some anaerobic bacteria would of course survive, but not those multicellular fish, crabs and so on near those vents. They are Sun-dependent, after all.
It is just untrue; they aren't, even if it is usually assumed they are. Wrong. Bias.
Is that a surprise for you?
If you admit that, then the quote as it stands is misleading or false:
These specific microbes may require oxygen, yes, but there are anaerobic microbes waiting in their stead.
Anaerobic microbes are here and there, not only near the vents. The entire ecology around the vent is NOT Sun-independent, contrary to what has always been suggested to us.
Fish, worms, crabs—everything multicellular needs green plant produced oxygen. That is the whole point of the quote.
-Henry David Thoreau
What does this have to do with rationality?
Aretha Franklin’s cover of “It Ain’t Necessarily So” by George and Ira Gershwin
Well, I liked it.
I’m amused at your downvote.
I’m not sure if that’s more amusing if Manfred downvoted me, or if someone is actually keeping tabs on this already hidden away thread to punish all who contribute.
Well, it wasn’t me—it’s more likely that someone just clicked on it in the Recent Comments.
I suppose someone might have written a script to do so automatically.
That’s an interesting take on this graph.
On the big questions, there’s wisdom in the search, foolishness in finding answers.
I think you may have misunderstood this website.
Because he didn’t cite the quote? ’Cuz the quote is obviously true the vast majority of the time. Maybe he should’ve put quotes around “answers”, but as sane readers we should do so in his stead.
Big questions do sometimes have no answer. The trouble comes because although this isn’t always true, some people would like to pretend that it is so that they can ignore all challenges to their pet idea. “Does the Sun revolve around the Earth” is a big question where the answer was there to be found, and yet it was resisted by those who used words like “big questions.”
It’s not that big questions often don’t have answers, it’s that most proffered answers are often wrong. So the majority of the time finding “answers” is foolishness, whereas continuing to search for an answer is wise. (If the questions were presumed not to have answers then searching for those answers would be somewhat odd.) Anyway I realize that’s not how you interpreted the quote. As to your interpretation...
Um… that is the weirdest form of argument I have seen in awhile.
Anyway, it’s not that people thought that the question was unanswerable, they just thought they already had the answer. Kinda unrelated. Your conclusion is almost certainly correct, but you need to rationalize it better. (ETA: I don’t think good rationalization is bad, by the way; I didn’t intend any negative connotations.)
Agreed,
Yeah, no. :P If there isn’t an answer, and you understand why there isn’t an answer, don’t keep searching for an answer.
As dlthomas pointed out, you seem to have replied to some stupid comment that is vaguely similar to mine except doesn’t actually exist. Are you trolling? Or are you just kinda meh about this whole ‘actually reading what the other person says’ thing? I know I am sometimes.
Why yes, now that you mention it: I do like purple. How did you know?
The map is not the territory!
True. But I’m out of gas.
No, no, just failing basic reading comprehension / replying to the OP instead of the post I actually hit the “reply” button to.
I think that’s what the parent said.
Edited to add: Or that’s not quite right—it’s that you’re speaking of a world the parent explicitly denied. The sentence before what you quoted was
Saying that what follows is wrong in the opposite case is immaterial.
My interpretation of that argument was that ‘people who used words like “big questions”’ refers to people who considered the question of whether or not the sun revolved to be a philosophical matter with moral implications, rather than a mundane true-or-false. If the truth of the statement “the sun revolves around the earth” is implied to mean that “God created our planet at the centre of the universe because he loves mankind”, then most people who believe in its truth would be reluctant to look for mundane, commonplace answers concerning actual gravity and solar system models and stuff.
And once there was a concrete answer to that question, for many people it ceased to be a "big question" with moral implications about human worth. I know plenty of people who are well educated in cosmology and say "well, duh" to the statement that "the earth revolves around the sun", but who still consider "is morality an innate quality of the universe or purely evolved by human brains?" to be a Big Question, with good versus bad answers instead of true versus false.
Your use of the word “purely” here confuses me; this isn’t an either-or question. Evolution happens due to selection effects, selection effects come from contingent facts about the environment but also less-contingent logical facts about types and equilibria of timeless games and many other things like that. Superrational game theory is an “innate” quality of the universe and seems to have a lot to do with our intuitions about morality. We don’t know if “morality” is a powerfully attractive telos or contingent result of primate evolution. In general moral philosophy is not obvious. If it was then my life would be a lot easier.
(ETA: And when it comes down to actual decision policies you have to do a lot of tricky renormalization anyway, so even if it was obvious that morality (the truly optimal-justified decision policy) was a powerful telos, it's not clear how much it would help us to know that fact. Yeah, maybe everything will turn out okay in the end, but maybe it will only do that if you act as if it won't. (Or maybe it only will if you act as if it will, as Borges and Voltaire talked about.))
Agreed that it’s more complicated than either/or. However, I was using it as an example of a “Big Question” that some people believe shouldn’t be investigated for fear of damaging moral consequences. To people who see it that way, I think it would be an either/or.
Well, looking too closely at a Schelling point is likely to destroy it, even if the Schelling point was serving a useful function.
That doesn’t sound true. I’d go as far as to say it is likely to strengthen it.
Frequently, looking at the Schelling point one will notice it is fundamentally arbitrary, or at the very least be tempted to move it “just a little” in one direction or another.
My model of the local universe differs and I don’t believe you. I expect more strengthening than weakening.
Here is an example of the phenomenon I’m talking about, see especially my comment here.
When I consider examples of Schelling points I think of scenarios directly analogous to archetypal examples. The times when you notice that you are playing a coordination game and need to guess what other people will guess that you will guess. When you notice this and start to ask “What is the schelling point here?” you become even more likely to adhere to a common, predictable solution than to follow your own independent whims.
If I’m driving along an isolated dirt road (like those I grew up on) and I’m feeling philosophical I may well notice that the side of the road that I’m driving on is fundamentally arbitrary. Given that a lot of these roads are narrow enough that you drive in the center of the road it becomes a decision of which way to swerve when encountering the occasional oncoming traffic. And if there aren’t any cops around to enforce a legal coordination one way is pretty much the same as the other. In fact I know those wacky Americans drive on the wrong side of the road all the time. But when I notice that the situation is arbitrary and start to think about the Schelling point it makes me think “he’s going to swerve left and if I swerve right I’m going to @#@% die”. Follow the Schelling points, cut this independent thinking nonsense!
That’s because you recognize that what you’re dealing with is in fact a Schelling point. If one doesn’t realize this fact, one will weaken the Schelling point.
Yes, I think this is where we had most of our initial disagreement.
Voltaire
(Will Sawin pointed out that this also works if you replace “God” with “computers”; I agree, since in the limit they mean the same thing.)
Insofar as the heavens manifest computers. Though I suppose we can treat that part as pure poetic frippery. Of course, if we do that, the quote also applies to high-speed cargo rail.
I’m not sure what Aquinas would make of the idea that one perfection of God is “high-speed cargo rail-ness”. Computers are a lot more like gods than trains are; hence Leibniz’s monadology, which is about both God and computer programs. A similarly compelling metaphysics involving trains instead would be kinda hard to pull off, I imagine.
I’m not claiming that trains are particularly like gods, I’m claiming that “If high speed cargo rail didn’t exist, it would be necessary to invent it” is also true.
Ah, that makes even more sense.
Ronald Reagan
This implies that anti-Communists are a strict subset of Communists.
One can understand someone without reading their work.
I read Marx, then understood Marx by reading a primer on Communism. I disintegrate into two socio-democrats and gamma radiation.
Only if you take the quote too literally.
I almost missed your pun there.
What pun?
Literally/literary.
Still don’t get it. I thought that was just a typo?
I assume that DSimon and possibly Eugine mean that the word “Communists” is included in the word “anti-anti-Communists”.
I just thought they meant that the statement is only true if you assume all information about communism is only in books. But Eugene corrected the spelling, so I guess it was just a false punsitive.
(Sorry.)
-- Marshall from How I Met Your Mother