Rationality quotes: June 2010
This is our monthly thread for collecting these little gems and pearls of wisdom, rationality-related quotes you’ve seen recently, or had stored in your quotesfile for ages, and which might be handy to link to in one of our discussions.
Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself.
Do not quote comments/posts on LW/OB.
No more than 5 quotes per person per monthly thread, please.
“I accidentally changed my mind.”
-- my four-year-old
Well, that hopefully led to a teaching moment...
It leads to a contemplative moment for me—I suspect accidentally changing one’s mind happens relatively often.
http://www.spring.org.uk/2007/12/our-secret-attitude-changes.php
Thank you. I know of a couple things I’ve changed my mind about, but I may be unusually inclined to keep track.
Maybe you can explain to me what the lesson is here?
It depends precisely on the context of what the kid meant. But I would think that high on the list would be making sure the kid understands that changing one’s mind based on evidence is a good thing. And discussing with the kid why they changed their mind: did they do so for a good reason based on facts and thinking, or was it purely emotional?
I will not teach my children that changing one’s mind based on emotion is bad. Particularly not before I establish whether their emotional or logical thinking is their strong point.
What do you mean by emotional thinking? And how do you determine whether someone has good emotional thinking?
I was running with the distinction from the context: “A good reason based on facts and thinking vs X”
I make the observation that many (most?) people can more effectively guide their lives by doing what ‘feels’ right than by facts and reason. In fact, humans come wired with mechanisms which allow feelings to override reason. The quote Roko made on this very page alludes to why.
You look at whether they screw up more—when they do what they think is the right thing to do, or when they go with what they feel is right. In all cases there will be times for each kind of thought, and the balance will depend on personality and aptitude for various thinking patterns.
What I will NEVER do is train my children to associate “facts and thinking” with ‘good’ as contrasted with ‘emotional’. It takes a HUGE amount of facts and thinking to equal the quality of thought that emotions represent, and quite often those that are best at giving priority to “good reason, facts and thinking” over emotions are not those who are the most successful. (Even though I’ll probably like them more. ;))
Oh, I see. Thanks.
-- Gregory Bateson, “Steps to an Ecology of Mind”
Great quote! (Talk about being smacked with your own hidden assumptions...)
Got a link to the surrounding text?
Care to explain to me what you got out of it? I think I might be missing the point of this quote.
When you interact with someone, you may think, I will do this, so that they will do that, or think such-and-such, or feel thus-and-so; but what is actually going on for them may bear no resemblance to the model of them that you have in your head. If your model is wrong at the meta-level—you are wrong about how people work—then you will either notice that you have difficulty dealing with people at all, or not notice that the problem is with you and get resentful at everyone else for not behaving as you expect them to.
Here, Mrs. B.F. Skinner imagines that she is reinforcing the behaviour that she desires, of eating spinach, by providing the reinforcer, ice-cream. Or is she really punishing the consumption of ice-cream by associating it with spinach? Or associating herself with an unpleasant situation? Or any number of other possibilities.
Sure thing. For me, it was the sudden realization that I had made assumptions from the very start of reading it, and that I had ranked certain outcomes far lower than the problem—taken in isolation—would justify.
When I read it, I immediately thought, “Okay, rewarding a kid for eating spinach, same ol’ same ol’ …”; then when I got to the end, I—very quickly—absorbed the insight that, in order for the process not to result in the child hating the mother, certain conditions have to hold, which are probably worthy of probing in depth.
I know all of this may sound obvious, but I really had an aha!/gotcha! moment on that one.
On Google Books (limited online availability, but search for “habitually”).
If you Google the title, you’ll find the full text on a Brazilian website; whether legally or not I don’t know.
See also http://lesswrong.com/r/discussion/lw/dxm/clarification_behaviourism_reinforcement/
Hate spinach, love ice cream, love mother. What’s so difficult?
For you, understanding what was asked. The question is not, “what will happen?” The question is, “What information do you need in order to know which outcome will happen?”
Can someone explain why the parent is upvoted? Is everyone just assuming that the Bateson quote is just a sarcastic, roundabout way of asking what will happen?
ETA: In case you weren’t aware, cousin_it is not joking with his comment.
How dare you dispel Deep Wisdom of Master Bateson?
I try not to downvote people when they are right.
Nor, apparently, when they’re not even wrong. cousin_it’s reply was non-responsive.
Thanks. I was surprised to be downvoted, but decided not to ask why.
I downvoted you because you’re either completely missing the point of the quote, or you’re unsuccessfully trying to be funny.
In case it’s not the latter: Yes, since you already know the answer, it’s easy to “infer” the result from the givens. But the question is, what additional information are you using that constrains your answer to that? That’s what you need to say to solve it, not just repeat back from the answer key.
Furthermore, it’s not at all clear that children get the result you claim.
I don’t quite understand your objection. “Love mother” was an unconditional answer, yes. Most people love their mothers, even though the mothers did try to “shape” them in childhood with rewards and punishments. But “hate spinach” and “love ice cream” were inferred from the information in the question. The kid dislikes spinach, or the mother wouldn’t need to reward him; but he does like ice cream, or the mother wouldn’t use it as a reward. And I haven’t heard of any cases where the mother succeeded in “shaping” the kid’s food preferences like this.
If I’m not allowed to use real-life common sense, it’s not clear how I would even understand the question, let alone solve it. Okay, what additional information do you think one should need? Why?
Are you serious? The problem is to specify which “common sense” reasoning leads you to which conclusion! Yes, now that you’ve explained one reason why one outcome holds (even though it doesn’t account for children who grow up resenting their mothers, and so isn’t even right on its own terms), you’ve given the kind of information the question is asking for.
Stating which outcome your common sense tells you would result—which is what you did—is non-responsive. And even now, you haven’t told us what conditions determine the 27 possible outcomes—just one reason why one outcome would result.
Black-box “common sense” reasoning is exactly how you stray from rationality. You should open the box, and see what’s inside.
8 possible outcomes, not 27. But I think I see your point. Let me ask some more questions in this vein:
A man jumps off a 100 foot tall bridge. What additional information do you need to determine if he’ll die?
I have just washed my cup. What additional information do you need to determine if my cup is clean now?
What additional information do you need to determine whether the Sun will rise tomorrow?
If all such questions are effective in making you open your eyes, question your assumptions and upvote away, well, then I guess we’ll have to agree to disagree about the nature of “rationality”.
Within what degree of confidence? He could have a parachute of some form, or a bungee cord, or there could be some form of trampoline to break the fall.
Moreover, you miss the point of the original quote. The question relies on standard assumptions about how humans learn and absorb values. Since humans are very complicated entities, understanding explicitly what assumptions we make about them can be helpful.
Thanks—point taken.
27 if you allow for “no effect”, which you should.
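The two counts being debated fall out of a simple combinatorial fact: two attitudes per item over three items (spinach, ice cream, mother) gives 2³ = 8 outcomes, and adding “no effect” as a third attitude gives 3³ = 27. A minimal sketch (the item and attitude labels are just illustrative):

```python
from itertools import product

items = ["spinach", "ice cream", "mother"]

# Two possible attitudes per item: 2**3 = 8 outcomes.
binary = list(product(["love", "hate"], repeat=len(items)))

# Allowing "no effect" as a third attitude: 3**3 = 27 outcomes.
ternary = list(product(["love", "hate", "no effect"], repeat=len(items)))

print(len(binary))   # 8
print(len(ternary))  # 27
```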
It’s true that you can construct similar questions in other domains.
But the questions you posed are different from that in the quote because it refers to a:
- more common situation with a
- more common inference that is
- more often poorly grounded and hinges on complex aspects of human sociality, which are
- more relevant to our everyday lives because of the
- more frequent occurrence of similar situations.
See also Richard’s further remarks.
The rationality issue involved in the quote is one of how you come to a conclusion, and I think it’s fair to say you might have missed some of the factors that come into play regarding manipulation of children, which Richard explains. There’s a difference between
a) “What does your gut tell you would happen?”, and
b) “What information should you use to justifiably reach a conclusion about what would happen?”
You were answering a), while the question was asking b).
Silas, you’re making strong arguments but mixing in emotion that makes it harder for your interlocutor to change their mind.
Understood; I’ve edited the GP comment to be more diplomatic and improve the formatting. Let me know what you think.
However, regarding the other comment, my question “Are you serious?” is an honest question. I don’t see how cousin_it could misinterpret the question as “What is X?” when it’s clearly asking “How do you know what X is?” So I don’t see why his answer of “X is …” got modded up.
I think I’d eventually come to hate ice cream in that kid’s situation. A treat is no longer a treat when it’s systematically used to manipulate you into eating something you hate.
I think it depends on how much you hate the spinach compared to how much you love the ice cream. People’s memories of an experience are strongly affected by the last bit, so the love of the ice cream may do quite a bit to overwrite the memory of hating spinach. Almost certainly not enough to affect feelings about spinach, but probably enough to not interfere with love of mother.
— Leo Tolstoy, 1896 (excerpt from “What Is Art?”)
Not only to recognize my mistakes, but to actually speak out loud about them frequently, has given me great strength in doing so on questions that really matter. If you have social status, it is worth sparing some change in getting used to not only being wrong, but being socially recognized as wrong by your peers…
Emperor Sigismund, when corrected on his Latin, famously replied:
John Stuart Mill, On Liberty
He seems to have understood that 0 and 1 are not probabilities.
You know, that post is somewhat annoyingly titled. 0 and 1 are probabilities by the common definition of “probability”, but defining “probability” such that 0 and 1 are not probabilities results in a system which is also consistent and useful.
“It’s wonderful how much we suck compared to us ten years from now!”
-- Michael Blume
A troubling possible implication of this is that if the impact we can expect to have on existential risks diminishes over time, then as the competence of our plans and actions increases, the expected importance of those choices tends to decrease.
The section where I’ve added an ellipsis is a section where he discusses Newton in more detail. That entire part of the text is worth reading. Priestley wrote the book before he did his work on the composition of air. The book is, as far as I am aware, the first attempt at an actual history of science. (I’m meaning to read the whole thing at some point, but the occasionally archaic grammar makes for slow reading.)
Euler is one of the few mathematicians who provide an exception to this rule. To quote Polya (Mathematics and Plausible Reasoning):
(the quoted passage in the text is apparently from Condorcet, although I don’t know the initial source)
Polya is, of course, one of the few other mathematicians who break this mould, explicitly writing books about the process of discovery.
You quoted:
But I just read the original and it is written:
Now it makes more sense to me, the ‘a’ makes all the difference.
Mistranscription by me. Fixed now. Thanks.
It’s a coincidence that I was thinking along these lines recently. Most science is just the result of tiny footsteps put one after the other, but when you see the final result it is impressive. Most teaching books are at fault because they only portray the end result, whereas the painstaking but simple steps that often lead there in a natural way are omitted.
It is an issue that has been discussed here before. Eliezer generally uses Einstein as the example rather than Newton. See for example Einstein’s Superpowers and My Childhood Role Model.
The book is available for free on Google Books; can you tell us the page number of the quotation, please?
575 and 576 in the edition on Google Books.
(Raymond Smullyan)
I have found it in an OB comment by Zubon, but it was never posted as a rationality quote.
Raymond Smullyan is a gold mine for different cached thoughts. Maybe I should start finding quotes on random pages for these threads.
-- W. C. Fields
Another Twain quote.
On a similar theme:
TV Tropes
I’ve asked this question before, but where the hell does the high-quality rationality on TV Tropes come from?
Perhaps it’s due to the fact that TV Tropes’ mission is essentially to perform inference on the entire body of human fiction, and create generalised models (tropes or trope complexes) from that data. In many ways, it’s science applied to things that are made up!
This was a nice exercise in generating a host of just-so stories.
Yes, but many of these are testable. Thus for example, Oscar’s hypothesis that “Things are only tropes if they happen more often in fiction than in reality, so to detect them you need an accurate map” is testable. You could take a random sample of people who edit TVtropes and test their map accuracy in completely separate areas (say, things that can often be estimated with a Fermi calculation) and compare that to a general sample of people. Oscar’s hypothesis suggests that the Tropers will do better.
RobinZ’s point is difficult to test, but presumably if one examined in detail what pages have historically stuck around and which have been merged or deleted, one could get data that would test it.
I would also consider my thesis undermined were it demonstrated that the rate of rationally-insightful contributions to TV Tropes was significantly higher than for other notable Wikis (e.g. Wikiversity).
How would you measure the rate of rationally-insightful contributions? I’m also not sure which wikis would be useful to test this on. Some wikis (such as, say, the various Wikipedias) have prohibitions on original research. Other wikis have narrow goals that will minimize the number of rational insights. Thus, I’d expect a very low insight rate on, say, Wikispecies, since that is devoted to cataloging existing biological knowledge.
Good points. What I was attempting to measure was the relative measure* of rationalists on TV Tropes versus other nerd communities. The part of my thesis being tested is that no notable difference need be hypothesized to explain EY’s perception of unusual rationality in the wiki.
(Was I mistaken to believe that EY thought TV Tropes was unusually rational compared to other nerdy Internet communities, as opposed to compared to other Internet communities, full stop? I agree that TV Tropes is nerdier than most of the Internet.)
* i.e. fraction of population weighted by intensity of participation.
I noticed that—I believe it is a classic case of (warning: TV Tropes) the Rhetorical Question Blunder.
(In my defense, I tried to make mine testable.)
People who can see through the conventions of entertainment and who enjoy posting about those conventions for free are likely to be much more awake than usual.
Here’s a variant on that. In fiction, everything is calculated to manipulate you or fulfill some simple recognizable pattern.
Troping is training in figuring out how the manipulation works and what the patterns are; this is a skill that carries over into everything else. (Doesn’t matter if it’s an author trying to manipulate you—or a bad argument.)
The quality on TVTropes comes from the same place as the quality on Wikipedia: obsessive nerds who want things to be right. Like Wikipedia, TVTropes has successfully set up a filter such that the good stuff tends to stick more than the bad.
Of course, it has the same problem with people wanting to add garbage as Wikipedia, as Desrtopa points out. But the overall slight bias to good seems sufficient to grow quite remarkably high quality.
I think it’s that the website is dedicated to identifying common structures that make stories entertaining, with an emphasis that they are fictional structures. It’s the very use of the word “tropes” in the title. Thus the user base is a bunch of people who enjoy a lot of bad (and usually absurdly bad) TV, yet also have fun analyzing what psychological manipulation they were supposed to have been subjected to.
Also, I know a few TVTropes addicts who are regular LW readers (from a forum on which Dresden Codak left a large impact), and wouldn’t be surprised if they have contributed.
I am one such, but I’m not aware of any other Koala Wallopers who’re regular editors of tvtropes.
Things are only tropes if they happen more often in fiction than in reality, so to detect them you need an accurate map.
ETA: And everyone is already in hole-picking mood. So any cognitive biases showing up will be jumped on.
ETA2: What does ETA stand for anyway?
ETA = Edited to add (not “estimated time of arrival”, the more common usage)
I sometimes use ETC, edited to correct, but that hasn’t caught on.
ETA: And here’s the LessWrong acronym list—we need to link it from the front page.
Where I live, ETC stands for Electronic Toll Collection and is posted at the entry ramp of toll-roads equipped appropriately.
What’s wrong with just using “Edit: additional note goes here”
That’s what I use, come to think of it.
Nothing’s wrong with that, but ETA is shorter and faster to type.
Not necessarily; perhaps one is accustomed to typing words that start with at most one capital letter.
CEV needs to be added. I’m not doing it myself because I’m not sure what would be a good description of it to link to.
Rational Tropers. QED.
Was that a deliberate attempt at a mysterious answer? If so, I am amused.
It looked like a joke along the lines of:
Q (on discovering a pile of eggs in a strange place): Where did these eggs come from?
A: Chickens.
Or “Where do these stairs go?” … “They go up.”
The way TV Tropes is set up, technologically and culturally, it seems relatively easy for a rational person to contribute an insight that persists—is there some systemic pattern that this effect cannot account for?
I think that there may be something of a sampling bias going on here. The sort of structure analysis of storytelling we do attracts some particularly intelligent and rational people, but also quite a lot who are, charitably speaking, not. It’s a constant struggle to keep the prominent pages full of the stuff that’s actually worth reading, and to shove the rest under a sofa when we can’t get rid of it entirely.
That article is full of goodies.
-- Mr. Spock is Not Logical
The last part deserves extra emphasis:
See also here.
Similarly,
-- John Stuart Mill
Unfortunately Mill gets wordy in the middle, instead of just saying “… various secondary ends. Attempting to focus directly on happiness generally terminates in...”
-- jman3030
Can someone get Yvain to photoshop up a “Fallacyzilla!”
-Thomas Kuhn, The Structure of Scientific Revolutions
The source is Jonathan Haidt, right?
Whoever it was, it was clearly someone who has never proved a theorem.
Ok, in context he’s talking about moral beliefs, but still.
We could say (with some degree of insight) that proving a theorem is something that the “press secretary” sometimes does as a hobby in its off hours.
“Sanity is conforming your thoughts to reality. Conforming reality to your thoughts is creativity.”
-- Unknown
I would prefer to say that conforming your thoughts to reality is science, and conforming reality to your thoughts is engineering...
Putting those together, science is sanity and engineering is creativity. Or to merge this with an aphorism of George Bernard Shaw, science sees things and asks “why?”, and engineering dreams things that never were and asks “why not!”
-- G.B.S.
Meanwhile I would want to employ a creative scientist but a sane engineer. Perhaps it is a matter of balance...
Have you noticed how many G.B.S. quotes are in the top quotes list?
clippy.paperclips: how many humans, as a fraction of total humans, have a belief about whether or not they are a human, and believe they are not a human?
me: this is a subculture of humans that believes they are really animals: http://en.wikipedia.org/wiki/Furry_fandom
clippy.paperclips: so those are the normal ones? and it’s like a war against the irrational majority?
me: bad example.
-- Dylan Thomas
THE SCANSION IS QUITE PLEASING.
Not strictly a rationality quote, but screw it, it’s beautiful anyway:
-- natehoy
I disagree with this. Although it’s difficult to guess how people thought back then, when they didn’t have all these marvels of today, it’s still really easy to appreciate the things themselves. Things related to cultural evolution, like basic human rights, art, modern governments, those are a bit more difficult to see, but still, with a little thought, it’s easy to see much of the sheer awesomeness they have. It’s also easy to see how things could easily be so much worse, and usually it’s easy to check from history that yeah, things have been very much worse.
Though much of this quote is about how awesome things are in our world, and I totally agree with that.
I guess that might depend on the person. For me, even though I can appreciate things like television and electricity on an intellectual level, I can’t really appreciate them on an emotional level the way I can appreciate things like ubiquitous mobile phones or Wikipedia that weren’t there when I was a kid.
“You rationalize, Keeton. You defend. You reject unpalatable truths, and if you can’t reject them outright you trivialize them. Incremental evidence is never enough for you. You hear rumors of Holocaust; you dismiss them. You see evidence of genocide; you insist it can’t be so bad. Temperatures rise, glaciers melt—species die—and you blame sunspots and volcanoes. Everyone is like this, but you most of all. You and your Chinese Room. You turn incomprehension into mathematics, you reject the truth without even knowing what it is.”
--Jukka Sarasti, rationalist vampire in Peter Watts’s Blindsight. Great book on neuroscience and map != territory.
Um, wasn’t he more of a p-zombie who just happened to be rational?
(In that novel, vampires are a near-human species who lack consciousness—so all the vampires are a bit like p-zombies, except they don’t claim to be conscious.)
I’m not entirely sure what your criticism is. I’ll take it as meaning ‘isn’t it just an arbitrary accident that the vampires happen to be more rational than humans, and not an intrinsic part of those characters?’
No. It isn’t. If you remember, one of the running suggestions in Blindsight is that consciousness is a useless spandrel that sucks up tons of brainpower, and which can/will be discarded with much benefit. The vampires may be rationally superior to humans because they are p-zombies, and they evolved that way in order to effectively predict human actions and hunt them. The arbitrary accident was the cross glitch—otherwise the vampires would have won rather than died out. If the vampires could as well have been less-rational-than-humans p-zombies, that would undo that major theme.
I actually meant that “rationalist” is a label that doesn’t make sense when applied to an entity that’s already rational, but I’ll admit my phrasing was confusing… probably because my attention was mainly focused on trying to make a joke about p-zombie vampires. ;-)
Er, what? How exactly do you tell the difference between a p-zombie and a being with conscious thought? I thought the whole freaking point is that there is no way, so the story can’t hinge on it, right?
No, the whole point is if you have two entities, one of whom is a zombie and one of whom is conscious, there must be some physical difference in their brains. (‘p-zombie’ normally is used in the context of the (impossible) thought experiment where there is no physical difference in the two brains, but only one is conscious.)
In Blindsight, the vampires’ brains have a very different architecture than ours, and IIRC they explicitly state they do not have consciousness.
Please don’t misunderstand—I agree with all of that! I meant that the whole point of using the term “p-zombie” is to specify a being with the (hypothetical) properties that it looks just like a human (or being that is normally accepted as conscious), in all physically discernable ways, but (somehow) lacks consciousness. So I was confused as to how it could affect the storyline for some being to be specified as a p-zombie, since you wouldn’t know the difference.
I agree that such a being can’t exist, for the standard reasons.
If the vampires actually have different brain architectures, then they shouldn’t be called p-zombies, because they don’t have the form of something normally conscious, like a human. It would make as much sense as saying that a rock is a p-zombie.
You’re right that the term is being used incorrectly (or at least very loosely). However, I think it makes slightly more sense than calling a rock a p-zombie, since the vampires in Blindsight do behave like humans and have normal conversations like humans: that is, they would pass the Turing test. Entities like this are sometimes called “behavioral zombies” (as opposed to “physical zombies”), and it’s not clear whether they are possible, though Eliezer seems to think so.
gwern is using p-zombie slightly incorrectly. In this case, these are entities that act more or less like humans but functionally state their own lack of conscious awareness.
Yes; in my defense, pjeby started it!
I lack conscious awareness.
There, do you regard me as a p-zombie now?
“More or less” requires unpacking approximately equal in length to the novel, but the non-sentience of the vampires is weakly implied, (spoiler) juvyr gur aba-fragvrapr bs gur nyvraf gurl zrrg vf rkcyvpvg naq abg ng nyy zrgnculfvpny.
I thought it was more implied by the ending, myself. (Does Blindsight really need spoilers ROT13ing? I mean, the book is right there for anyone to read.)
The fact of information being available does not make it known. Billions of people have never read The Woman in White by Wilkie Collins, despite it being freely available in most places around the world, for example. The use of spoilers is not to protect the copyright of the writers, but to protect the surprise of the readers when they discover what has been written.
Nearly everything else that people do not want spoilers for is right there for anyone to consume. I do not think that is the point...
RobinZ seconded… I may go read both these stories due to this thread and I’d prefer not to see spoilers.
By the way: if you like The Woman in White, try also The Moonstone. Those two are Wilkie Collins’ famous stories.
-- Stephen Jay Gould
-- Democritus
The quote is good but I can’t help but be bothered by the source, and wonder if rationality is really on display here.
Democritus may have had an atomic theory, but his reasons for having it were no better than those for the “earth, wind, fire and water” theory; i.e., wild conjecture.
That’s not true. He had perfectly good reasons for atomism in his context.
The ontological arguments of Parmenides (and as exposited by Melissus) lead to extremely unpalatable, if not outright contradictory, conclusions, such as there being no time or change or different entities. The arguments seem valid, and most of their premises are reasonable, but one of his most important and questionable premises is that void cannot exist.
Reject that premise and you are left with matter and void. How are matter and void distributed? Well, either matter can be indefinitely chopped up (continuous) or it must halt and be discrete at some point. The Pluralists like Anaxagoras take the former approach, but continuousness leads to its own issues with regard to change.* So to avoid issues with infinity, you must have discrete matter with limits on size and division - _atoms_.
So, Democritus and Leucippus are led to Atomism as the one safe path through a thicket of paradoxes and problems. Describing it as wild conjecture is deeply unfair, and, I hope, ignorant.
* One argument, if I remember it from Sextus Empiricus’s Against the Physicists correctly, is that if matter really is infinitely divisible, then you should be able to divide it again and again, with void composing ever more of the original mass you started with; if you do division infinitely, then you must end up with nothing at all! That is a problem. Cantor dust would not have been acceptable to the ancient Greeks.
You make a good case. I repudiate my previous statement.
Also, Democritus observed Brownian motion, and realised from that the atomic nature of gas.
Smart guy.
Even if that argument is valid, it doesn’t seem to rule out matter being “infinitely divisible” in the sense that it could be divided arbitrarily many times—which is what a rejection of atomism really means—just the stronger sense of being able to divide something infinitely. As an argument against atomism it seems to rely on an equivocation between these two, unless that case is somehow ruled out.
I don’t really follow. How is arbitrarily divisible not the same as infinitely divisible? If there is some limit to the arbitrariness, then it’s just the atomism already under discussion. If there is no limit, then it seems like infinity to me.
(Most of the arguments against continuity use this sort of induction: continuity causes problems at level n, and therefore at level n+1; hence it causes problems at all levels.)
If atomism is false, then for any matter and any n, the matter can be divided into n parts (whatever that may mean); this is a finite division, so if we started with a nonzero amount of “stuff” (whatever that may mean), the parts we divided it into should also comprise nonzero amounts of “stuff” that total to the original amount of “stuff”. No problem there.
The apparent problem (which I would say isn’t really one, but that’s not the point, because there’s no way the Greeks could have been expected to come up with why not) appears if you allow the matter to be divided into infinitely many parts, and then argue: since there was only finitely much “stuff” originally, and we’ve divided it into infinitely many parts, each part must have zero “stuff”; but any sum of zeroes is zero.
Weak induction will reach all natural numbers, but it won’t reach infinity.
Would a conserved Lebesgue measure have been acceptable? I don’t see why infinitely dividing matter has to reduce the amount of anything.
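For what it’s worth, the parent’s suggestion can be made concrete with modern notation (anachronistically, of course): dividing a unit of matter into infinitely many parts need not leave each part with zero “stuff”, so long as the parts shrink. A sketch:

```latex
% Cut the interval [0,1) into countably many disjoint pieces,
% each half the size of the one before:
[0,1) = \left[0,\tfrac{1}{2}\right) \cup \left[\tfrac{1}{2},\tfrac{3}{4}\right)
        \cup \left[\tfrac{3}{4},\tfrac{7}{8}\right) \cup \cdots
% The total "stuff" is conserved:
\sum_{n=1}^{\infty} \frac{1}{2^n} = 1
% The Greek argument implicitly assumes every part of an infinite
% division must be equally small, hence of size zero; with unequal
% parts (or with measure rather than counting points), the paradox
% dissolves.
```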
Not unless you enjoy anachronisms. The Greeks probably wouldn’t have liked that either; the basic point stands: if at any point one can divide it to produce a void and 2 smaller masses, then where is the ‘real’ mass? Any point you pick, I can turn into void. If I can do it for any point, I can do it for every point, and if every point is void...
Unless you postulate a knife with really weird properties, cutting a continuous object in half isn’t turning matter into void. It’s moving some of the matter without changing its density (hence my offer of conservation of volume). You can do that to every point currently occupied by an object, but only by reserving an equal amount of space that’s currently void and displacing all of the matter to there.
It’s more of a conceptual knife—pointing out that by the definition of continuity, segment X is made of void and 2 smaller segments, Y and Z; but Y and Z are themselves made of void (and 2 smaller segments), and so on.
(Any conceptual knife just illustrates how motion was supposed to be possible in a continuous framework: the matter in the knife fits into the voids of what it is moving into.)
Oh, so the “made of void” thing comes from the void that the knife fits into. That wasn’t at all clear—it seemed like we were just talking about separating things into parts, not about the physical process of cutting.
From The Dharma Talks of Zen Master Bankei, translated by Norman Waddell. Quoted by Torkel Franzén as a perfect description of Usenet flamewars.
-- Cosma Shalizi on Graphical Models
Obligatory xkcd reference
That said… treating correlations as evidence of causation isn’t unreasonable, as long as I remember that the world is full of evidence of falsehoods as well as truths, and calibrate accordingly.
[comment deleted]
Far too true. :)
What I cannot build, I do not understand.
— Errett Bishop
Bad advice.
I do not want to understand all untrue statements. Not only is there an opportunity cost associated with thinking or learning, but understanding false statements can bias your brain in undesirable directions.
There are times (say, you know an expert you trust on the subject) when you can establish with acceptable confidence that a statement is not true. In such cases it is often better to not bother giving the false statement another thought.
-- Thomas Cathcart & Daniel Klein, Plato and a Platypus Walk into a Bar… : Understanding Philosophy Through Jokes
Looked it up in Google Books and found this gem as a chapter lead-in:
“Without logic, reason is useless. With it, you can win arguments and alienate multitudes.”
I’ll have to get a copy sometime.
It’s very Philosophy 101; you can get more in-depth info online. But it does provide an entry into a variety of topics, and some of the jokes are real zingers.
-- Publius Syrus
-- old Chinese saying
-- Eric Pepke
“He who cannot draw on three thousand years is living from hand to mouth.” —Goethe
“We live on an island surrounded by a sea of ignorance. As our island of knowledge grows, so does the shore of our ignorance”—John Archibald Wheeler
That is precisely the quote I was vaguely alluding to here—thanks ever so much for pinning it down.
A more cheering version: “The larger the island of knowledge, the longer the shoreline of wonder.” Ralph W. Sockman
Montesquieu, “The Spirit of the Laws”, book XXV, chapter XIII. (Link to the book, Original French)
-- Queen Juliana
-- Clark Glymour, What Went Wrong: Reflections on Science by Observation
Longer version:
(I really like the short one for “small, temporarily insane people”, but the one above will tickle many a LW reader’s funny bone.)
Via CRS.
Gawande on information overload in medicine
--James P. Carse, _Finite and Infinite Games_
“Every conviction is a prison” -- Nietzsche
“All generalizations are false, including this one.”
-Mark Twain
“The Master said, Yu, shall I tell you what knowledge is? When you know a thing, to know that you know it, and when you do not know a thing, to recognize that you do not know it. That is knowledge.”
Analects of Confucius (Waley’s translation.)
(This quotation is an epigraph to chapter 1 of Harold Jeffreys’ Scientific Inference, 1957, Cambridge University Press.)
Found here
- Beta Ray Bill
-- Alonzo Fyfe
I find this rather gnomic. Is he admonishing us to only say ‘ought’ in reference to existing parts of reality? Or simply classifying ought as a nonsensical notion?
Some more context, from the link:
This seems… confused. The is-ought distinction is the distinction between preference and fact.
“Preferences” are also facts about minds.
Not directly: there is no fixed “preference mapping” from minds to preferences that works in general. We can only hope for one that works for humans, constructed for extracting the preferences of humans, because it won’t need to work for any almost-humans or not-humans-at-all. I look at a mind and see that its preference is X; you look at the same mind and say it’s Y. There is no factual disagreement; the sense of “preference” was different, and if it had been the same, the purpose would have been lost.
Well, yes; it’s not straightforward to go from brains to preferences. But for any particular definition of preference, a given brain’s “preference” is just a fact about that brain. If this is true, it’s important to understanding morality/ethics/volition.
Hello! You seem to know your way around already, but it doesn’t hurt to introduce yourself on the Welcome page...
Wow. If he keeps playing around with words like that it should only take him two more paragraphs to ‘prove’ the existence of God.
Really?
I interpret him to be saying something fairly non-dualistic—namely, that morality is not an ontologically basic thing separate from physics.
He also may be saying that moral claims reduce to fact claims in some sense, which is almost true (you need to throw some values in as well).
Are you coming at this from the perspective of a moral nihilist?
I did not like the particular way he was trying to make morality relate to physics. I thought it asserted a confused relationship between ‘is’ and ‘ought’.
I think that was a point that he was at least trying to make and it is something I agree with.
No. That’s for people who realise that God doesn’t tell them what morality is and get all emo about it. I more take a ‘subjectively objective’ position (probably similar to what you expressed in the previous paragraph).
Succinctly stated. I love it.
-- Nietzsche
del
-- K-PAX
(I do not present this as an endorsement of the Big Bounce hypothesis.)
How could you distinguish a repeating process consisting of the entire universe, from that process happening only once?
If it were truly repeating, you couldn’t. Unless you were a KPAXian and the screenwriters wrote it to be so.
The idea of the eternal recurrence didn’t originate with that movie.
I know. What’s your point?
I was presenting the quote along the lines of UDT, and this.
Fair enough. It just bugs me on a status level to have an idea that was well-stated by famous philosophers quoted from K-PAX instead. I realize I’m being irrational here.
Gerry Rafferty offers an alternative perspective.
-- Piet Hein
-- Jonathan Frakes, as William T. Riker
A fine intention. But until we make the technology, we are still, after all, only mortal.
“Had we but world enough and time
But we don’t
So let’s get on with it.”
-- Andrew Marvell, “To His Coy Mistress” (abridged)
Your original quote asserts a definite fate, not a fate which would occur if some particular technology were to remain uninvented.
-- H. P. Lovecraft
Piet Hein is definitely dead.
--author William Saroyan, letter written to his survivors
Holy shit, this the first time ever that I realized the relationship between this Lovecraft quote and classic OB/LW topics. Scary.
You mean you missed it back in February? :)
I love Piet Hein :)
or
Arthur Conan Doyle
I’m embarrassed to bring this up again, because I seem to quote steven0461 too often. But, in something close to his words: “When you have eliminated the impossible, whatever remains is likely more improbable than an error in one of your impossibility proofs.”
...Or you’ve just missed something. If all you’re left with is the improbable, you notice that you are confused. I’ve always thought that quote was off.
Then again, Sherlock never did miss anything.
I also just noticed a Sherlock quote with exactly this meaning:
Sherlock’s a more rounded rationalist than he’s given credit for.
To the contrary, he was roundly defeated on at least one occasion.
I am sorry gentlemen, but this quote of Holmes is the very essence of rationalism as I see it.
Douglas Adams
There is no evidence that is so strong that it will justify a statement no matter how improbable you initially considered it. Thus, as Oscar points out, this quote is off.
In Bayes/Pearl terminology, knowledge of an effect destroys the causes’ independence (d-connects them), and ruling out a cause shifts probability onto the remaining causes.
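A minimal numeric sketch of that point, with made-up illustrative numbers: two independent causes, and an effect modeled (for simplicity) as a deterministic OR. Conditioning on the effect raises the probability of each cause, and ruling one cause out shifts the remaining probability onto the other.

```python
from itertools import product

# Hypothetical priors for two independent causes A and B.
p_a, p_b = 0.1, 0.1

# Joint distribution over (a, b, e), where the effect e = a OR b.
joint = {}
for a, b in product([0, 1], repeat=2):
    p = (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b)
    joint[(a, b, a | b)] = p

def prob(pred):
    """Total probability of the outcomes satisfying pred(a, b, e)."""
    return sum(p for k, p in joint.items() if pred(*k))

# Prior probability of cause A.
prior_a = prob(lambda a, b, e: a == 1)

# Observing the effect d-connects the causes and raises P(A).
p_a_given_e = (prob(lambda a, b, e: a == 1 and e == 1)
               / prob(lambda a, b, e: e == 1))

# Ruling out cause B shifts all the probability onto A.
p_a_given_e_not_b = (prob(lambda a, b, e: a == 1 and e == 1 and b == 0)
                     / prob(lambda a, b, e: e == 1 and b == 0))

print(round(prior_a, 4), round(p_a_given_e, 4), round(p_a_given_e_not_b, 4))
```

With a deterministic OR, eliminating the only other cause makes the remaining one certain, which is exactly the Holmes maxim; with noisy causes the shift is there but less than total.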
How does a Bayesian rule out a cause?
As a rationality quote, ”… must contain the truth” would have been better.
“All we have to decide is what to do with the time that is given to us.”—Gandalf, The Lord of the Rings: The Fellowship of the Ring
(This is not necessarily a rationalist quote, but yet, it kinda is :))
-- Joseph Chilton Pearce
Is this true?
It seems to me man’s mind is a mirror of a universe (in the sense of modelling it), full stop.
I’m not sure this is a very rationalist quote. In particular, many judgments people make and many biases come into play at a non-conscious level. We generally need to make a conscious effort to correct for those biases.
Yes and no. You are right about a lot of biases originating from the unconscious. But the other way is also possible: the smart human who does something stupid because he has a good theory of how it should be done. Or choosing a picture for some salient verbalizable reason, instead of just taking the one you like the most without necessarily being able to explain why, etc.
There is more on this topic in “The rational unconscious: Conscious versus unconscious thought in complex consumer choice” by Ap Dijksterhuis, and in Jonah Lehrer’s _How We Decide_.
-- JoshuaZ
-- Rationality quotes: June 2010.
“Sometimes to feel like a man you have to dress like a woman.”
Uh, what does this have to do with rationality at all?
It would certainly be more interesting if there really was a connection to rationality, but I suspect this is just a random drive-by.
Counter-signalling?