Rationality Quotes from people associated with LessWrong
The other rationality quotes thread operates under the rule:
Do not quote from Less Wrong itself, Overcoming Bias, or HPMoR.
Lately it seems that every MIRI or CFAR employee is also exempt from being quoted.
As interesting quotes still appear on LessWrong, Overcoming Bias, in HPMoR, and from MIRI/CFAR employees in general, I think it makes sense to open this thread to provide a place for those quotes.
- Rationality Quotes—Fall 2024 by 10 Oct 2024 18:37 UTC; 79 points
- Rationality Quotes June 2014 by 1 Jun 2014 20:32 UTC; 16 points
- Rationality Quotes Thread May 2015 by 1 May 2015 14:31 UTC; 15 points
- Rationality Quotes April 2014 by 7 Apr 2014 17:25 UTC; 15 points
- Rationality Quotes August 2013 by 2 Aug 2013 20:59 UTC; 12 points
- Rationality Quotes November 2014 by 7 Nov 2014 19:07 UTC; 12 points
- Rationality Quotes October 2013 by 5 Oct 2013 21:02 UTC; 12 points
- Rationality Quotes December 2013 by 17 Dec 2013 20:43 UTC; 12 points
- Rationality Quotes February 2014 by 2 Feb 2014 13:35 UTC; 11 points
- Rationality Quotes Thread April 2015 by 1 Apr 2015 13:35 UTC; 11 points
- Rationality Quotes Thread November 2015 by 2 Nov 2015 12:30 UTC; 11 points
- Rationality Quotes January 2014 by 4 Jan 2014 19:39 UTC; 11 points
- Rationality Quotes Thread March 2016 by 5 Mar 2016 18:44 UTC; 11 points
- Rationality Quotes May 2014 by 1 May 2014 9:45 UTC; 10 points
- Rationality Quotes December 2014 by 3 Dec 2014 22:33 UTC; 10 points
- Rationality Quotes August 2014 by 4 Aug 2014 3:12 UTC; 10 points
- Rationality Quotes Thread February 2016 by 2 Feb 2016 18:17 UTC; 10 points
- Rationality Quotes July 2014 by 6 Jul 2014 6:51 UTC; 10 points
- Rationality Quotes May 2016 by 6 May 2016 15:15 UTC; 10 points
- Rationality Quotes November 2013 by 2 Nov 2013 20:35 UTC; 10 points
- Rationality Quotes Thread June 2015 by 31 May 2015 2:12 UTC; 10 points
- Rationality Quotes Thread January 2016 by 1 Jan 2016 16:00 UTC; 9 points
- Rationality Quotes September 2014 by 3 Sep 2014 21:36 UTC; 9 points
- Rationality Quotes Thread August 2015 by 3 Aug 2015 9:50 UTC; 9 points
- Rationality Quotes March 2014 by 1 Mar 2014 15:34 UTC; 9 points
- Rationality Quotes Thread March 2015 by 2 Mar 2015 23:38 UTC; 9 points
- Rationality Quotes Thread July 2015 by 1 Jul 2015 11:04 UTC; 8 points
- Rationality Quotes Thread December 2015 by 2 Dec 2015 11:28 UTC; 8 points
- Rationality Quotes January—March 2017 by 2 Jan 2017 10:48 UTC; 7 points
- Rationality Quotes October 2014 by 1 Oct 2014 23:02 UTC; 7 points
- Rationality Quotes Thread September 2015 by 2 Sep 2015 9:25 UTC; 7 points
- Rationality Quotes September 2013 by 4 Sep 2013 5:02 UTC; 7 points
- Rationality Quotes January 2015 by 1 Jan 2015 2:23 UTC; 7 points
- Rationality Quotes Thread February 2015 by 1 Feb 2015 15:53 UTC; 7 points
- Rationality Quotes September–December 2016 by 2 Sep 2016 6:44 UTC; 6 points
- Rationality Quotes June 2016 by 3 Jun 2016 7:51 UTC; 6 points
- Rationality Quotes Thread October 2015 by 3 Oct 2015 13:23 UTC; 6 points
- Rationality Quotes August 2016 by 1 Aug 2016 9:32 UTC; 4 points
- Rationality Quotes July 2016 by 1 Jul 2016 9:02 UTC; 4 points
- Rationality Quotes April 2016 by 6 Apr 2016 7:01 UTC; 3 points
- Rationality Quotes April—June 2017 by 1 Apr 2017 9:06 UTC; 3 points
- Comment on Rationality Quotes Thread August 2015, 1 Sep 2015 18:12 UTC; 2 points
- Rationality Quotes May 2014 by 8 May 2014 15:43 UTC; -23 points
-- Nominull3 here, a nearly six-year-old quote
“Goedel’s Law: as the length of any philosophical discussion increases, the probability of someone incorrectly quoting Goedel’s Incompleteness Theorem approaches 1”
--nshepperd on #lesswrong
There’s a theorem which states that you can never truly prove that.
The probability that someone will say bullshit about quantum mechanics approaches 1 even faster.
At least, the possible worlds in which they don’t start collapsing… Or something...
I love that ‘bullshit’ is now an academic term.
That doesn’t say much; perhaps it approaches 1 as 1 − 1/(1+1/2+1/3...+1/n)?
I like your example: it implies that the longer the discussion goes, the less likely it is that somebody misquotes G.I.T. in any given statement (or per unit time, etc.). Kinda the opposite of what the intent of the original quote seems to be.
Yeah, but it’s clear what he’s trying to convey: for any event that has some (fixed) epsilon > 0 probability of happening, it’s gonna happen eventually if you give it enough chances. That trivially includes the mentioning of Gödel’s incompleteness theorems.
However, it’s also clear what the intent of the original quote was. The pedantry in this case is fair game, since the quote, in an attempt to sound sharp and snappy and relevant, actually obscures what it’s trying to say: that Gödel is brought up way too often in philosophical discussions.
Edit: Removed link, wrong reference.
This is not true (and you also misapply the Law of Large Numbers here). For example: in a series (one single, continuing series!) of coin tosses, the probability that you get a run of heads at least half as long as the overall length of the series (e.g. ttththtHHHHHHH) is always > 0, but it is not guaranteed to happen, no matter how many chances you give it. Even if the number of coin tosses is infinite (whatever that might mean).
Interestingly, I read the original quote differently from you: I thought the intent was to say “any bloody thing will be brought up in a discussion eventually, if it is long enough, even really obscure stuff like G.I.T.”, rather than “Gödel is brought up way too often in philosophical discussions”. What did you really mean, nshepperd?
It was the latter. Also I am assuming that you haven’t heard of Godwin’s law which is what the wording here references.
… any event for which you don’t change the epsilon such that the sum becomes a convergent series. Or any process with the Markov property. Or any event with a fixed epsilon > 0.
That should cover just about any relevant event.
Explain.
The Law of Large Numbers states that the average of a large number of i.i.d. variables approaches its mathematical expectation. Roughly speaking, “big samples reliably reveal properties of the population.”
It doesn’t state that “everything can happen in large samples.”
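The distinction the subthread is circling (essentially the two Borel–Cantelli regimes) can be made concrete with a short numeric sketch. This is a hypothetical illustration, not from the original discussion; it assumes independent trials, and `prob_at_least_once` is a made-up helper name:

```python
def prob_at_least_once(ps):
    """P(event happens at least once), assuming independent trials
    where trial i succeeds with probability ps[i]."""
    p_never = 1.0
    for p in ps:
        p_never *= (1.0 - p)
    return 1.0 - p_never

# Fixed epsilon > 0 per trial: the probability of at least one
# occurrence approaches 1 as the number of trials grows.
fixed = prob_at_least_once([0.01] * 10_000)

# Shrinking per-trial probabilities whose sum converges (here 2^-i):
# the probability stays bounded away from 1, however many trials.
shrinking = prob_at_least_once([2.0 ** -i for i in range(2, 10_000)])
```

With the fixed epsilon, `fixed` is numerically indistinguishable from 1; with the convergent-sum epsilons, `shrinking` settles around 0.42 and never reaches 1, which is the loophole the “it’s gonna happen eventually” claim glosses over.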
Thanks. Memory is more fragile than I thought; wrong folder. Updated.
“How do you not have arguments with idiots? Don’t frame the people you argue with as idiots!”
-- Cat Lavigne at the July 2013 CFAR workshop
If idiots do exist, and you have reason to conclude that someone is an idiot, then you shouldn’t deny that conclusion—at least if you subscribe to epistemic primacy: that forming true beliefs takes precedence over other priorities.
The quote is suspiciously close to being a specific application of “Don’t like reality? Pretend it’s different!”
That quote summarizes a good amount of material from a CFAR class; presented in isolation, its intended meaning is not as clear.
The idea is that people are too quick to dismiss those they disagree with as idiots, without really forming accurate beliefs, or even real anticipation-controlling beliefs. So if you find yourself thinking that the person you are arguing with is an idiot, you are likely to get more out of the argument by trying to understand where the person is coming from and what their motivations are.
Having spent some time on the ’net I can boast of considerable experience of arguing with idiots.
My experience tells me that it’s highly useful to determine whether the one you’re arguing with is an idiot or not as soon as possible. One reason is that it makes it clear whether the conversation will evolve in an interesting direction or in the kicks-and-giggles direction. It is quite rare for me to take an interest in where a ’net idiot is coming from or what his motivations are—because there are so many of them.
Oh, and the criteria for idiocy are not what one believes or whether his beliefs match mine. The criteria revolve around the ability (or inability) to use basic logic, a tendency to hysterics, competency in reading comprehension, and other things like that.
Yes, but fishing out non-idiots from, say, Reddit’s front page is rather futile. Non-idiots tend to flee from idiots anyway, so just go where the refugees generally go.
LW as a refugee camp… I guess X-D
That can be a useful method of learning. Pretend it’s different, act accordingly, and observe the results.
This is more to address the common thought process “this person disagrees with me, therefore they are an idiot!”
Even if they aren’t very smart, it is better to frame them as someone who isn’t very smart than with the directly derogatory term “idiot.”
(Certainly not my criterion, nor that of the LW herd/caravan/flock, a couple stragglers possibly excepted.)
I think you missed a trick here...
The term ‘idiot’ contains a value judgement that a certain person isn’t worth arguing with. It’s more than just seeing the other person as having an IQ of 70.
Trying to understand the world view of someone with an IQ of 70 might still provide for an interesting conversation.
Except that often it can’t be avoided, or is “worth” it if only for status/hierarchy-squabbling reasons (i.e. even when the arguments’ contents don’t matter).
That’s why it’s not a good idea to think of others as idiots.
Indeed, just as it can be smart to “forget” when you have a terminal condition. The “pretend it’s different” from my ancestor comment sometimes works fine from an instrumental rationality perspective, just not from an epistemic one.
Whether someone is worth arguing with is a subjective value judgement.
And given your values you’d ideally arrive at those through some process other than the one you use to judge, say, a new apartment?
I think that trying to understand the worldview of people who are very different from you is often useful.
Trying to explain ideas in a way that you never explained them before can also be useful.
I agree. I hope I didn’t give the impression that I didn’t. Usefulness belongs to instrumental rationality more so than to epistemic rationality.
That’s … not quite what “framing” means.
I predict the opposite effect. Framing idiots as idiots tends to reduce the amount that you end up arguing (or otherwise interacting) with them. If a motivation for not framing people as idiots is required look elsewhere.
Qiaochu_Yuan
I really want to see the context for this.
http://lesswrong.com/lw/g9l/course_recommendations_for_friendliness/#comments
“Taking up a serious religion changes one’s very practice of rationality by making doubt a disvalue.” ~ Orthonormal
-- Scott Alexander, On first looking into Chapman’s “Pop Bayesianism”
-- NancyLebovitz
You can find the comment here, but it is even better when taken completely out of context.
-Robin Hanson, in a Bloggingheads.tv conversation with Daniel Sarewitz. Sarewitz was spending a lot of time criticizing naive views which many smart people hold about human enhancement.
Experience is the result of using the computing power of reality.
-- Roland
Quoting yourself is probably a bit too euphoric even for this thread.
Eliezer Yudkowsky, My Childhood Role Model
Not a very advanced idea, and most people here probably already realised it—I did too—but this essay uniquely managed to strike me with the full weight of just how massive the gap really is.
I used to think “human brains aren’t natively made for this stuff, so just take your biases into account and then you’re good to go”. I did not think “my god, we are so ridiculously underequipped for this.”
Perhaps the rule should be “Rationality Quotes from people associated with LessWrong that they made elsewhere”, which would be useful, but not simply duplicate other parts of LW.
I think the rule should be simply the complement of the existing Rationality Quotes rule, so every good quote has a home in exactly one such place.
How about a waiting period? I’m thinking that quotes from LW have to be at least 3 years old. It’s a way of keeping good quotes from getting lost in the past while not having too much redundancy here.
I think three years is too long. I would imagine that there are a large number of useful quotes that are novel to many users that are much less than three years old.
Personally I would say we should just let it ride as is with no restrictions. If redundancy and thread bloat become noticeable issues then yeah, we might want to set up a minimum age for contributions.
This would be ideal. I like the notion of having a place for excellent rationalist quotes but like having the “non-echo chamber” rationality quotes page too.
I think let’s see what happens.
It is tempting but false to regard adopting someone else’s beliefs as a favor to them, and rationality as a matter of fairness, of equal compromise. Therefore it is written “Do not believe you do others a favor if you accept their arguments; the favor is to you.” -- Eliezer Yudkowsky
— Eliezer Yudkowsky, Creating Friendly AI
— Eliezer Yudkowsky, Creating Friendly AI
— Steve Rayhawk, commenting on Wei Dai’s “Towards a New Decision Theory”
— Steve Rayhawk
-Luke, Pale Blue Dot
Stirring quotes from this video about the Singularity Institute (now MIRI).
IMO synthetic biology constitutes a third domain of advancement—the future of the living world
Isn’t that a subset of the material world? I imagine nanotechnology is going to play a part in medicine and the like too, eventually.
Of course, more than one thing can be about the future of the somethingsomething world.
Anything is a subset of another thing in one dimension or another.
EY—Interlude with the Confessor
EY is right by contemporary theories:
Though he’s making a very different point, I’d like to point something else out inspired by this piece that I do not feel would fit in with the narrative at the generic thread.
In my opinion, violence against men, or intimate-partner violence as a gender-neutral construct, is equally important but more neglected, and yet, seen from a more neutral standpoint, as tractable as violence against women.
To satisfy anyone’s curiosity, I identify neither as a feminist, nor as a men’s rights activist, nor as a humanist, but as a rationalist.
kamenin on Collapse Postulates
Eliezer Yudkowsky