Rationality Quotes June 2012
Here’s the new thread for posting quotes, with the usual rules:
Please post all quotes separately, so that they can be voted up/down separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
Do not quote yourself.
Do not quote comments/posts on LW/OB.
No more than 5 quotes per person per monthly thread, please.
Razib Khan
E. T. Jaynes, “Probability Theory: The Logic of Science”
I recall a math teacher in high school explaining that often, in the course of doing a proof, one simply gets stuck and doesn’t know where to go next, and a good thing to do at that point is to switch to working backwards from the conclusion in the general direction of the premise; sometimes the two paths can be made to meet in the middle. Usually the step where the two paths join involves doing something completely mystifying, like dividing both sides of an equation by the square root of .78pi.
“Of course, someone is bound to ask why you did that,” he continued. “So you look at them completely deadpan and reply ‘Isn’t it obvious?’”
I have forgotten everything I learned in that class. I remember that anecdote, though.
The standard proof of the Product Rule in calculus has this form. You add and subtract the same quantity, and then this allows you to regroup some things. But who would have thought to do that?
--Richard Hamming
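For reference, the add-and-subtract trick Hamming is pointing at looks like this (a standard textbook derivation, not something posted in the thread; it assumes g is continuous at x, which follows from differentiability):

$$ \begin{aligned} (fg)'(x) &= \lim_{h\to 0}\frac{f(x+h)g(x+h)-f(x)g(x)}{h} \\ &= \lim_{h\to 0}\frac{f(x+h)g(x+h)-f(x)g(x+h)+f(x)g(x+h)-f(x)g(x)}{h} \\ &= \lim_{h\to 0}\left[\frac{f(x+h)-f(x)}{h}\,g(x+h)+f(x)\,\frac{g(x+h)-g(x)}{h}\right] \\ &= f'(x)\,g(x)+f(x)\,g'(x). \end{aligned} $$

The middle line is the mystifying step: the quantity f(x)g(x+h) has been added and subtracted so the difference quotient splits into two recognizable pieces.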
IIRC there was an xkcd about that, but I don’t remember enough of it to search for it.
EDIT: It was the alt text of 759.
Note that xkcd 759 is about something subtly different: you work from both ends and then, when they don’t meet in the middle, try to write the “solution” in such a way that whoever’s marking it won’t notice the jump.
I know someone who did that in an International Mathematical Olympiad. (He used an advanced variant of the technique, where you arrange for the jump to occur between two pages of your solution.) He got 6⁄7 for that solution, and the mark he lost was for something else. (Which was in fact correct, but you will appreciate that no one was inclined to complain about it.)
Is 759 the one you are thinking of? The alt-text seems to be relevant.
Yes.
An anecdote concerning von Neumann, here told by Halmos.
This is also why I don’t trust poets who claim that their works spring to them automatically from the Muse. Yes, it would be very impressive if that were so; but how do I know you didn’t actually slave over revisions of that poem for weeks?
It’s “Jaynes.”
Fixed. Thanks.
Does anyone have a link to an ebook of this book?
libgen.info has a variety of versions.
Thank you! Looking forward to reading.
Honestly, I think PT:TLoS is probably best for those who already understand Bayesian statistics to a fair degree (and remember their calculus). I’m currently inching my way through Sivia’s 2006 Data Analysis: A Bayesian Tutorial and hoping I’ll do better with that than Jaynes.
I think PT:TLoS is probably best for those who understand frequentist statistics to a fair degree. He spends a whole load of the book arguing against them, so it helps to know what he’s talking about (contrary to his recommendation that knowing no frequentist statistics will help). The Bayesian stuff he builds from the ground up; calculus is all that’s needed to follow it.
Jaynes begins it with a caution that this is an upper-undergraduate to graduate-level text. Not knowing a great deal of probability in the first place, I stopped reading and picked up a more elementary text. What do you think are the core prerequisites for reading Jaynes?
I have no idea—I’ll tell you when I manage to satisfy them!
I’d agree, with the exception that chapters one and five (and maybe other sections) are great for just about anybody to get a qualitative understanding of Jaynes-style bayesian epistemology.
Ah, yeah—chapter 5 is pretty good. (I recently inserted a long quote from it into my Death Note essay.)
.
Upvoted for the “related”.
http://www.youtube.com/watch?v=I12H7khht7o&feature=player_embedded
Video by Fallon, a scientist who found out that he was a sociopath—he says it doesn’t bother him that everyone he knew said he was bad at connecting emotionally, but he does seem motivated to work on changing.
I really wish we had brain scans of this guy at 19 and at 25. I want to see which areas were developed!
Yes I can. Speak for yourself (Buck).
I read it more charitably, as being isomorphic to Schopenhauer’s “A man can do as he wills, but not will as he wills.” The idea is that you are feeling something and not something else, and regardless of what you are feeling you can and should do right.
The distinction may be between setting up the preconditions for a feeling (which has some chance of working) and trying to make a feeling happen directly (which I think doesn’t work).
Making feelings happen directly isn’t easy. It’s a skill. Given the demographic on this website, there’s a good chance that a lot of the readers can’t control their feelings. Most of the people here are skilled at rationality but not that skilled at emotional matters.
It’s a bad idea to generalise your own inability to control your feelings to other people.
Can you describe the process of making feelings happen directly?
Directly is a tricky word. In some sense you aren’t doing things directly when you follow a step by step process.
If you however want a step by step process I can give it to you (but please don’t complain that it’s not direct enough):
1) You decide which emotion you want to feel.
2) You search in your mind for an experience when you felt the emotion in the past.
3) You visualize the experience.
4) In case that you see yourself inside your mental image, see the image as if you are seeing it through your own eyes.
5) If the image is black and white, make it colored.
6) Make the image bigger.
7) Locate the emotion inside your body.
8) Increase the size of the emotion.
9) Get it moving.
10) Give it a color.
11) Increase movement and size as long as you want.
That’s the way of doing it I learned at day two of an NLP seminar.
I’m not actually sure of what you mean by ‘directly’ here. Which of the following does ‘setting up the preconditions’ include:
a) changing breathing patterns, etc.
b) focusing thought on particular events, etc.
c) rationalising consciously about your emotional state
d) thinking something like ‘calm down, DavidAgain, calm down, calm down’
I doubt many people can simply turn a powerful emotion on or off, although I wouldn’t rule it out. I read (can’t find link now...) about a game where the interface was based on stuff like level of ‘arousal’ (in the general sense of excitement), which you had to fine tune to get a ball to levitate to a certain level or whatever. I’d be surprised if someone played that a lot with high motivation and didn’t start to be able to jump directly to the desired emotional state without intermediary positions. And being able to do so obviously has major advantages in some more common situations (e.g. being genuinely remorseful or angry when those responses will get the best response from someone else and they’re good at reading faked emotion, or controlling panic when the panic-response will get you killed)
This game sounds awesome, I am going to try and search for it so I can test this.
A while (i.e. about a decade) ago, I read about a variant of Tetris with a heart rate monitor in which the faster your heart rate was the faster the pieces would fall.
Looks like there are a few pc input devices on the market that read brain activity in some way. The example game above sounds like this Star Wars toy.
Well, what works for someone may not work for someone else. (Heck, what works for me at certain times doesn’t work for me at other times.)
Really? Are you sure you’re not just making yourself believe you feel something you do not?
Yes. It’s not an unusual ability to have. It can take a long time and concerted effort to develop desired control over one’s own feelings but it is worth it.
Yes.
I’m sure. Certain feelings are easier to excite than others, but still. All it takes is imagination.
A fun exercise is try out paranoia. Go walk down a street and imagine everyone you meet is a spy/out to get you/something of that sort. It works.
(Disclaimer: I do not know if the above is safe to actually try for everyone out there.)
Anger is pretty easy, too. All I have to do is remember a time I was wronged and focus on the injustice of it. Not very fun, though.
I’m not sure it would work for me, knowing that (e.g.) setting my watch five minutes early doesn’t work to make me hurry up more even though it does work for many people I know.
On the other hand, I can trigger the impostor syndrome or similar paranoid thoughts in myself by mulling over certain memories and letting the availability heuristic give them much more weight than they should have.
What in the actual fuck? This is the exact opposite of what is rational: “Relinquish the emotion which rests upon a mistaken belief, and seek to feel fully that emotion which fits the facts. If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm. Evaluate your beliefs first and then arrive at your emotions. Let yourself say: “If the iron is hot, I desire to believe it is hot, and if it is cool, I desire to believe it is cool.” Beware lest you become attached to beliefs you may not want.”
Emotions are generally considered instinctive, not deliberated. You could argue that anything instinctive should be thrown out until you have a chance to verify that it serves a purpose, but brains are not usually that cooperative. Knowing you have an emotion which you do not want (I get angry when people prove that I am wrong about something which I have invested a lot of time thinking about), and being able to destroy it, are two different things. If you are able to act in accordance with your best plan instead of following instincts at all times, and run error-correction routines to control the damage of an unwanted emotion on your beliefs, you are doing something rational.
The latter half of the quote is fine, but the first half is completely wrong and is the opposite message of what rationality says.
You seem to be suffering from is-ought confusion. Yes, it would be nice to eliminate the irrational emotion, but this isn’t always possible or requires too much effort to be worthwhile.
.
Technically false. Consider adding an extra word in there to indicate scope. Even a “the” between “make” and “water” would do. (Making an unspecified amount of water is far easier than irrigating a field from a nearby river.)
.
Philosophy Bro
Upvoted for introducing me to one of the funniest blogs I’ve ever seen. The ironic writing style is brilliant:
There’s something deep here. Meanings aren’t just in your head… but whose head are they in anyway?
This whole notion of Analytic vs Continental tradition in philosophy boggles my mind and lowers my already low opinion of philosophy in general even further. If two prominent schools cannot even agree on the basic ideas, those ideas are not worth agreeing on.
“This whole notion of evolution vs creationism tradition in biology boggles my mind and lowers my already low opinion of biology in general even further. If two prominent schools cannot even agree on the basic ideas, those ideas are not worth agreeing on.”
I don’t agree with the parent of your post, but I don’t think your example is responsive. He is using the struggle between A and B to argue that both are worthless. Your rephrasing relies on the fact that B is already known to be worthless. In short, the lack of parallelism means your criticism of the structure of his argument is misplaced.
Jack is probably right on the merits.
Even if you don’t know anything about A and B, the fact that there is a struggle doesn’t mean there isn’t one side that’s clearly right. This debate has been hashed out a lot in the literature on moral realism, and I think most people ultimately find the naive argument from disagreement unconvincing.
In any case, I think continental philosophy is already known to be worthless.
What?
EDIT: I am not aware that biology has a creationism tradition.
Larks can speak for themself, but I think their analogy was
analytic philosophy : continental philosophy :: (evolutionist) biology : creationism
so what seems to an outsider like a disagreement between schools is actually a disagreement between people doing “real” philosophy and goofy people doing something that they call philosophy.
(This seems overstated at best to me.)
And you are not goofy the moment you start doing any kind of philosophy that is not ‘reduction to cognitive algorithms’ or ‘consequentialist utilitarianism’? I see philosophy mostly as people trying to sound clever.
ETA: I have nothing personally against Philosophers, I just think philosophy as a well-respected field has taken a few too many wrong turns.
(Disclaimer: My knowledge of philosophy is at the 101 level.)
I do think a lot of what passes for philosophy is bogus. But the bogus philosophers still have “Department of Philosophy” on their university letterhead. Meanwhile: To a first approximation, all biologists believe in evolution.
Hm. I’m not sure how many biologists there are, but my guess is this allows for uncertainty on the order of a million biologists. Is the situation really that bad? I would have guessed that at least to a third approximation all biologists believe in evolution.
Are you reading “to a first approximation” as “to one significant figure”? I thought it meant something like “using the lowest order function which is an accurate fit to the data”. So, to a first approximation, pi is roughly 22⁄7, and to a first approximation, the distance to Earth’s horizon in kilometers is 3.57 times the square root of the height above sea level in meters.
To say that “to a first approximation, all biologists believe in evolution” is, by this definition, to say that the fraction of biologists that don’t is so small that it is not easily measured. I believe this to be the case because that fraction is so small that it is significantly affected by the choice of definition of “biologist”.
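As a concrete check on that second example (my own arithmetic, not from the thread): at a height of 10 m above sea level,

$$ d \approx 3.57\sqrt{10}\ \text{km} \approx 11.3\ \text{km}, $$

which agrees with the exact geometric distance $\sqrt{2Rh+h^2} \approx 11.29$ km for $R \approx 6371$ km; the correction is a higher-order effect, which is the sense of “first approximation” being described.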
Yes. Perhaps incorrectly.
Yes, but my “nth approximation” module only has settings for “zeroth” and “first”.
Why was this downvoted? It points out a way in which Larks’s example is not analogous to shminux’s.
My guess is that I gained some notoriety here and my comments tend to get a few downvotes because of this, rather than because of their content. Which tells me that I have to phrase my replies much more carefully. Still working on it. (If whoever silently downvoted some of my recent comments think that this guess is out to lunch, I’d greatly appreciate their feedback, here or in PM.)
Hi, shminux. I recently downvoted this comment of yours. I did recognize you, but that’s from seeing you in the lesswrong IRC channel, where you make a significant portion of the interesting discussion, not from lesswrong.com, where I don’t generally look at the authors of comments or posts unless I’m having trouble following a discussion or I feel that it would be prudent to associate the author with their comment or post (for instance, I learned the name of user Nisan after they posted Formulas of Arithmetic That Behave Like Decision Agents, which contained a splendidly unusual amount of math for lw). I was particularly surprised by the low quality of your arguments in that thread, given my past experience with you. Still, I disliked one of your comments first, and saw your name second.
I also responded to one of your comments in that thread, here. I didn’t further downvote your comments, because I make a point of not downvoting people whom I’ve engaged in discussion, just as a point of argumentative hygiene. Absent that, I might have downvoted every comment of yours that I read, without reply. I don’t have any problem downvoting silently. It might be a polite norm to give feedback to any post or comment of low quality, but it is not a good use of my time in general, certainly not for that thread, in which many people were responding to you with comments to the effect that your conclusions were sloppy or informal. If other people behave as I do, then I would guess it was not one person who downvoted you, but a few people who did, and that the downvotes were given on the basis of your comments, rather than on who you are.
Thank you for your feedback! Upvoted. Though I don’t believe I ever commented on the thread you mention. Maybe you mean some other thread. I’d also appreciate if you elaborate on what in particular constitutes “low quality” for you.
I wasn’t one of the silent downvoters, but I went ahead and downvoted without being silent because your comment just misunderstands Larks’s. He did not even implicitly claim that there is a creationism tradition in biology, but rather an ongoing, publicized debate between evolution and creationism, which is analogous to analytic vs. continental philosophy, if one is laughably wrong but still famous for whatever reason.
I guess I fail to see an analogy between a debate between two factions in what is supposedly a science and that of science vs religion. In the latter case, it is easy to tell who the loony is, while in the former the only conclusion I can make is that they both are.
What’s your algorithm for telling who the loony is? Look for the one not wearing a lab coat?
Hmm, if you need help figuring out who the loony is in the evolution/creation debate, this comment thread is not the place to set things straight.
I didn’t say anything about my method of telling the loony. My point was that your method of telling the loony seems to boil down to who has high status/is wearing a lab coat.
Yeah, there does seem to be some amount of karmassination going on here.
That’s OK, it’s a risk you run if you stick your neck out.
I think the philosophy bro also overstates the disagreement. I’m in a philosophy department myself, and I know of no one at the graduate student level or above who thinks there’s a serious division along analytic vs. continental lines. Part of that, though, is that much of what was called continental philosophy has now become literary criticism, etc. Part of it is that what got called ‘analytic philosophy’ 70 years ago isn’t really around any more.
This is by no means a consensus view, but I think it’s a mistake to think of philosophy as something which produces results from a common theoretical basis. Philosophy can seem like a science, especially as a result of academia’s way of organizing things, but a lot of it doesn’t really resemble one in practice. There are trends in discussion, but philosophy has no fixed subject matter. There are schools of thought on various issues, but no unifying theoretical framework or methodology.
Now, these facts might justifiably cause you to have a low opinion of philosophy. But it’s worth considering that your standards for intellectual activity are being misapplied here: maybe philosophy isn’t supposed to be like a science.
I don’t mind that approach, as long as philosophy is treated as art. Then one would simply appreciate the beauty of its best masterpieces, rather than argue which one is more right. Which also means that it has absolutely nothing to do with rationality.
But we have an even poorer comparison there. Works of philosophy are presented, reviewed, and discussed as arguments, not as aesthetic artifacts. I know of no philosophers who think the aesthetics of an argument is anything but a secondary consideration, and no part of philosophical training looks like the training an author of novels or poet might get. Philosophers are expected to be clear and engaging, but not artists. I’d say it has about as much in common with art as does physics or mathematics.
With physics, we could be having a conversation about some theory or experiment, but we wouldn’t be doing physics. But just having this conversation about philosophy is itself philosophy. We’re doing philosophy, right now, in exactly the same sense professionals do. And one of the things we’re doing is arguing about what the right thing to think is, and we’re holding ourselves to standards of rationality. So there it looks a little bit like science. On the other hand, neither of us is deploying any fixed method, and we’re not trying to work out the implications of a specific theory of intellectual activity we both accept. So there it doesn’t seem like a science. What is it that we’re doing, and how are we doing it?
So, does this mean that you agree with my assessment of philosophy in the original comment (currently downvoted to −10)?
Well, I don’t have a low opinion of my chosen profession: I’m very happy with what I do. But I do have a low opinion of some philosophers and some philosophical work, along with a high opinion of some others.
My original reply was just intended to point out that a) the analytic/continental divide is no longer a significant part of the academic philosophical world, and b) that you don’t have good reasons to compare philosophy to an academic science (or to a form of art).
As to how we should think of philosophy, I think we have an easy way to approach the question: how do you think about what you’re doing right now? Do you take yourself to be producing a work of art? Do you take yourself to be engaging in scientific theorizing or experiment of some kind? What methods are you applying? What standards are you holding yourself to?
I’m asking these questions in seriousness, not as a rhetorical move. I want to know your answers. I don’t think I’m presently engaged in either science or art. I consider myself held to standards of honesty and sincerity, and to producing good and convincing arguments. I think that if what I’m doing right now has no relation to the truth, then what I’m doing is in vain. I also don’t think I have an answer to the question ‘what is philosophy and how should it be treated?’
Well, right now I’m writing some rather routine software I am paid to write, and I try to do a decent job, but it is by no means art or science, though I do learn a thing or two now and then. When I was doing actual research (calculations and simulations in General Relativity), it was no art, either, but it did produce some non-negligible results, though nothing earth-shattering. Unfortunately, it was not quite at the level of an experimentally falsifiable model, which would be a fair standard for me.
Ah, I’m sorry, I was unclear. I mean ‘right now’ as in ‘the activity of having a conversation with me’ or at any rate ‘the activity of having conversations roughly of this kind’.
As noted in the link, this is much more of a spectrum and much less of a knife fight than it used to be.
(The broader methodological point extends well beyond philosophy, or was at least quoted with that intention.)
Analytic and Continental philosophers rarely have ideas on the same subjects. And it’s more like one prominent school and English departments.
Andrew Vickers, What Is A P-Value, Anyway?
A Softer World
-Eric Hoffer
--Alan Alda, in an interview on The Colbert Report, telling the story that gave rise to The Flame Challenge. It has been mentioned on LW before, but I thought it was worth posting here as a perfect illustration of a Teacher’s Password.
-Charles Babbage
Only if you’re using a consistent estimator. (Yes, that’s a frequentist concept, but the same sorts of problems show up in a Bayesian context once you try to learn nonparametric models...)
On the other hand:
A little learning is a dangerous thing;
Drink deep, or taste not the Pierian spring:
There shallow draughts intoxicate the brain,
And drinking largely sobers us again.
Alexander Pope
I’d heard that quote before, but this was the first time I recognized the referent for Mount Stupid.
A more general but less witty form of
–Fred Mosteller
You are a little late to that party.
How’s about this?
— Andrew Gelman
Phil Plait, Don’t Be A Dick (around 23:30)
The former is the most powerful method I know of for the latter. As elspood mentioned, it obviously isn’t the victims in particular that will be persuaded.
Wouldn’t killing be better described in this context as coercion? Which feels distinct from persuasion, to me.
On humans it does both. Humans are persuaded by power, not merely coerced. (Being persuaded like that is a handy ‘hypocrisy’ skill given bounded cognition.)
Voted up for the link to the video, which is a good explanation for why dumping hostility on people is not an effective method of convincing them.
FWIW, those that are ‘hostile’ don’t generally believe they’re going to convince the people they’re being hostile to. They’re after the peanut gallery; the undecided.
The effect on the peanut gallery is hard to track.
It’s at least as likely that dumping hostility on outsiders is a way of maintaining group cohesion among those who have already identified themselves with the issue.
As you say, in-group signalling is a more likely explanation—hostility is widely unpersuasive to those who are actually undecided.
I don’t think you can properly isolate these two strategies; there is a reason peace so frequently evolves into war: intelligent, rational people living in a peaceful time can frequently reach their goals most easily by creating a violent environment. Diplomacy is safer, easier, and generally something I prefer, but violence can influence many more people much faster.
Tycho
Alexandre Borovik, quoting an unidentified colleague, paraphrasing another unidentified source, possibly Martin Gardner quoting a letter he got.
It seems to make the same point as the Parable of the Dagger.
(I.e.: logic games are fun and all, but don’t expect things to work that way in the real world. Or: it’s valuable to know the difference between intelligent thinking and smart-assery.)
-- Carl Sagan, 1987 CSICOP Keynote Address
I don’t think that the idea that politicians don’t change their position has much basis in reality. There are a lot of people who complain about politicians flip-flopping.
When a politician speaks publicly, he usually doesn’t speak about his personal decision but about a position that’s a consensus of the group for which the politician speaks. He might personally disagree with the position and try to change the consensus internally. It’s still his role to be responsible for the position of the group to which he belongs. In the end the voter cares about what the group of politicians do. What laws do they enact? Those laws are compromises, and the politicians stand for the compromise even when they personally disagree with parts of it.
A scientist isn’t supposed to be responsible for the way his experiments turn out.
And if you take something like the Second Vatican Council there’s even change of positions in religion.
Yes, politicians flip-flop, and they take heat for it. And religious organizations do revise their doctrines from time to time.
But they don’t like to admit it. This shows itself most clearly in schisms, where it’s obvious at least one party has changed its stance, yet both present the other side as the schismatic one (splitters).
Thus even though they have changed, they do not “update”—or they do, but then they retcon it to make it look like they’ve always done things this way. (Call it “backdating,” not updating.) This is what the superstates do in 1984.
Coming up with real examples is trivial. Just find a group that has ever had a schism. That’s basically every group you’ve heard of. Ones that come to mind: Marxists, libertarians, Christians, the Chinese Communist Party. Triggering issues for the above groups include the nature of revolution, the relationship between rights and welfare, the Trinity, the role of the state in the economy...
How many scientific papers contain the lines: “In the past the authors of this papers were wrong about X, but they changed their opinion because of Y”?
In short, not nearly enough.
None, because journals are really careful about proof-reading.
Do you mean:
1) Because journals are really careful about proof-reading and there are no errors in journal articles?
2) Because journals are really careful about proof-reading, they delete every sentence where a scientist says that “I’ve been wrong in the past”?
3) Some other way in which careful proof-reading removes the possibility that “I’ve been wrong in the past” appears in a journal article?
It was grammar nitpicking. “The authors where wrong”.
I had guessed it must be something like that, but I failed to see the typo in the grandparent and changed my mind to the parent being some different joke I didn’t get or something. (I’ve retracted the downvote to the parent.)
Also “this papers”.
Inspiring, but not true.
In what respect is it not true? I’ve certainly observed it. I haven’t observed it every day, but most scientists in the world are not under my observation.
If Sagan had actually looked for it happening in politics and religion, he’d have found plenty of examples. Especially in the latter.
If it really does happen in politics and religion at a comparable rate, then the quote is certainly misleading, but I rather doubt that that is the case. Sagan did not say that it never happens in politics or religion, only that he could not recall an instance.
Charles Dickens, David Copperfield (HT Cafe Hayek.)
A reasonable start, but quite insufficient for the long run. Sixpence savings on twenty pounds income is not going to insulate you from disaster, not even with nineteenth-century money.
A disaster is an abrupt fall in income or abrupt increase in expenditures, so it falls under the general claim.
In fact, it may not even outpace inflation, much less the opportunity cost relative to the risk-free rate.
I would have thought that, having decided to invest X amount of money per unit time, what matters for beating inflation is the interest you can get on it, not the size of X. Sixpence will fail as savings because it’s 0.125% of your annual income, not because of inflation; even if you assumed the value of money was perfectly stable, it would take you a long time to build up any sort of reserve at that speed.
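As a rough sketch of how little that sixpence surplus from the Dickens quote amounts to even with compounding (the interest rates and the 40-year horizon below are my own illustrative assumptions, not figures from the thread):

```python
# Illustrative only: sixpence (6d) saved out of a 20-pound annual income,
# compounded yearly at a few assumed interest rates.
ANNUAL_INCOME = 20.0      # pounds
ANNUAL_SAVING = 6 / 240   # sixpence, at 240 pence to the pound

def future_value(saving, rate, years):
    """Future value of saving the same amount at the end of each year."""
    if rate == 0:
        return saving * years
    return saving * ((1 + rate) ** years - 1) / rate

for rate in (0.0, 0.03, 0.05):
    fv = future_value(ANNUAL_SAVING, rate, 40)
    print(f"at {rate:.0%}: £{fv:.2f} after 40 years "
          f"({fv / ANNUAL_INCOME:.1%} of one year's income)")
```

Even at 5%, the reserve after forty years is only a few pounds, a small fraction of one year’s income, which is the point being made above.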
Inflation in England in this period was, as far as I know, remarkably low (<1%), even experiencing periods of apparent deflation (whether it beat Roman Egypt is another question). Sixpence compounding might go a decent way. See also Gregory Clark, A Farewell to Alms:
--Nicolás Gómez Dávila, source
I liked the quote, once I figured out how all the negatives interacted with each other.
Too true.
Patrick McKenzie, the guy who gets instrumental rationality on a gut level.
More from the same source:
Obviously I need to figure out how to start charging for my website!
I’ve had the impression that you’ve been selling yourself short for quite some time.
Maybe you can start by following Patrick’s example and offering some of the choice data you collect and analyze to the people subscribing to your mailing list. You can also figure out who might be interested in the information you collect (a cool project in itself), and how much it would be worth to them.
I wonder if a donate button at the end of each article, tied with a question along the lines of “How valuable was the article you just read?”, would be effective. (You could even set it up so that you can track the amount donated by article, and use that to guide future research- I’m not sure how effective that would be, since that depends on how many alternatives you have to pick from in considering new research topics.)
Well, I do have donation stuff set up; last week I moved the PayPal button from the very bottom, post-footnotes (where the Bitcoin address remains), to the left sidebar, to see if that would help. (So far it hasn’t.)
A rating widget is a good idea; I’m messing around with some but I’m not seeing any really good ones hosted by third-parties (static site, remember).
I am completely undisciplined and I do this stuff as the whim takes me. A month ago I didn’t expect to learn how to do meta-analyses and run a DNB meta-analysis and 2 weeks ago I wasn’t expecting to do an iodine meta-analysis either; the day before Kiba hired me to write a Silk Road article, I wasn’t expecting that either...
There’s also a competition effect here. With thousands of free blogs, people don’t want to pay for yours or mine. They’ll just navigate to someone else’s, even if it isn’t quite as brilliantly insightful.
Indeed, that’s a problem. I like to think my content is pretty unique—no other site is as good a resource on dual n-back, no other site is as good a resource on modafinil, etc. - but that doesn’t amount to a hill of beans in this crazy old world.
A way to make real money is to sell to businesses. Do you have any content or service a 100+ person company might want?
Not that I’ve thought of so far.
Well, you have the ability to write articles of exceptionally high quality. They are concise, easy to read, very thoroughly researched, and always offer paths to learn more or elaborate on points of interest.
These sorts of reports are highly valuable to companies and I think you would be incredibly valuable as a knowledge consultant. Think Lisbeth Salander for technical subjects.
I do value your research and writings. I was thinking about offering to buy you a laptop because it sounded like you had an old POS that was hampering said research and writings, but then I decided that would be too weird.
I did have a POS, but in July 2010 I finally bit the bullet and bought a new Dell Studio 17 laptop that has since worked well for me. (The hard drive died a few months ago and I had to replace it, almost simultaneously with my external backup drive dying, which was very stressful, but Dell doesn’t make the hard drives, so I write that off as an isolated incident.)
Ah, then I only need to buy you a 2-year backblaze subscription, that’s far cheaper.
Backblaze sounds great, but they don’t have a Linux client.
tarsnap it is, then.
Tarsnap is cool—I like Colin’s blog and stuff like scrypt. (The latter was relevant to one of my crypto essays.)
For the record: khafra actually did donate to me and wasn’t just cheap signaling. Well done!
Wow. Great stuff khafra. I hereby grant you some portion of the respect granted to gwern for his nootropics research!
Well, that’s a pretty good prestige-per-dollar return, then; thanks! (And thanks to gwern, and keep up the good work).
Crashplan does.
I will sing the praises of git and vim, but I didn’t pay any money for them. He says extract a commitment, not necessarily a monetary commitment; I read half a book before I started using git, and vim took a lot of practice. So you could use more specialized terminology or something like that. git and vim are both very well-spoken of, and I probably wouldn’t have bothered to learn them if they weren’t. But I also don’t bother to spend money on things that don’t have a good reputation, if I haven’t had experience with them already. So, either way, requiring a commitment from the user turns away a lot of them.
(I’ve never read your website)
Probably won’t work very well. If you can program, you can make some money writing some useful software. You can write an app to make it easier for people to perform double blind experiments on their medications for example. People in general only pay for something they directly use.
FFS, how can people misremember who they voted for in an election with only two plausible candidates?
A large number of them may have not voted at all, but remember themselves doing so.
I suspect, with no data to back me up, that it is those who were ambivalent when they stepped into the polling booth who genuinely misremember. Others know they voted for the other guy, but want to be seen as one of the ‘winners’.
There are many U.S. elections I have voted in where there were two candidates for an office and I couldn’t tell you which one I voted for. Admittedly, no cases involving Presidential candidates; I’m usually pretty sure who I’m voting for in those cases.
Or the survey he’s referring to is biased. Seems hard for it not to be… did they knock on doors all across the country? If it’s based on mail or telephone responses, are people who voted for Obama more likely to respond to those?
Or, he’s misquoting the survey. If you were testing the hypothesis that people misremember voting for the winner, wouldn’t you sample a smaller area than the whole country, and then compare your results with the vote count from that area? Why would an experiment like that ever get a number meant to be compared with the whole country’s votes?
I suspect, with no data to back me up, that the latter class contains many more people than the former. (If I were that ambivalent, I wouldn’t vote for either major candidate at random; I would either vote for a minor candidate, or not vote at all. But I guess not everybody is like me.)
Wrong question. I’d say people who voted for the other guy remember, but aren’t so eager to respond to surveys.
-The Catholic Encyclopedia
What makes that one most interesting is its source.
I suspect that if the source was a less unexpected one, say Albert Einstein or Carl Sagan, the quote would seem obvious and uninteresting to LWers and its karma score would be less than half what it is.
This makes perfect sense in terms of Bayesian reasoning. Unexpected evidence is much more powerful evidence that your model is defective.
If your model of the world predicted that the Catholic Church would never say this, well… your model is wrong in at least that respect.
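To put toy numbers on that (the figures are mine, purely for illustration): suppose your model M assigned probability 0.01 to the Church ever publishing a line like this, while a rival model assigned it 0.5. In odds form, Bayes’ rule gives

$$ \frac{P(M\mid E)}{P(\neg M\mid E)} \;=\; \frac{P(E\mid M)}{P(E\mid \neg M)}\cdot\frac{P(M)}{P(\neg M)} \;=\; \frac{0.01}{0.5}\cdot\frac{P(M)}{P(\neg M)}, $$

a fifty-fold hit to M’s odds, whereas evidence both models expected equally would not have moved them at all.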
Well, I would have upvoted such a quote no matter who it was by.
Yes, an interesting question is how may readers will update their opinion of the Catholic church based on this.
I was not surprised by this, because I know many Catholics honestly try to be rational… of course only within the limits given by the Church.
They would have absolutely no problem with Bayesian updating; the only problem would be the Solomonoff prior. If you replace it by “the Catholic Church is always right” prior, you are free to update rationally on everything else and remain a good Catholic.
This is why Catholics don’t have a problem to accept e.g. evolution, as long as someone can provide an explanation how evolution can be compatible with “the Catholic Church is always right”. (A possible explanation could be e.g. that God created the first life forms; that evolution is a consequence of physical laws created by God, therefore any result of evolution is still indirectly created by God; and that humans are somehow an exception to this process, because even if their bodies are a result of evolution, they also have an immaterial soul created directly by God.)
I don’t believe theists would have any problem with the Solomonoff prior. Some ten-state two-symbol machine with a blank tape can be a God for all we could ever know, and then it could create us within its machine and do whatever it wants (and the souls could be just the indices it keeps on us).
You know who actually has a problem with the Solomonoff prior? People who understand it.
Isn’t this the wrong question? We’d want to know what proportion of ten-state two-symbol machines with blank tapes turned out to be gods.
In so far as that’s what we want, Catholicism still falls to being a huge conjunction of propositions.
Do we? I would think we would want to know what proportion of universes are created by ten-state two-symbol machines that are gods as opposed to ten-state two-symbol machines that are not gods.
That was implied by “proportion”.
One short god will suffice if the laws of physics require a substantially larger program. And for all we know they do.
edit: Also, there’s only what, 20^10 = about 10 trillion possible ten-state two-symbol machines? Maybe 9 old British billions after you eliminate non-universal machines. That’s less than the data in the physical constants we haven’t derived.
.
The other day a client sent me a new sighting of a bug I’d been stalking for a while. The new info allowed me to trap it between two repository revisions, flush it out of the diffs and stomp on the sucker. It did briefly feel kind of primal.
Paul Graham, What You’ll Wish You’d Known
Bit of a tangent, but something from that essay always bothered me.
Paul Graham
Robert Cialdini, Influence
It doesn’t seem to me that Vincent-as-described-by-Cialdini is someone with a passion for waiting at tables; especially not the sort that could also be described as a “passion for service”. If anything, he has a passion for exploiting customers, or something of the kind. I would expect someone with a genuine passion for table-waiting—should such a person exist—to be as reluctant to mislead customers as, say, someone with a passion for science would be to spend their life working for a partisan think tank putting out deliberately misleading white papers on controversial topics.
(To forestall political arguments: I am not implying that all think tanks are partisan, nor that all white papers put out by partisan think tanks are deliberately misleading.)
...and “Influence” goes onto my “to read” list.
Robert Cialdini, author of “Influence”
Also true of, say, OCD.
This speech was really something special. Thanks for posting it. My favorite sections:
And:
Great stuff.
-- James L. Sutter
Patricia Churchland
--Friedrich Nietzsche, The Gay Science #51
--Eugene Gendlin
Seek not to follow in the footsteps of men of old; seek what they sought. -Matsuo Basho, poet (1644-1694)
Seems like a good way to think of the “seek to succeed, not to be rational” idea.
Daniel Dennett
When I read the opening line I guessed he was going to go in the opposite direction—as Paul Graham probably would have.
I can see uses to both ways of simplifying one’s relationship with the rest of the universe.
Aren’t Graham and Dennett talking about different things entirely? Dennett is trying to help us understand better how materialism is compatible with having free will and a conscious self; his prescription here is to avoid a common pitfall, that of dismissing all “upwards” processing of perception and all “downwards” action-starting signals as “mechanical computing, not part of the self” and locating the Cartesian self at the zero-extension intersection of these two processes. It is better to think of the self as extended in both directions. When Graham says “keep your identity small”, he is talking about a different sense of “identity” and “small”, roughly “do not describe yourself with labels because you might become overly invested in them and lose objectivity and perspective”.
I now want to make up bumper stickers that read “What Would Paul Graham Do?”
Granted, I want to do other things that preclude doing so even more.
Wanting to associate your identity with a person, in part because they have a very good argument for why you shouldn’t associate your identity with things, and then doing something more important instead… there’s something almost poetic or ironic about it.
Poetic? Nice call.
On the plus side, at least it indicates that they aren’t so caught up in affiliation that they’re unable to ignore his dogmas when it isn’t useful to them.
This is only tangentially related, but:
It’s probably really important to notice when you feel a desire to signal affiliation with someone or something by purchasing paraphernalia or, e.g., getting a bumper sticker. Wanting to signal that you like something generally means that your identity has expanded to include that thing. This, of course, can be both a symptom and a cause of bias (although it isn’t necessarily so). See also all this stuff. Or, more concisely: “I want to buy a bumper sticker/t-shirt/pinup calendar/whatever” should sound an alarm and prompt some introspection.
(I’m not trying to imply that you have a bias towards Paul Graham, just making a general statement.)
Yeah, I agree with (at least the core of) this.
Of course, that’s why you want to identify with Paul Graham.
Looking briefly at a few sites specializing in custom bumper stickers, I estimate you could probably make and pay for some in half an hour to an hour. Do you want to do those other things that badly?
You know, it’s actually a really good question.
I think what’s true here, now that I’m considering it for more than five seconds, is that I don’t actually want to do this at all, I just think it’s a funny idea and wanted to share it, and I chose “I want to X” as a conventional way of framing the idea… a habit I should perhaps replace with “It would be funny to X” in the spirit of not misrepresenting my state to no purpose.
Yes, I figured as much. :)
How would Paul Graham approach it?
http://paulgraham.com/identity.html
Why?
Because if you don’t, you’ll fail to see what is doing all the thinking. You can’t strip a car of all its parts and still expect it to run; if you do, you’re left with saying “nothing is making the wheels turn”.
This comment and the quote make absolutely no sense to me. Splitting a mysterious thing like a “me” concept into less mysterious things like a smaller “me” and a “queryable reason store” is the heart of reductionism and explanation. Doing that doesn’t remove the wheels from the car, it just relabels the car into “wheels” and “smaller car which is also up for decomposition.” When you break the “car” concept down, you’re not left with nothing, or with wheels that turn on missing axles; you’re left with a bunch of parts that all work together, which were all parts of the original car but which all now have different names. Names like engine, exhaust manifold, spark plug, carburetor, windshield fluid, map of Florida, fiberglass, electron. We can talk about all of these things and never reference “car”. “Car” vanishes, but the actual car does not.
And at any point in that reduction, it’s possible (in principle, if not cognitively realistic) to draw a boundary around the parts to reintroduce the car concept. Whether I say “I am beliefs, desires, plans, intentions, wayfinding algorithms, multisensory categories, image schemas, a hippocampus, the concept of digital publishing, a lateral geniculate nucleus, some belief propagation and reinforcement learning, post-synaptic potentials, and everything else science knows about minds” or just “internal dialogue”, there’s nothing erroneous about a small self concept. And even if I don’t stop the reduction to draw a boundary, the imagery doesn’t “shrink back to a singularity”, it just bottoms out at physics.
I think you have misunderstood my point. The quote (and my comment) is not disputing reductionism, but rather the claim that the act of deconstructing the mind removes the person; one has to recognize that a person, or a car for that matter, consists of parts.
Agreed, I expressed myself poorly. By “strip” I meant “not include in the concept car”. Moreover, if you assign driving as a function of a car, and then reduce the car into parts, finding that the engine, wheels and so on are in fact the things that do the work, it is a fallacy to conclude “AHA, the car is not doing the driving, it’s the engine, wheels . . .” since the car = its parts. That is how many people with dualistic intuitions approach the mind.
That depends on what you include in your concept of self—we don’t want to turn this into a discussion about trees falling in the forest. But I was assuming that a lot of people have the same sense of self as I have; we are all human, after all. I think “shrink back to a singularity” is a metaphor, not a physical singular point.
Defining yourself down to nothing reduces your willingness to engage with the larger world. Mote-person doesn’t care so much about the loss of a handful of pocket change, a court case, a car, a limb, but that sort of stuff adds up.
Appeal to consequences?
yes?
Last I checked that was a fallacy...
I mean what about truth of the matter? Accuracy? Is there no difference between possible definitions in how well they carve reality, or how deep an understanding they reflect?
Or is it that anything goes, and we can define it however we please and might as well choose whatever is most beneficial.
Spot the fallacy in:
It’s appeal to consequences, after all. Ooh, or better yet, spot the fallacy in:
Not a fallacy when designing.
Identity is not a feature of the world to be understood. It is a feature of a cognitive system to be designed.
I suppose you could ask empirical questions about what form identity actually takes in the human mind, but Strange’s comment is referring to instrumental usefulness of a design.
Unless you expect some factual, objective truth to arise about how one should define oneself, it seems fair game for defining in the most beneficial way. It’s physics all the way down, so I don’t see a factual reason not to define yourself down to nothing, nor do I see a factual reason to do so.
Why yes, when I ask who I am, I am indeed interested in objective truth, or whatever objective truth of the matter may or may not exist. What the relation actually is, between our sense of self, and the-stuff-out-there-in-reality. I don’t understand why this seems so outlandish.
If identity really were up for grabs like that, then that just seems to me to mean that there really ain’t no such critter in the first place, no natural joint of reality at which it would make most sense to carve. In that case that would be what I’d want to believe, rather than invent some illusion that’s pleasing or supposedly beneficial.
It might be more fruitful to ask instead “How is my sense of self generated, whatever that may be?” and “What work does the self perform? Might there be an evolutionary advantage for an organism to have a self?”
Where is that from? I think I’d like to read it.
In addition to what Wix said, if you’d like a deeper elaboration of his point the book to read is “Freedom Evolves”. (There are very similar passages there—I thought that was the source before seeing Wix’s response). This is the book that really sold compatibilism to me, changing my view of it from “hmm, interesting argument, but isn’t it a bit of a cop-out?” to “wow, free will makes much more sense viewed this way”.
Interesting reaction. I shall admit that even though Eliezer’s free will sequence was intellectually convincing to me, it did not change my alief that free will just isn’t there and isn’t even a useful illusion. So this is going on my reading list.
What? You are clearly anticipating as if you have control over your actions, or you would not have attempted to type that comment.
(assuming you are acting approximately like a decision maker. Only agents need to anticipate as if they have free will)
No, it just happened. You’re underestimating the degree to which people can have different aliefs.
Precisely what I currently think, except with a little more emphasis and more colorful words.
Guess I’ll have to look at that book.
Thanks! It’s being delivered to my Kindle right now.
That particular quote is from Susan Blackmore’s book Conversations on Consciousness: What the Best Minds Think about the Brain, Free Will, and What It Means to Be Human, the book is divided into specific interviews with philosophers, neuroscientists, psychologists. Great read.
Though I think that the point of the quote is something that imbues most of his work.
Thanks!
-- John Fowles, The French Lieutenant’s Woman
-- Terry Pratchett (on Nation)
--Razib Khan, The Erasmus Path in Science
-Roger Bacon
Tales of MU
(I read it for the worldbuilding...)
That is the exact same justification, to the word, that I give for reading it.
It’s a good reason!
Arthur S. Eddington
Cicero, De Natura Deorum
-- Albert Einstein
Any fool can also make a simple theory to describe anything, provided he is willing to sweep disconfirming evidence under the rug.
Slava Akhmechet see also Enso and the rest
Margaret Fuller, intoxicated by Transcendentalism, said, “I accept the universe,” and Thomas Carlyle, told of the remark, supposedly said, “Gad, she’d better.”
This depends on what is meant by “accept the universe”. Does this mean that you’re ready to deal with reality, or that you accept the way the universe currently is and aren’t going to try to make it better?
Given Carlyle’s general attitude towards Fuller, I suspect what he meant was that it’s a good thing for the universe that Fuller accepts it, for otherwise the results might be bad for the universe.
-- Will Wilkinson
Deadpool
Thomas Hardy
Pretty sure most people would pick hallucinations over blindness. Easier to correct for.
Hallucinations are easier to correct for?
Hm.
So, I start out with an input channel whose average throughput rate is T1, and whose reliability is R1.
Case 1, I reduce that throughput to T2.
Case 2, I reduce the reliability to R2.
A lot seems to depend on T2/T1 and R2/R1.
From what I’ve gathered from talking to blind people, I’d estimate that T2/T1 in this case is ~.1. That is, sighted people have approximately an order of magnitude more input available to them than blind people. (This varies based on context, of course, but people have some control over their context in practice.)
Hallucinations vary. If I take as my example the week I was in the ICU after my stroke, I’d estimate that R2/R1 is ~.1. That is, any given input was about ten times more likely to not actually correlate to what another observer would see than it usually is.
Both of these estimates are, of course, pulled out of my ass. I mention them only to get some precision around the hypothetical, not as an assertion about what blindness and hallucination are like in the real world. If you prefer other estimates, that’s fine.
Given those estimates… hm.
Both of them suck.
I think I would probably choose hallucination, in practice.
I think I would probably be better off choosing blindness.
False information is definitely more damaging than non-information, because in the best case scenario you ignore the false information. In less-than-best-case scenarios, you fail to ignore the false information and are actively misled.
Suppose there are 10 boxes, one of which contains cash.
If you could open the boxes and see which one had cash, you’d be in great shape. But if you can’t, you obviously should prefer leaving all the boxes closed (blindness), rather than somehow seeing cash in box #7 even when it isn’t there.
I think the only reason people would be tempted to choose hallucination is that hallucinations in real life are usually relatively mild and often correctible, whereas blindness can be total and intractable with present technology. So given the choice between schizophrenia and blindness, I probably would choose schizophrenia, because schizophrenia is treatable.
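Putting rough numbers on the box example (my own arithmetic, just spelling out the comment above): with no signal you open a box at random and win with probability 1/10; if you trust a hallucinated “cash in box #7” that is guaranteed to be wrong, your chance drops to 0, and a hallucination that is merely uncorrelated with the truth leaves you at the same 1/10 at best.

$$ P(\text{win}\mid\text{no signal}) = \tfrac{1}{10}, \qquad P(\text{win}\mid\text{trust a false signal}) = 0. $$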
One reason I would be tempted to choose hallucination over blindness is that hallucinations feel like knowledge, and blindness feels like lack of knowledge, and I’m more comfortable with the feeling of knowledge than I am with the feeling of the lack of knowledge.
Wrong input > no input? I’m not so sure.
Depends on if you’re hallucinating everything or your vision has at least some bearing in the real world. I mean, I’d rather see spiders crawling on everything than be blind, since I could still see what they were crawling on.
Some things (for instance, eating) would definitely be more enjoyable while blind rather than while hallucinating spiders.
Not 100% sure they would be for me: I’m not arachnophobic at all. (I would be willing to eat a spider for five dollars, if I was sure this couldn’t cause me to get sick.)
If the wrongness is so blatant that it’s easy to tell which parts of the input are likely wrong and disregard them...
A fair point. But then you’re no better off than with not having that part of the input at all.
Well, I would, but “pretty sure most people” sounds like blatant generalizing from one example to me.
Speaking the Truth in times of universal deceit is a revolutionary act. - George Orwell
-Vincent Baker
No. If I want something to exist I’ll offer a reward or plain and simple pay someone to build it.
Perhaps by “it”, he meant money.
Doubtful. Money already exists, but it doesn’t exist in my pocket.
If what you want is difficult to explain, it might be as easy to do it yourself.
I read the quote as “make it (exist)!”, instead of “create it”. But whether that’s what was meant or not, I think that to the basic idea, it doesn’t matter all that much whether you cause it to exist directly or via someone else.
As an addition: when I come up with something cool that I wish existed, my first step is to google around if someone else has ever invented it and sells it. : ) Twice so far the answer has been yes.
Nowadays I actually get annoyed when I think up something that’s an obvious combination of existing components and I can’t immediately find it online. It doesn’t happen very often.
Dean Ing, The Ransom of Black Stealth One
Exactly. Buying things is far more practical, harnessing the power of specialization and comparative advantage. Building the thing yourself is almost always the incorrect decision. Build it yourself if you are good at building that kind of thing and, more importantly, suck at doing other things that provide more (fungible) value.
Or if you enjoy the process of building it. Or if the process of building it will help you relax or something so that you’ll be able to do more things-that-provide-more-value later. Or if you’re trying to impress someone. Or any other of the reasons people have hobbies. (Also, “suck” suggests a much lower threshold than there actually is, especially in times of unemployment and recession. Telling people who have to cook because they can’t afford eating at restaurants twice a day that they “suck” at making money sounds bad to me.)
Those are all reasons to build things. But they’re not what the quoted context is about.
Closely related principle: Purchase Fuzzies and Utilons Separately.
I can’t afford to pay someone to do cognitive science, so I’d better try to do it myself.
-William M. Briggs
Brett Evill
The best similar cultural-relativity-based deduction I’ve read, as introduced by Wikipedia:
Why should the national customs of Britain apply in India? :-)
Because Britain has a national custom saying that they do.
It doesn’t follow, from the fact that passing judgment on someone else’s act of passing judgment on people is itself an act of passing judgment on people, that it is impossible not to pass judgment on people.
I’m also not quite clear on whether “passing judgment on” is denotatively the same or different from “judging.” (I understand the connotative differences.)
All that said, for my own part, I want to be judged. I want to be judged in certain ways and not in others, certainly, and the possibility of being judged in ways I reject can cause me unhappiness, and I might even say “don’t judge me!” as shorthand for “don’t apply the particular decision procedure you’re applying to judgments of me!” or as a non-truth-preserving way of expressing “your judgment of me upsets me!”, but if everyone I knew were to give up having judgments of me at all, or to give up expressing them, that would be a net loss for me.
The statement in the quote does not seem to follow, assuming that you have the choice of simply not saying anything. Passing judgement suggests that you actually have to let someone else know what you think. On the subject of the value of judgement, it is hard to understand why people are so averse to being judged. Whether someone is being kind or malicious by telling you what they honestly think of your actions, it still gives you better information for making future choices.
Is it any harder to understand than why some people experience as a negative stimulus being told they have a fatal illness, or stepping on a scale and discovering they weigh more than they’d like, or being told that there are termites in their walls?
No harder, because it’s the same phenomenon.
But it’s a phenomenon that we as rationalists should resist. If I am dying, or fat, or living with termites, I want to know—after all, there may be something I can do about it.
Absolutely agreed.
You can refrain from passing judgment yourself, but allow others to pass judgement.
For example, rocks are not judgmental.
The ideal libertarian is a rock.
(This is why I am not a libertarian.)
No, the only thing that follows logically is that not being judgemental is something you can’t teach someone else directly without yourself passing judgement.
The zen monk that sits in his monastery can be happy and accepting of everyone who visits him.
Explaining what it means to not pass judgment to someone who has never experienced it is like telling a blind person about the colors of the rainbow. If you talk about something being blue, they don’t know what you are talking about.
If you ask the zen monk to teach you how to be nonjudgmental, he might tell you that he’s got nothing to teach. He tells you that you can sit down if you want. Relax a bit.
After an hour you ask him impatiently: “Why can’t you help me?” He answers: “I have nothing to teach to you.”
Then you wait another two hours. He asks you: “Have you learnt something?” You say: “Yes.” You go home a bit less judgmental than you were at the beginning.
It’s possible that the strategy of only judging those who break the anti-judgment norm is the optimal one. Kind of like how most people only condone violence against those who break the anti-violence norm.
Most people condone violence for a lot more reasons than that.
A good example would be using violence to prevent or punish theft.
Some people solve this by stretching the meaning of “violence” to include theft… but if one follows this path, the word becomes increasingly unrelated to its original meaning.
Generally, it seems like a good heuristic to define a set of “forbidden behavior”, with the exception that some kinds of “forbidden behavior” are allowed as a response to someone else’s “forbidden behavior”. This can help reduce the amount of “forbidden behavior” in society.
The only problem is that the definition of “forbidden behavior” is arbitrary. It reflects the values of some part of society, but some people will disagree and suggest changes to the definition. The proponents of a given definition will then come up with rationalizations for why their definition is correct and the other one is not.
I guess it’s the same with “judgement”. The proponents of non-judgement usually have a set of exceptions: behaviors so bad that it is allowed to judge them. (Being judgemental, that is, judging things not belonging to this set of exceptions, is usually one of those behaviors.) They just don’t want to admit that this set is arbitrary, based on their values.
I was with you until you said the choice of forbidden behaviors was arbitrary.
No, it’s not arbitrary; indeed, it’s remarkably consistent across societies. Societies differ in their approaches to law, but in almost every society, randomly assaulting strangers is not allowed. Societies differ in their ideas about sex, but in almost every society, parents are forbidden from having sex with their children. Societies differ in their systems of property, but in almost every society, it’s forbidden to grab food out of other people’s hands.
There are obviously a lot of biological and cultural reasons for the rules people choose, and rule systems do differ, so we have to decide which to use (is gay sex allowed? is abortion legal? etc.). But they’re clearly not arbitrary; even the most radically different societies agree on a lot of things.
I don’t have enough data about behaviors in different cultures, but I suspect they are rather different. (I wish I had better data, such as a big table with cultures in columns, behaviors in rows, and specific norms in the cells.)
Of course it depends on how many details we specify about the behavior. The more generally we speak, the more similar the results we get. For example, if I ask “is it OK to have sex with anyone anytime, or is it regulated by some rules?”, then yes, it is probably regulated everywhere. More specific questions will show more disagreement, such as “is it OK for a woman to marry a man from a lower social class?” or “is it OK if a king marries his own sister?” or “if someone is dissatisfied with their sexual partner, is it OK to find another one?” (this question may have different answers for men and women).
Also it will depend on the behavior; some behaviors would have obvious disadvantages, such as anyone randomly attacking anyone… though it may be considered OK if a person from a higher class randomly attacks a person from a lower class, or if the attacked person is a member of a different tribe.
I guess there is a lot of mindkilling and disinformation involved in this topic, because if someone is a proponent of a given social norm, it benefits them to claim (truly or falsely) that all societies have the same norm; and if someone is an opponent, it benefits them to claim (truly or falsely) that some other societies have it differently. Even this strategy may be different in different cultures: some cultures may prefer to signal that they have universal values, other cultures may prefer to signal that they are different (read: better) than their neighbors.
I’m sure that’s right.
And my point wasn’t to claim that there is no variation in moral values between societies; that’s obviously untrue.
My main objection was to the word arbitrary; no, they’re not arbitrary, they have causes in our culture and evolutionary history and some of these causes even rise to the level of justifications.
Who says that a society’s moral values don’t have causes? The issue is whether those causes are historically contingent (colloquially, whether history could have happened in a way that different moral positions were adopted in a particular time and place).
Alternatively, can I suggest you taboo the word justification? The way I understand the term, saying moral positions are justified is contradicted by the proliferation of contradictory moral positions throughout time. (But I’m out of the mainstream in this community because I’m a moral anti-realist)
Would you apply the same logic to physical propositions? Would you claim that, for example, saying that astronomical positions are justified is contradicted by the proliferation of contradictory astronomical positions throughout time?
No
--Marie Curie
Well, now people who are in the know can avoid fear by knowing to avoid doing the stuff that she did. It’s mostly the people who believe that radiation is dangerously little understood who find it scary.
Of course, I’d have to say the quote is still incorrect. If I understand that I’m a prisoner of war who’s going to be tortured to make my superiors want to ransom me more, I’m damn well going to be afraid.
But I still find “Now is the time to understand more, so that we may fear less” awfully uplifting.
So the science gets done, and you make a neat quote, for the people who are still alive.
found here
Robert Anton Wilson, from an interview
It depends what kind of maps. Multiple consistent maps are clearly a good thing (like switching from geometry to coordinates and back). Multiple inconsistent ad-hoc maps can be good if you have a way to choose which one to use when.
Wilson doesn’t say which he means; I think he’s guilty of imprecision.
I think he means that people choose not to think about any map but their favorite one (“their way of looking at reality is the only sane way of viewing the world”), to the point where they can’t estimate the conditional probability P(E|¬A) of the evidence given not-A.
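For concreteness, here is a toy Bayes-rule sketch of why neglecting P(E|¬A) matters; the numbers are invented purely for illustration:

    def posterior(p_a, p_e_given_a, p_e_given_not_a):
        """P(A|E) by Bayes' rule."""
        p_e = p_e_given_a * p_a + p_e_given_not_a * (1 - p_a)
        return p_e_given_a * p_a / p_e

    p_a = 0.5          # prior on the favorite map A
    p_e_given_a = 0.8  # the favorite map predicts the evidence well

    # If rival maps predict E almost as well, E barely moves us:
    print(posterior(p_a, p_e_given_a, 0.7))   # ~0.53
    # If rival maps predict E poorly, the same E is strong evidence:
    print(posterior(p_a, p_e_given_a, 0.1))   # ~0.89

The same observation can be nearly worthless or nearly decisive depending on a term the single-map thinker never estimates.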
The link with Aristotle seems weak. But the problem obviously makes it harder to use “the logic of probability,” as Korzybski called it, and Wilson well knew that Korzybski contrasted probability with classical “Aristotelian” logic. (Note that K wrote before the Bayesian school of thought really took off, so we should expect some imprecision and even wrong turns from him.)
Or you could always just average your inconsistent maps together, or choose the median value. Should work better than choosing a map at random.
Or accept that each map is relevant to a different area, and don’t try to apply a map to a part of the territory that it wasn’t designed for.
And if you frequently need to use areas of the territory which are covered by no maps or where several maps give contradictory results, get better maps.
Basically, keep around a meta-map that keeps track of which maps are good models of which parts of the territory.
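A minimal sketch of what such a meta-map might look like in code; the domains and models here are hypothetical placeholders, not anyone’s actual proposal:

    # Each map declares which part of the territory it was built for,
    # and the dispatcher refuses to stretch a map beyond that.

    def newtonian_mechanics(question):
        return f"Newtonian answer to {question!r}"

    def english_grammar(question):
        return f"Linguistic answer to {question!r}"

    META_MAP = {
        "planetary motion": newtonian_mechanics,
        "English grammar": english_grammar,
    }

    def consult(domain, question):
        model = META_MAP.get(domain)
        if model is None:
            return "No map covers this area well; time to get a better map."
        return model(question)

    print(consult("planetary motion", "Where will Mars be in June?"))
    print(consult("consciousness", "What is it like to be a bat?"))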
Yeah, that should work.
This seems unfair. I have a map; it represents what I think the universe is like. Certainly it is not perfect, but if I thought a different one was better I would adopt it. There is a distinction between “this is correct” and “I don’t know how to pick something more correct”.
I agree with Wilson’s conclusions, though the quote is too short to tell if I reached this conclusion in the same way as he did.
Using several maps at once teaches you that your map can be wrong, and how to compare maps and find the best one. The more you use a map, the more you become attached to it, and the less inclined you are to experiment with other maps, or even to question whether your map is correct. This is all fine if your map is perfectly accurate, but in our flawed reality there is no such thing. And while there are no maps which state “This map is incorrect in all circumstances”, there are many which state “This map is correct in all circumstances”; you risk the Happy Death Spiral if you use one of the latter. (I should hope most of your maps state “This map is probably correct in these specific areas, and it may make predictions in other areas but those are less likely to be correct”.) Having several contradictory maps can be useful; it teaches you that no map is perfect.
“Most people have a wrong map, therefore we should use multiple maps” doesn’t follow. Reversed stupidity isn’t intelligence, and in this case Aristotle appears to have been right all along.
If I’m out charting the oceans, I’d probably need to use multiple maps because the curvature of the Earth makes it difficult to accurately project it onto a single 2D surface, but I do that purely for the convenience of not having to navigate with a spherical map. I don’t mistake my hodge-podge of inaccurate 2D maps for the reality of the 3D globe.
No, but your “hodge-podge of inaccurate 2D maps”, while still imperfect, is more accurate than relying on a single 2-D map—which is the point I took from the original quote.
Note that Google Maps can be described as “a hodge-podge of different maps”; a satellite map and a street map (and sometimes a 3D map if you use Google Earth), and using that hodge-podge is indeed more convenient than using one representation that tries to combine them all.
I know that you didn’t mean hodge-podge in the same sense (you were talking of 3D-> 2D), but I think that Google Maps is a good illustration of how having different views of the same reality is useful.
Isn’t “convenience” also the reason not to use the territory itself as a map in the first place? You know, knowing quantum field theory and general relativity isn’t going to give you many insights about (say) English grammar or evolutionary psychology.
If you’re favoring hedgehogs over foxes, you’re disagreeing with luminaries like Robin Hanson and billionaire investors like Charlie Munger. There is, in fact, far more than one globe—the one my parents had marked out the USSR, whereas ones sold today do not; and on the territory itself you won’t see those lines and colorings at all.
A recent quotes post here had something along the lines of “the only perfect map is a 1-to-1 correspondence with everything in the territory, and it’s perfectly useless.”
“Rich people plan for three generations. Poor people plan for Saturday night.” —Gloria Steinem
The rest of her quotes are pretty good, too.
Here’s what I don’t like about that quote: It doesn’t tell me which way the causation goes (or if it’s feedback, or a lurking variable, or a coincidence). Does being rich make you plan better? Or does planning better make you rich?
More like the reverse (but this is more because poor people cannot plan for Saturday night if they don’t want to starve on Sunday).
Based on a book I just read called Poor Economics by Abhijit Vinayak Banerjee and Esther Duflo, it is true that extremely poor people are much, much less able to make and follow long-term plans than rich people. They suggest it has to do with various facets of a very poor person’s life (for example, the difficulty of getting loans or even opening a savings account) and also with the “willpower depletion” aspect, because the everyday lives of the poor include so many small decisions that are made automatically by the societies that rich people live in. Also, their research established that poor people, even those poor enough that they can’t afford enough food to eat, still spend money on short-term luxuries, like sugary tea.
Good book to read. I would recommend it.
I absolutely love Poor Economics.
Seconding that and upvoting since I can’t see why this should be negative.
Because politics is the mindkiller, unless it’s libertarian politics in which case it’s just normal.
Ahh, hypocrisy and double standards then :(
I figured it was because it was a surprising and more-or-less unsupported statement of fact (that turned out to be, according to the only authority anyone cited, false). When I read ‘poor people are better long-term planners than rich people due to necessity’ I kind of expect the writer to back it up. I would have considered downvoting if it wasn’t already downvoted, and my preferences are much closer to socialist than libertarian.
I don’t have an explanation for the parent getting upvoted beyond a ‘planning is important’ moral and some ideological wiggle room for being a quote, so I guess it could still be hypocrisy. Of course, as of the 2011 survey LW is 32% libertarian (compared to 26% socialist and 34% liberal), so if there is ideological bias it’s of the ‘vocal minority’ kind.
-Aaron Haspel
There’s no context in the source, so: WTF?
He is using “mind” in a broader sense than people usually do with the phrase “change your mind”.
A reasonable interpretation could be “changing one of your beliefs doesn’t automatically change your other related beliefs, your aliefs, your habits and your behavioral triggers”. But “changing your mind” could also mean “changing anything about your mind, such as a personality trait or even a mood”.
For instance, becoming intellectually convinced that sexual jealousy is a bad idea does not purge you of experiencing any.
Ah. So not only is he using “mind” unusually, he’s also using “opinion” unusually. And “change” idiomatically.
Well then, it’s trivial!
Another example: Learning that an opinion of yours was wrong does not destroy all the broken cognitive processes that generated the wrong opinion in the first place.
I think people are seriously underestimating the value of this quote, but then again of course I do; I’m the one who posted it.
--Heijtz et al.
What’s the significance of this?
Intestinal bacteria have an effect on the nervous system: they affect how we think and how we feel and how our mind develops. This is pretty recent science written by scientists about the function of our mind (or murine minds, at least). That makes it an interesting rationality quote, in my opinion.
It’s interesting, all right, but I think it would likely be better received as a standalone Discussion post (ideally with some more context and expansion). The rationality quotes threads tend to be more for quotes directly about rationality or bias than quotes indirectly contributing to our potential understanding of the same.
I think it could make a pretty interesting Discussion post, and would pair well with some discussion of how becoming a cyborg supposedly makes you less empathic.
Serious question: is the cyborg part a joke? I can’t tell around here.
Fair question! I phrased it a little flippantly, but it was a sincere sentiment—I’ve heard somewhere or other that receiving a prosthetic limb results in a decrease in empathy, something to do with becoming detached from the physical world, and this ties in intriguingly with the scifi trope about cyborging being dehumanizing.
Really? If true, then that is fascinating… Can you link to any of the recent research, though?
EDIT: by popular demand. I’ll be moving this to a discussion instead.
EDIT: the discussion thread is here
As in the attribution, I’m quoting from: Heijtz et al., “Normal gut microbiota modulates brain development and behavior”, 2011.
Here is a review paper.
See also the current special section of science magazine, or google scholar.
Here’s the abstract from The Relationship Between Intestinal Microbiota and the Central Nervous System in Normal Gastrointestinal Function and Disease:
Here are results from an RCT on humans with chronic fatigue syndrome
Duplicate of this. (Well, close enough that the monicker should apply.)
On lost purposes:
-- Tanya Khovanova
George Carlin
-Philippe Petit, on the idea of tightrope walking between the World Trade Center towers.
It’s not impossible for sure now. If he thought it was impossible when they were actually in existence then he doesn’t remotely understand the word. That is beyond even a “Shut up and do the impossible!” misuse.
I don’t understand. Are you saying it wasn’t impossible enough?
He actually did it in 1974. It took nearly six years of planning. In order to practice for the walk between the World Trade Center towers, he first did tightrope walks between the towers of Notre Dame and then on the Sydney Harbour Bridge. All of these were of course illegal. In the WTC case, he had to sneak in, tie the ropes between the towers without anyone knowing, and walk between the towers without any harness for nearly 45 minutes at that height, with the wind and everything. For the complete details, watch the documentary ‘Man on Wire’. I think it was as impossible as it got in his line of work.
How on earth could you not understand? If this is sincere incomprehension then all I can do is point to google: define.
Yes. This quote is an example of nothing more than how to be confused about words and speak hyperbole for the sake of bravado.
If you have to ask whether something is “impossible enough” you have already answered your question.
Have you seen Google’s definitions yourself? Because 2. does seem to match what Stabilizer means.
Your sentence wasn’t clear enough.
About your gripe with use of the word impossible: it’s a quote. Most of the quotes are like applause-lights. Everybody who read that quote understood the intent and meaning. Philippe Petit didn’t employ the literal meaning of impossible. But the literal meaning of ‘impossible’ is rarely used in colloquial contexts. Even in ‘Shut up and do the impossible’, the absolute literal meaning is not employed. Because if the literal meaning is used, then by definition you can’t do it, ever. So the only thing left is the degree of impossibility. You say that the task was too doable to be considered ‘impossible’ under your standards. Fine. Just mentally replace ‘impossible’ in that sentence with ‘really goddamn hard that no one’s done before and everyone would call me crazy if I told them I’m going to do it’ and you’d read it the way most people would read it. The spirit of the quote would still survive.
Yes, it’s an applause light. It isn’t one that made me applaud. It isn’t a rationalist quote. It doesn’t belong here.
No. I instead choose to mentally replace the quote entirely with a better one and oppose this one. Even Nike’s “Just Do It” is strictly superior as a rationalist quote, despite being somewhat lacking in actionable detail.
I am forced to disagree; a quote about conquering the (colloquially) impossible with sufficient thought and planning is very appropriate for this site.
Eben Moglen, on how to change the world
I don’t think Moglen always knew exactly what he was doing.
And I’ve never heard of him, so perhaps he didn’t change the world either.
A lot more people have heard of Michael Jordan than have heard of Norman Borlaug. Yet Borlaug is one of the few humans on the planet who can be personally credited with saving millions of lives. Who one has heard of is not likely to be highly correlated with what impact people have had.
(I did perform a quick Google check after writing the comment and before posting it, just to make sure.)
Somewhat ironically, I actually have heard of Moglen for what he’s really famous for, but I thought the quote was from Elon Musk (for whom, it should be said, the quote would be much truer—so far). I was surprised you hadn’t heard of him, so I checked Wikipedia and then realized my mistake.
And sadly, more people know who Snooki is than know who Jonas Salk was.
I wouldn’t be surprised if more people had heard of Jonas Salk, especially outside the US (although I reckon JoshuaZ’s right about Michael Jordan & Norman Borlaug).
I had never heard of either, but after googling both I suspect that there are more people in the US who have heard of Snooki than people who have heard of Jonas Salk worldwide.
Snooki’s pretty well-known in the US, but Jonas Salk’s got staying power. Salk was a big American celebrity in his own right and is probably better known than Snooki among the middle-aged and certainly the old in the US & UK. As most people in the US & UK are at least 35-40 that might be enough to make Salk better known overall in those two countries.
Snooki does get more hits & searches on Google, but Salk has been a name for far longer and even holds his own against some rock stars in mentions in books.
Salk & Snooki are presumably less famous in non-Anglophone countries, and Salk must be worse off in that respect (reality TV antics better overcome language barriers), but he still has his half-century headstart, and the global effort to beat polio must’ve raised Salk’s profile in quite a few countries.
One of the defence team of Phil Zimmermann in the PGP case. General counsel of the Free Software Foundation and founder of the Software Freedom Law Center. Mostly responsible for the changes between version 2 and version 3 of the GNU General Public License.
I’m not sure any of that counts as changing the world, but it does seem like he’s had some impact.
In the context of the youtube link where the quote is from, he is saying what he learned from working under Thurgood Marshall—a man who probably did change the world.
Furthermore, what he is saying seems trivially true; the thing you need to know to change the world is how to get the change that you want. Knowing which things you need to know doesn’t imply that you know those things!
http://www.youtube.com/watch?feature=player_embedded&v=G2VHf5vpBy8#!
Moglen on what the world needs—in particular, for young people to have full access to computer hardware and software so that they can innovate, and privacy so that people can reboot their lives. I’m not sure whether this is giddy idealism or reasonable and important.
What does it mean, “reboot their lives”?
Start over with a new identity.
I assume this message is intended as some sort of irony? (Just because the message as a straight statement seems wrong and not in keeping with what your world-saving attitudes seem to be.)
When it comes to big things, I don’t think you often know beforehand exactly how to get them. As you progress you learn more, and it often makes sense to change course. A lot of startups have to pivot to find their way to changing the world.
I love truth. It’s such a wonderful thing. It makes you sane, helps you make better, more effective decisions and it irks all the right people. -Aaron Clarey aka “Captain Capitalism”
Advice Dog
I’m curious as to the algorithm that flagged this as a rationality quote.
Well, mostly it was just me having… uhm… an episode, but the idea of getting something you want by giving away something you’d like to get rid of—and being on the lookout for an opportunity to do so—is indeed quite rational. It’s just that there are few such excellent opportunities in daily life, and the “getting rid” part often has delayed costs that come back to you later. Like the delivery guy calling the police.
“I must die. But must I die bawling?”
Epictetus
Jack Parsons
I think this quote is like a paraphrase of, “a sense that something more is possible.” Imagine if someone invented a drug that gave chimpanzees the highest human levels of rationality at random intervals, for a total of about a half hour per day. They’d be pretty much like humans, only physically stronger.
EDIT: Downvoted? My comment is negative about humans, but it’s hopeful. Human nature is pretty squalid, but there is plenty of opportunity for improvement. (Imagine if we could get the median human to the point where they’re operating with clarity twice as much as they are now. Or for that matter, imagine myself.)
Judge Learned Hand
I like Judge Learned Hand, but I think this particular quote is just Deep Wisdom. Living in a pleasant society requires both good laws and good people. There is very little substantive content in LH’s oratory. He could just as easily have made the opposite point:
I like the quote, but I don’t see how it relates to rationality.
There are people in the real world who think that having a good enough decision-making process for making moral decisions (like deciding the right result in litigation) ensures a morally upright decision.
Up to this point, decision-making procedures have always been implemented by humans, so the quality of the decision-making process is not enough to ensure that a morally upright decision will be made. The better guarantee of morally upright decision-making is morally upright decision-makers.
Eric Barker
How does this account for the use of humor in mocking outgroup members?
It doesn’t.
When you choose technology, you have to ignore what other people are doing, and consider only what will work the best. -Paul Graham
Unless your technology will be required to interact with the technology other people are using, which is most of the time. “What will work best” often depends heavily on “what other people are doing”.
No, at that point you still only consider what will work the best. It’s a nitpick, but “what will work the best when others do this” is a different question to “what are the other people doing”.
Absolutely. What I mean is that they are incompatible. In the common case, it’s impossible to simultaneously “consider what will work best” and “ignore what other people are doing”. Figuring out what will work best requires paying attention to what other people are doing.
I find myself doing the latter via reference to the former.
One of the things that other people do is to build standard parts. If one has an unlimited budget, one can ignore them, and build everything in a project from optimized custom parts. This is rare.
Laurell K. Hamilton
A quote I find useful when considering both rationalizing, and the differences of relative perspective.
Huh. Their victims decide, rather than everyone they affect deciding?
I don’t think I agree.
I can’t see how it could be anything but both the victims and everyone else they affect deciding. That doesn’t mean they’ll all come to the same conclusion, of course.
I’m pretty sure that’s where politics comes from, personally...
Edited to add: I do not mean to imply that if one group decides X, another Y, and a third Z, that it necessarily means that any of them are wrong.
-Charles Kingsley
Explain?
It paraphrases the bottom line of the metaethics sequence—or what I took to be the bottom line of those posts, anyway. Namely, that one can have values and a naturalistic worldview at the same time.
So, having values is moral theism? The choice of words seems suspect.
I’d say “moral atheism” is being used as an idiomatic expression; a set of more than one word with a meaning that is more than the sum of its individual components. One of the synonyms for “atheism” is “godlessness”, so by analogy “moral atheism” would just mean “morality-lessness”.
We have a word for “morality-lessness”, and it is amorality, which coincidentally works more naturally in your analogy: If morality is analogous to theism, then a-morality is analogous to a-theism.
I hope you understand my trouble with the use of an idiom that implicitly equates morality with theism. (Well, amorality with atheism, which is more the problem.)
(sorry about all the edits, this was written horribly.)
William S. Burroughs
My immediate reaction was “No, my knowledge of what is going on starts out superficial and relative, but it sure doesn’t stay that way”. (I object to the “only”).
-Eric Hoffer
That may be Deep Wisdom but it’s surface nonsense. Propaganda contains many untruths that people end up honestly believing in. The quote effectively says “propaganda is useless if only one is brave enough to believe what they know (how?) is really true”. This is simply wrong.
It definitely isn’t nonsense, because I know it is literally false.
Agreed.
I think the idea is that propaganda provides an easy answer, but doesn’t really prevent anyone from doing research to find the harder answer. A more detailed example here.
Except people don’t have the time to research every statement they hear.
But they also often accept statements they should doubt based on the information they already have. Motivated thinking is there; it just needs an official voice that reassures them that they will be in the majority even if they are actually wrong.
As mentioned in this post, I think you’re underestimating how many of our ideas come from the group.
Of course not every statement!
Assuming widespread literacy and other educational prerequisites for industrialization, two or three hours per citizen per month poking at the justifications behind the reigning political party’s most central claims, including (but certainly not limited to) seeking out and asking reasonable questions of those who already disagree with such claims, would be enough to utterly shred most historical propaganda efforts by sheer weight of numbers. If even half the people who attended one of Hitler’s rallies thought afterwards “Those were some pretty strong claims; I should go find some Jewish spokesperson to hear the other side of the story” and then made a reasonable effort to do so, do you think things would have gone the same way?
Upvoted for the link to that story.
--Heinlein, speaking in the voice of The Man from Mars
Dwight Eisenhower
Abraham Lincoln
(Okay, seriously, this has also variously been attributed to Mark Twain and Cicero, so if you’re going to credit Eisenhower, maybe do so with a specific source)
Should acronyms count as quotes? If so then:
KISS—Keep it simple, stupid!
Versions I prefer, especially for writing:
Keep It Simple and Succinct, or Keep It Simple and Salient,
the latter one doesn’t work as well, since you almost always have to explain that in the context “salient” means “to the point”.
The latter one doesn’t work at all, since it sounds rather like you’re ignoring the very advice you’re trying to give.