Yes, a blog.
When I recommend LessWrong to people, their gut reaction is usually “What? You think the best existing philosophical treatise on rationality is a blog?”
Well, yes, at the moment I do.
“But why is it not an ancient philosophical manuscript written by a single Very Special Person with no access to the massive knowledge the human race has accumulated over the last 100 years?”
Besides the obvious? Three reasons: idea selection, critical mass, and helpful standards for collaboration and debate.
Idea selection.
Ancient people came up with some amazing ideas, like how to make fire, tools, and languages. Those ideas have stuck around, and become integrated in our daily lives to the point where they barely seem like knowledge anymore. The great thing is that we don’t have to read ancient cave writings to be reminded that fire can keep us warm; we simply haven’t forgotten. That’s why more people agree that fire can heat your home than on how the universe began.
Classical philosophers like Hume came up with some great ideas, too, especially considering that they had no access to modern scientific knowledge. But you don’t have to spend thousands of hours reading through their flawed or now-uninteresting writings to find their few truly inspiring ideas, because their best ideas have become modern scientific knowledge. You don’t need to read Hume to know about empiricism, because we simply haven’t forgotten it… that’s what science is now. You don’t have to read Kant to think abstractly about Time; thinking about “timelines” is practically built into our language nowadays.
See, society works like a great sieve that remembers good ideas, and forgets some of the bad ones. Plenty of bad ideas stick around because they’re viral (self-propagating for reasons other than helpfulness/verifiability), so you can’t always trust an idea just because it’s old. But that’s how any sieve works: it narrows your search. It keeps the stuff you want, and throws away some of the bad stuff so you don’t have to look at it.
LessWrong itself is an update patch for philosophy to fix compatibility issues with science and render it more useful. That it would exist now rather than much earlier is no coincidence: right now, it’s the gold at the bottom of the pan, because it’s taking the idea filtering process to a whole new level. Here’s a rough timeline of how LessWrong happened:
Critical mass.
To get off the ground, a critical mass of very good ideas was needed: the LessWrong Sequences. Eliezer Yudkowsky spent several years posting a lot of extremely sane writing on OvercomingBias.com, and then founded LessWrong.com, attracting the attention of other people who were annoyed at the lower density of good ideas in older literature.
Part of what made the Sequences successful is that they are written in a widely learned, widely applicable language: the language of basic science and mathematics. A lot of the serious effort in classical philosophy was spent trying to develop precise and appropriate terminology in which to communicate, and so joining the conversation always required a serious exclusive study of the accumulated lingo and concepts. But nowadays we can study rationality by transfer of learning from tried-and-true technical disciplines like probability theory, computer science, biology, and even physics. So the Sequences were written.
Then, using an explicit upvote system, LessWrong and its readers began accelerating the historically slow process of idea selection: if you wanted to be sure to see something inspiring, you just had to click “TOP” to see a list of top voted posts.1
Collaboration and debate.
Finally, with a firm foundation taking hold, there is now a context, a language, and a community that will understand your good ideas. Reading LessWrong makes it vastly easier to collaborate effectively on resolving abstract practical issues2. And if you disagree with LessWrong, reading LessWrong will help you communicate your disagreement better. There was a time when you couldn’t have a productive abstract conversation with someone unless you spent a few days establishing a context with that person; now you have LessWrong sequences to do that for you.
The sequences also refer to plenty of historical mistakes made by old-school philosophers, so you don’t necessarily have to spend thousands of hours reading very old books to learn what not to do. This leaves you with more time to develop basic or advanced skills in math and science3, which, aside from the obvious career benefits, gets you closer to understanding subjects like cognitive and neuropsychology, probability and statistics, information and coding theory, formal logic, complexity theory, decision theory, quantum physics, relativity… Any philosophical discussion predating these subjects is simply out of the loop. A lot of their mistakes aren’t even about the things we need to be analysing now.
So yes, if you want good ideas about rationality, and particularly its applications to understanding the nature of reality and life, you can restrict a lot of your attention to what people are talking about right now, and you’ll be at a comparatively low risk of missing out on something important. Of course, you have to use your judgement to finish the search. Luckily, LessWrong tries to teach that, too. It’s really a very good deal. Plus, if you upvote your favorite posts, you start contributing right away by helping the idea selection process.
Don’t forget: Wikipedia happened. It didn’t sell out. It didn’t fall to vandals. Encyclopedic knowledge is now free, accessible, collaborative, and even addictive. Now, LessWrong is happening to rationality.
1 In my experience, the Top Posts section works like an anti-sieve: pretty much everything on there is clever, but in any one reader’s opinion there is probably a lot of great material that didn’t make it to the top.
2 I sometimes describe the LessWrong dialogue as about “abstract practicality”, because to most people the word “philosophy” communicates a sense of explicit uselessness, which LessWrong defies. The discussions here are all aimed at resolving real-life decisions of some kind or another, be it whether to start meditating or whether to freeze yourself when you die.
3 I compiled this abridged list of sequence posts for people who already have a strong background in math and science, to accommodate a faster exposure to the LessWrong “introductory” material.
4 This post is about how LessWrong happened as a blog. For recent general discussion of LessWrong’s good and bad effects, consider When you need Less Wrong and Self-Improvement or Shiny Distraction?
And this is precisely why I haven’t lost all hope for the future. (That, and we’ve got some really bright people working furiously on reducing x-risk.) On rare occasions, humanity impresses me. I could write sonnets about Wikipedia. And I hate when so-called educators try to imply Wikipedia is low status or somehow making us dumber. It’s the kind of conclusion that the Gatekeepers of Knowledge wish was accurate. How can you possibly get access to that kind of information without paying your dues? It’s just immoral.
I pose this question: if you had to pick just one essay to introduce someone to LW, which one would you pick, and why? I’d like to spread access to the information in the sequences so that it can benefit others as it did me, but I’m at a loss as to where specifically to start. Just tossing a link to the list of sequences is… overwhelming, to say the least. And I’ve been perusing them for so long that I can’t remember what it’s like to read with fresh eyes; the essays that have the most impact on me now were incomprehensible to me a year ago, I think.
For me I think it’s No One Can Exempt You From Rationality’s Laws.
I seem to be alone in this, but I’d say Truly Part of You is far and away the best one-article summary of the site. Unfortunately, it’s not listed as part of the sequences. For me, though, it’s the one that gave me the “click” and made me appreciate rationality on a gut level.
This seems to demonstrate that ‘sequences’ represents an element of lost purpose. The point of having the sequences compilation and the link to it is not so much to collect posts on a topic that is covered in multiple parts but to compile all the fundamental high quality posts, particularly the early OB ones by Eliezer. Or if not the original purpose of the wiki page then certainly the role that it now takes is not just to collect things that are multi-part.
If you edited that wiki page, perhaps adding an extra category for standalone posts then I would be surprised (and probably disgusted!) if anyone strongly objected. That post belongs there. Particularly since it is part of what was one big sequence. After all in the past the collation has been in the form of a graph based on ‘follow up’ links. And that post has two of them!
I wonder if Outside The Laboratory is a good choice?
How much does that happen? It is my understanding that educators don’t mind students using Wikipedia to gather information, as long as they use Wikipedia’s references to validate that information, and then cite those references. That is, Wikipedia is a valid tool for finding sources and summaries of those sources, but it is not a source itself.
Three out of sixteen teachers I can think of that mentioned Wikipedia recommended using its references, the other thirteen forbade its use and condemned it as inaccurate. They’re usually alright with other encyclopedias, just not the one that clearly cites and links to its sources.
It is hard to admit that finding out most factual information is an outright trivial task these days, and that most of what they had initially believed to be critical for rigorous research at the high-school level is now strictly inferior to reading Wikipedia.
If they can’t stop students from using Wikipedia, pretty soon schools will be reduced from teaching how to gather facts, to teaching how to think!
I used to TA a class whose covert purpose was teaching students how to think. The class encouraged everyone to use resources like Wikipedia whenever they didn’t know something, so that it could focus on things more interesting than merely gathering information. That class tried to get everyone to think about things, to use their existing knowledge to solve types of problems they’d never seen before, and to learn in a way that went way beyond memorizing facts and regurgitating them on the test. If the class covered probability, it would make students analyze card games or the lottery. If it reviewed trigonometry, students would have to derive some identities. In the labs, they had to write computer programs. And so on.
Many (most?) of the students were actively pissed off by this. Why were their questions to the professor answered with helpful links to Wikipedia or someone’s lecture slides, or a web page? Why did the class refuse to tell them exactly what they’d need to commit to memory to get a good grade on the tests? It went against everything they’d come to expect from “education”. And the computer programming was especially maddening; they couldn’t just pattern-match their way through it without thinking.
It was a required class for all freshmen in electrical engineering, and a lot of the graduating seniors said it had been one of the most valuable classes they’d taken. Not because of the material it covered, but because it had shaken them out of the bad habits they’d been given in high school “to prepare them for college.” It was an uncomfortable process for them at the time, though.
I think a class like this in isolation is bound to be off-pissing, no matter how useful it is. University courses have the extra problem of forcing you to be interested in a specific topic at a specific time. Students learn to grind through traditional courses even if they don’t feel particularly interested in the topic at the time of taking the course. That course sounds like tossing undergrads into something like the environment grad students are in for the duration, and grad school has a reputation for causing massive procrastination. Free-form problems need more spontaneous enthusiasm to come up with good approaches to, and bringing that up for a semi-arbitrary topic on command is harder than having it for a topic you are already interested in.
It’d probably still be learnable, given a whole curriculum of courses like this instead of just the one.
That’s fantastic. What school was this?
But, but, then I’ll lose a good part of my competitive advantage!
I’m curious, have you used Wikipedia for non-scientific/technical stuff? It can be quite a biased source there…
Reading the discussion pages there can help with this problem.
This is what kind of rubs me the wrong way about the above “idea selection” point. Is the implication here that the only utility of working through Hume or Kant’s original text is to cull the “correct” facts from the chaff? Seems like working through the text could be good for other reasons.
Really? My teachers tend to dislike encyclopaedias in general, not just Wikipedia.
The difference is really between using it and citing it. Its a nice first start, but not a good source to quote from.
And yet, citing from Britannica is okay—and Britannica doesn’t cite its sources IIRC. And a head-to-head comparison found Wikipedia to be more accurate. (Citation needed.)
When I was in high school, citing from Britannica was not acceptable!
Wow. What was left? “It doesn’t count unless it is on parchment!”?
I think the reasoning was that an encyclopedia is a good starting point, but isn’t a real source, because it’s brief and compressed. But really I’m not sure why, in fact. Why couldn’t you cite the encyclopedia for simple, verifiable historical facts? It’s not as if Britannica is going to be less accurate than a “real book” with an author. I remember some kid asking about it, the teacher saying scornfully, “Well, encyclopedias aren’t a real source,” and then I decided “encyclopedia = BAD” and thought no more about it.
If I recall my MLA guide correctly from years ago, you don’t need to cite anything for common knowledge, “John Adams was the second president of the United States” being an example of common knowledge. If you needed to cite, you should cite primary sources like newspapers, journal articles, or biographies; not secondary sources like textbooks or encyclopedias.
Your high school was extremely atypical, was it not?
maybe.
The anti-Wikipedia bias has shifted from being a pretentious hold-over of the “I spent 8 years learning the names of the relevant sources in my field” attitude to an outright cognitive bias held by the uneducated: “Where’d you get that fact—Wikipedia? In that case, I’m allowed to ignore your argument. I get my facts from talk radio.”
That sounds like a perfect example of how knowing about biases can hurt people. It’s similar to something I often see in religious arguments: someone who wants to rationalize away an argument will often come up with a really flimsy counter-argument, overlook its flaws, and stop thinking about the issue immediately. It’s a particularly pathological case of being more critical of opposing views than ones you agree with.
Well, the ones who rail against it are the ones who get most of the press time… “My school encourages using appropriate references to the extent that such use is appropriate for its purpose” doesn’t attract much attention.
I give talks and workshops about Wikipedia. It is shocking how many people think they know how Wikipedia works and how to use it who really have no idea. The educators who forbid it aren’t thinking of it as a jumping-off point for further research, and they don’t actually know how the content is produced and maintained, or at least have never thought about the implications of their beliefs.
(My very favorite clever phrase describing my feelings comes from the name of a Facebook group: “Abolish abstinence-only Wikipedia education.”)
The educators I’ve spoken to tend to dislike encyclopaedias in general. It’s nothing against Wikipedia.
Interesting question.
I’ve forgotten which page convinced me to start reading this site in earnest. It might have been “Generalization from Fictional Evidence”, which is excellent, but that served a rather specific purpose for me and I’m not sure it’d do the same for others.
Looking over some of the sequences now, I think “Positive Bias: Look Into The Dark” might have the right balance of accessible and mind-blowing to hook a layman of no more than average mathematical sophistication. The 2-4-6 task is one of the more elegant ways of demonstrating both bias and possible countermeasures that I’ve encountered here.
Maybe I’m just being habitually contrarian here for no good reason, but it seems to me that for a supposedly “rationalist” community, people here seem to be far too willing to accept claims of LessWrong exceptionality based on shockingly weak evidence. Group-serving bias is possibly the most basic of all human biases, and we cannot even overcome that little?
Claiming that your group is the best in the world, or among the best, is something nearly every single group in history did, and all had some anecdotal “evidence” for it. Priors are very strongly against this claim, even after including these anecdotes.
Yet, in spite of these priors, the group you consider yourself member of is somehow the true best group ever? Really? Where’s hard evidence for this? I’m tempted to point to Eliezer outright making things up on costs of cryonics multiple times, and ignoring corrections from me and others, in case halo effect prevents you from seeing that he’s not really extraordinarily less wrong.
You made up this ‘true best group ever’ idea yourself. “Best at a highly specific activity that is the primary focus of this group” is an entirely different claim.
Eliezer doesn’t have all that much of a halo. People disagree with him and criticise him incessantly. Sometimes deserved, sometimes not. Most times I have seen Eliezer accused of having a halo effect have been when Eliezer disagrees with them on a particular subject and it happens to be the case that the majority of others here do too. Acknowledging that those who disagree with you may be doing so independently based on their own intellectual backgrounds is not nearly so psychologically rewarding as dismissing them as blind followers.
Citation needed. Please do. I pay those costs out of pocket, they can be verified with my insurance agent if need be, and I should very much like to know what on Earth you think you are talking about.
I assume Taw is referring to this.
Eliezer reported what he (Eliezer) actually currently pays per year for term life insurance ($180) and his membership with the Cryonics Institute ($120). This is relevant for youngish people worried about the effect of cryonics on their near-term cash flow. Since he is buying term life insurance, when he renews it (probably after 20 years) he will have to pay higher premiums or have accumulated savings for the cost. The Cryonics Institute is also the cheapest service.
Taw said that this distracts from the total net present value of the stream of premium and membership costs, which has to be close to the net present value of just saving up to pay for the cryonics out of pocket (~$50,000 for CI in a distribution centered decades into the future) plus membership fees. Someone thinking about the tradeoff between cryonics and bequesting wealth to their kids or to charity would worry more about this number. Taw then says that Eliezer is “lying” for giving his current costs rather than this number.
However, that NPV is not the nominal amount of a payout decades into the future. A youngish person can get whole life insurance (where premiums do not increase with age). 24 year old User:AngryParsley pays $768 per year for a $200,000 payout life insurance policy. Over 50 years he will pay $38,400 in premiums, which will be invested by the insurance company (which expects to profit by winding up with more than $200,000 by the time of payout, on average).
There is an additional complicating factor when talking about cases decades into the future that doesn’t arise in Eliezer’s situation (youngish person wanting protection for the next few decades, with expectation of accumulating wealth over time), namely inflation in cryonics costs, but a policy such as AngryParsley’s leaves plenty of margin for that.
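The comparison being argued over here—the current annual premium versus the net present value of the whole premium stream—is easy to make concrete. A rough sketch follows; the $768/year and 50-year figures come from the comment above, but the 3% discount rate is my own illustrative assumption, not a number from the thread or from any insurer.

```python
# Rough sketch: present value of an annual premium stream, for comparing
# "pay premiums each year" against "save up a lump sum out of pocket".
# The 3% discount rate is an illustrative assumption.

def npv_of_premiums(annual_premium, years, discount_rate):
    """Present value of paying `annual_premium` at the end of each year."""
    return sum(annual_premium / (1 + discount_rate) ** t
               for t in range(1, years + 1))

total_nominal = 768 * 50  # $38,400 paid in total over the full term
pv_at_3pct = npv_of_premiums(768, 50, 0.03)

print(total_nominal)   # 38400
print(round(pv_at_3pct))
```

The point the sketch illustrates is the one Carl makes: the headline payout decades away and the present cost of the premium stream are very different numbers, and which one matters depends on whether you care about near-term cash flow or total bequeathable wealth.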
What? Why expect this to happen? Wouldn’t cryonics groups plan for this? They do explicitly say how much money is required to be set aside via insurance for people to join, while that could change, why expect them to renege on their promises (contracts? I’m not too familiar) to preserve people for the previously set amount of money?
I wouldn’t trust a business that didn’t plan for changes in the cost of its raw material commodities to so much as make ice for a lemonade stand, much less freeze people. A claim like yours should have some clarification.
When I talked to Alcor, they said that they had raised the cost to join for new members several times, but had never increased the costs for existing members. They also said not to take that as a guarantee that they would never raise the costs for existing members, because they wouldn’t guarantee that.
Please do.
There is a big difference between something being ‘the best group ever’ and being ‘an easier shortcut to rationality than digging through philosophical writings the old-fashioned way’, which is how I interpreted this post. There is a community component to LessWrong that obviously isn’t present in old books, but I don’t think that’s paramount for most people. For me, in the beginning, the Sequences were just a good way to read about interesting ideas in small, digestible chunks during my breaks at work. Now it’s a bit more than that; LessWrong gives me a chance to post my ideas where they’ll be criticized by people who don’t have any social-etiquette reason not to tear apart my arguments. But there’s a big difference between a group being the optimum, the best any group of its kind could be, which LessWrong obviously isn’t...and between being the best out of all the options in a limited area, which is more what this post is claiming (I think).
Reading books never was a good way to learn rationality. You need to learn it in practice, through discussion and debate, and you can do that in the context of mainstream philosophy because mainstream philosophy has its blogs and NGs too. (Of course it doesn’t have a “community” with a leader, a set of canonical works, and a number of not-very-provable doctrines everyone is supposed to subscribe to—and it’s better for it.)
I strongly agree.
To reply honestly to this, I think that LW is (close to) superlative in some dimensions. It’s just that when people try to tell the community that there’s a bunch of other more important dimensions that it sucks at, people get angry and shoot the messenger.
I agree. I haven’t found another online forum which I prefer. I agreed with taw because (a) I think that some people here do attach a halo to LW, viewing it as “The Way” in some generalized sense and (b) people forget that a fair portion of what appears on LW is well known within certain circles (c.f. Don’t Revere The Bearer Of Good Info ).
I’ve noticed examples of this sort of thing.
What kind of dimensions did you have in mind here?
There is however room for disagreement on just how much “more important” these “other dimensions” are.
(Not necessarily taking a position myself, mind you.)
I strongly agree, and lay the blame in part on Eliezer’s innate bombast, and in part on karma.
While karma is probably, on balance, a desirable effect, it’s also one hell of a catalyst for the halo effect. “I agree with this post/comment”, “This post/comment took a lot of work” and especially “It made me feel good” all mix into a sugary sauce of equal reward whether you provide a high-value contribution or whether you just titillate the right psychological zones.
That said, I don’t think this post is a very bad one—it provides some solid arguments for its thesis, and it wouldn’t be its fault if LessWrongers jumped on that +1 button on sheer self-congratulatory reflex.
I don’t blame people for upvoting things that make them feel good, and this post is indeed well written. I just don’t like this attitude I’ve seen over and over again. Flattery is pleasant, just don’t take it too seriously.
Upvoted. For needing to be said. Badly.
I’m not stating a viewpoint on whether I agree with your premise or not; I don’t think this is the best group ever, but I have not been here long enough to know if others do.
I would, however, like to point out that it is full of ad hominem errors that, to me, distract from your argument.
There’s no ad hominem here. The original post claims that LessWrong is great, and taw is pointing out some things that suggest that LessWrong is not great. An ad hominem here would be attacking Academian, not attacking Eliezer.
Typo: Academician->Academian
Whoops. Thanks!
How does attacking Eliezer here add to the argument?
To a large extent, and especially at the time this was written, LW was practically synonymous with Eliezer. Also, Taw is (at least primarily) referring to things Eliezer said on LW, thus it seems pretty relevant to the question of LW’s greatness.
I think I understand now thank you.
Qiaochu is questioning the presence of “ad hominem”. This issue doesn’t depend on the worth of the argument whose discussion hypothetically contains the error.
and Taw was attacking Eliezer because Eliezer is so associated with LW, and LW with him, that problems with one will often be (or at least be taken as) problems with the other. If Eliezer is systematically wrong, so are the Sequences, and thereby probably LW too.
hmm… I very much enjoy reading LW, and I’d heartily recommend it to other people who are interested in the kind of subjects discussed here, but I think some humility is in order as well.
It’s hard to put my finger on it, but esp. when it comes to philosophy, I think a lot of it can be summarized as philosophy through the eyes of a computer programmer—not necessarily a bad point of view, but not the only one.
Just the only useful one. :)
random idea: disable the upvote button if a reader reached an article by browsing through the list of top posts. Do this to prevent an echo chamber effect, in which the articles that already have the most upvotes get even more upvotes, while other upvote-worthy articles aren’t even looked at.
I checked LessWrong’s echo chamber effect when it came up by comparing the ratings of the first page of top posts to the ratings of the next page.
Using the anti-kibitzer to vote your way through an exchange, and then re-reading without the names and votes blocked is also a good method for analysing possible echo chambers.
And you found...?
Only the top couple of posts appear to be abnormal in the number of votes accrued. The kinds of echo chambers I was worried about would have had everything on the first page upvoted and few bothering to go to the second page—there might be some upvoting of Generalising from One Example going on, but that would happen simply because it’s the best post so far and not because the community thinks it’s good.
Also, seeing Humans are not automatically strategic make it into the top-voted recently is good evidence against an echo-chamber.
As for the anti-kibitzer? The only thing I noticed was that snark and sarcasm are interpreted as positive or negative depending on the poster’s reputation. Someone like wedrifid or Eliezer_Yudkowsky scores generally positive karma for such comments; most others score neutral or negative. This doesn’t bother me, as I am well aware of how difficult it is to communicate such forms of humour—and a major part is having enough goodwill for the name attached to the post to even consider snarkiness or sarcasm. Only certain people have enough reputation around here to pull it off. Overall, I think that’s a good thing: snark and sarcasm are great fun, but distracting and detrimental. When applied to an undesirable topic, the distracting and detrimental parts are also good things—but LW has lots of stuff I really want to discuss and see discussed, so the current low level of sarcasm is great.
The effect appears to be small. I don’t know if the database logs times of votes, but the times of actual posting are fairly homogeneous, so being a top post early on isn’t a big enough advantage to stay a top post forever.
I did notice an interesting trend: the number of upvotes as a function of rank undulates in an unexpected way, indicating that people are upvoting slightly differently based either on quirks of the number or on page position in the “top” pages.
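The page-to-page comparison described in this thread can be sketched in a few lines. The scores below are invented placeholders, not real LessWrong data; the point is only the shape of the check—a large gap between pages would suggest an echo chamber, a smooth drop-off would not.

```python
# Sketch of the check described above: compare the score distributions
# of the first and second pages of "top" posts. Scores are made up.
from statistics import mean, median

page1 = [247, 190, 142, 131, 120, 118, 115, 110, 108, 105]  # hypothetical
page2 = [104, 101, 99, 97, 96, 94, 93, 91, 90, 88]          # hypothetical

# A strong echo chamber would show a big cliff between pages;
# a gentle, continuous decline suggests the effect is small.
gap = mean(page1) - mean(page2)
ratio = median(page1) / median(page2)
print(gap, round(ratio, 2))
```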
If your goal is to protect yourself from an echo-chamber effect by reading everything regardless of others’ scoring, I’m not quite sure what stops you from doing that. Just browse via the “new” listing rather than the “top” listing… no?
LessWrong has a dual nature. On one hand, it’s a place where anyone can post, and where almost any idea can get a hearing.
On the other hand, LessWrong promotes the ideas of Eliezer Yudkowsky. This is inevitable, and fair, since it was originally based on Eliezer’s posts. This is also intentional; no post makes it onto the home page unless Eliezer endorses it; and he has to my knowledge never endorsed a post that disagreed with or questioned things he has said in the past.
I’m not complaining. I applaud Eliezer for opening up top-level posting to everyone; he could have just kept it as his blog. But LessWrong shouldn’t simultaneously be Eliezer’s place, and a base to use to build an entire discipline, if you want that discipline to be well-built. That’s like trying to build a school of journalism at Fox News.
Could LessWrong become such a place, if Eliezer relinquished control of the coveted green button? I don’t know. There’s more memetic homogeneity here than I would prefer for such a venture. But I don’t see any more likely candidates at present.
The other dual nature of LessWrong is that it’s about rationality, and it’s about Friendly AI. The groupthink exists mainly within the FAI aspect of LessWrong. Perhaps someday these two parts should split into separate websites?
(Or perhaps, before that happens, we will develop a web service interface enabling two websites to interact so seamlessly that the notion of “separate websites” will dissolve.)
Here’s one example of a post that criticized Eliezer and others associated with SIAI but nevertheless got promoted to the home page: http://lesswrong.com/lw/2l8/existential_risk_and_public_relations/
I think there have been others, though I don’t remember any specific ones off the top of my head.
Off the top of my head, Abnormal Cryonics.
Sometimes there are right answers, and smart people will mostly agree. I suspect your perception of “memetic homogeneity” results from your insistence on disagreeing with some obviously (at least obviously after the discussions we’ve had) right answers, e.g. persistence of values as an instrumental value.
What? Someone disagrees with that? But, but… how?
Ask Phil
If I understand what you are talking about, I have expressed disagreement with it a couple of times. My disagreement has to do with the values expressed by a coalition (which will be some kind of bargained composite of the values of the individual members of that coalition).
But then when the membership in that coalition changes, the ‘deal’ must be renegotiated, and the coalition’s values are no longer perfectly persistent—nor should they be.
This is not just a technical quibble. The CEV of mankind is a composite value representing a coalition with a changing membership.
The case of agents in conflict. Keep your values and be destroyed, or change them and get the world partially optimized for your initial values.
The case of an unknown future. You know the class of worlds you want to be in. What you don’t know yet is that to reach them you must make choices incompatible with your values. And, to make things worse, all the choices you can make ultimately lead to worlds you definitely don’t want to be in.
Yes. That is the general class that includes ‘Omega rewards you if you make your decision irrationally’. It applies whenever the specific state of your cognitive representation interacts significantly with the environment by means independent of your behaviour.
No. You don’t need to edit yourself to make unpleasant choices. Whenever you wish you were a different person than who you are so that you could make a different choice, you just make that choice.
It works for a pure consequentialist, but if one’s values have deontology in the mix, then your suggestion effectively requires changing one’s values.
And I doubt that an instrumental value which will change terminal values can be called instrumental. An agent that adopts this value (persistence of values) will end up with different terminal values than an agent that does not.
No, it’s the red button that makes the biggest difference.
The Sequences shouldn’t simultaneously be a slowly laid out, baby-steps introduction to rationality and the main resource to learn about EY’s ideas for domain specialists. They are trying to do contradictory things.
In my own attempts to study philosophy, I’ve found classical monologue-based instruction almost invariably suffers in comparison to dialogues between multiple people genuinely trying to convince each other of their ideas. When the author cannot interact with and respond to their audience, it’s easy to become complacent. Dialogue forces one to refine both one’s ideas and the presentation of one’s ideas, and makes it much easier to realistically compare a point of view to the most compelling alternatives.
I feel like the state of philosophical education would be much improved if the students were given texts constructed collaboratively, or even adversarially, with multiple co-authors trying to convince each other of their positions.
Or, of course, you could simply send them off to follow a blog or forum with high standards of debate.
Definitely true. Might I add that forums/blogs are better than real-life (verbal) discussions for developing insightful ideas, precisely because you must put more time into developing a forum/blog post, and because there are no distracting non-verbal stimuli that can make someone look “smarter” or more “authoritative” than someone else, so the best ideas get selected rather than the ideas of the most respected or prestigious person. It’s also much easier to refute individual (quoted) points that way.
Furthermore, people will forget points communicated verbally. Points communicated through forums/blogs will stay for a long time, where someone can search for them (and think about them) months later.
Why can I not upvote this post again each time I stumble across it and love it again?
But by commenting on it, you can lead others (like me) to get to read it for the first time. Even better! :)
The reason people read those works is to figure out how their authors arrived at their wrong conclusions, what has changed so that we today know better, and what this tells us about possible shortcomings of contemporary ideas. Learning from the failures of history, and about our cultural evolution and the associated conceptual revolutions, are some of the reasons to read what you might perceive as simply outdated.
That may be why people ought to read them, but I don’t think it’s why they read them. Philosophy, as taught in colleges and books, places almost no emphasis on methodology, identifying errors, critiquing and disposing of extremely bad or outdated ideas, etc. It’s as if the Enlightenment never happened.
Michael Vassar says philosophy is a field unconcerned with what people in college teach as philosophy, but I don’t know what he means. Possibly he means philosophers now are either analytic philosophers or deconstructionists.
?! Add “anything other than” right after “almost no emphasis on” and you’d have it about right. At least, judging by the two universities whose philosophy courses I took. YMMV.
I suspect what he means is that the general philosophy course tends to the ‘read and summarize as best you can’, with some questioning to test recall and a bit of comprehension, while philosophy in practice is more about nailing down arguments into a formally valid argument and then figuring out what to accept or reject about it.
The best philosophy course I ever took went basically like this: each class, the teacher walked in, defined some terms, wrote up a valid syllogism or propositional argument (sometimes messing it up just to test us), and asked us whether we accepted the conclusion or rejected one of the premises; which one, and why? Then we debated each other.
Doesn’t seem right to me; modern philosophy isn’t quite that simply divided. (Where does someone like Jurgen Habermas, to name someone in the news recently, fit in? He’s far from analytic, although he’s quite sharp in person (as I can attest), but also not merely deconstructing existing things.)
I used to think that when I became a teacher, if I ever did, my class would be either about Darwin’s Dangerous Idea or Gödel, Escher, Bach.
Not so much for the content, but because philosophy students in Brazil just need to learn how to think. (For those who have read Feynman’s QED: what was true of physics in his time is still true of philosophy now. We are massive producers of Teacher’s Password guessers.)
But after having been through the sequences, it is really tough to decide.
I’ve been thinking about offering lectures designed to help humanities people think. If you feel like doing something for free that may help some people but at the very least generate some interesting debate for you, I would like to offer a hand in cooperation.
I was thinking about getting together enough quality material (presentations, accompanying write up and a literature list) for about six 45 minute lectures followed by a bit of Q&A time and debate, with the early ones enticing and the later ones selecting people away. Its purpose would be to get rid of some of the activation energy needed to start thinking about being less wrong, especially about things those majors tend to think wrongly about, but would of course fall very much short of a “rationality education” or even a rationality “class”.
I’m organizing a group in Brazil to centralize Latin American transhumanist activities. We are still not as organized as would be necessary to get to the point you suggest. But I am sure that by February we will be.
Which means that if you want, you can send me e-mails about it to keep me enticed, and as soon as we (there are 3 of us currently) have an organized website, I will contact you back?
Is this good for you? I’m open to suggestions. My email is diegocaleiro at, the symbol, gmaill dot, the symbol, com
Where are you from, by the way?
Still thinking about that? Would you be giving AVI lectures or MPG lectures? :)
::applauds::
An applause light is a phrase that is expected to invariably elicit a cheering response; it’s something socially frowned upon to argue against. It is also characterized by a lack of specificity and relevant arguments.
In contrast, Academian elaborates why he thinks LW is good. He provides arguments. Also, criticizing LW is not a taboo here, as the fairly upvoted such posts attest.
OP does preach to the choir and almost automatically activates the anti-cult and anti-groupthink reflexes so ingrained in the LW readership. Maybe it should be rather read by Academian’s dumbstruck friends. But it isn’t an applause light.
This article doesn’t have to be solely an applause light for Crono’s implied criticism to be entirely valid. This article is too focused on heaping praise upon LessWrong in the second half. It fails to talk about other forms, doesn’t explore the thoughts of his friends at all, fails to mention all but the most harmless downsides of LW, and relies extensively on commonplace statements.
I mean, it’s not the worst thing ever. But if I had to describe it succinctly, especially in the context of ~250 karma points, “applause light” would be it.
If you’re looking for downsides to the blog format, stressing over reputation is a big one. Check out the google hits for “blogging stress”.
I wasn’t really thinking of “downsides for poor Eliezer” :P Although I guess it could lead him to write worse—though I can’t think of any format that wouldn’t allow worrying about what people will think.
I was thinking more along the lines of arguments against the ability of the blog format being the best to produce a philosophical treatise. Lack of larger structure, infrequency of revisions, rarity of outside editing, ability to get away without writing for a “timeless” audience, arguments like that.
Is playing devil’s advocate difficult for other people? I’ve never found out if it’s normally more difficult than ordinary thinking.
Comments making things worse by overpraising problematic ideas. Hyperlinked text leading to poorer understanding on the part of the reader than pure words because it distracts and breaks up the structure. Specifically applicable to LW: some notable philosophical works had much more effort put into them (thinking of an analysis of Wittgenstein in particular here)… hmm, and I’m out of ideas for the moment. Well, respectable ones at least :D
As always I affirm my appreciation of the devil while deploring advocacy of all kinds! Playing devil’s advocate encourages terrible thinking and using arguments as soldiers. On the other hand being labelled a devil’s advocate simply because you make points that are neglected even if they aren’t for the prestigious side is usually a good sign (epistemologically if not politically).
I also believe that the previous paragraph would be improved by making ‘arguments as soldiers’ a hyperlink. Lack of hyperlinking is perhaps the worst thing about books as a medium, even though there is real value in reading through an organised text that goes through the fundamentals of a subject systematically. The sequences blur the line here… they are, after all, part of Eliezer’s efforts to write a book!
I come to the exact opposite conclusion. It’s certainly possible to be a poor Devil’s Advocate: an easy way is choosing poor, unconvincing, arguments. But the exercise of trying to make the most serious and plausible argument for the other side involves carefully examining the opposing arguments and evidence, and taking them seriously, rather than as enemy soldiers to be shot down. A good Devil’s Advocate learns to judge arguments and weight them based on how useful and effective they are, how convincing they are to the unaligned. While not the same thing as “how true they are”, it is a serious step up from “how much of a security blanket they are to the unconvinced”, and much more likely to lead to “how true is this”, especially in the context of rational unaligned third parties, where truth is actually a strong element of convincing.
I totally agree with this post. When people ask me what is the best book I’ve ever read, or the most important book I’ve ever read, or what I think the best book ever written is, I say: “The Sequences, by Eliezer Yudkowsky.” Even if The Sequences were Eliezer’s only gift to humanity, that contribution alone would rank him pretty high on my list of most important people in history.
Somewhat dissenting view: For progress to be by accumulation and not by random walk, read great books.
A really dissenting view from Robin Hanson.
That’s not a dissenting view; Vassar is pointing out ideas that we now know to be correct, and suggesting that we study how they were created in order to be better at creating new good ideas. That would be impossible if we didn’t have the present vantage point of knowing which ideas turned out correct and fruitful: at least for humans, solution checking is way easier than solution finding… that’s why we still don’t know if P=NP.
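The asymmetry between checking and finding can be made concrete with subset-sum, a standard NP-complete problem. This is a minimal illustrative sketch (the function names `find` and `verify` are mine, not from the discussion): verifying a proposed answer takes one linear pass, while finding one may require searching exponentially many subsets.

```python
from itertools import combinations

def verify(nums, certificate, target):
    """Checking a proposed solution: one linear-time pass over the certificate."""
    return sum(nums[i] for i in certificate) == target

def find(nums, target):
    """Finding a solution: brute force over all 2^n subsets of indices."""
    indices = range(len(nums))
    for r in range(len(nums) + 1):
        for subset in combinations(indices, r):
            if sum(nums[i] for i in subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
cert = find(nums, 9)           # exponential-time search
print(cert)                    # [2, 4], since nums[2] + nums[4] = 4 + 5 = 9
print(verify(nums, cert, 9))   # True, confirmed in linear time
```

Whether every problem whose solutions are this easy to check is also easy to solve is exactly the open P vs. NP question.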
The history of science is very romantic and inspiring, precisely because we know which works to look to for inspiration. And of course, we should do that.
I believe Vassar’s point is that some of the ideas we now believe could actually be wrong (and in fact a lot of them probably are), and some older ideas might be closer to the truth.
Keep in mind that societies frequently reject ideas for reasons unrelated to their truth values.
I was making both points, the former for physics, the latter for almost all other fields.
Sorry, you’ll have to excuse a bit of my ignorance here.
What are some of Hume’s “bad” ideas? He’s a philosopher I cherish quite a bit. I’d be interested to know what his “bad” ideas are. (Have you read Hume at all? Or anything about Hume?)
I think reading Kant about “Time” (why capital T?) could be a bad idea, since so many ideas about space and time have since been reshaped by modern physics. (For instance, Kant thought that physical space was, a priori, Euclidean; please correct me if I’m misinterpreting Kant here. That is unfortunate but completely reasonable for his era.)
I think the most exciting idea Kant had was his attempt to establish a “Copernican Revolution” in philosophy: that our perceptions of the world, and our minds, are somehow limited and subject to constraints like any other object in the world. I will direct all interested parties to this podcast.
I also think Hume was pretty amazing, which is why I picked him. Accusing him in particular of “bad” “ideas” is a bit harsh, since my issue is as much with non-ideas as with “bad” ones (so thanks for pointing this out). Let me say this better:
1) First, read the Wikipedia article on Hume and his many awesome ideas.
2) Next, start reading, say Part 1 of his Dialogues Concerning Natural Religion (including Pamphilus to Hermippus).
They’re about the same length, but the density of ideas in (2) that are interesting by modern standards is extremely low in comparison to (1). This is, of course, a credit to Hume: he was so right that his writing mostly looks like overly-verbose common sense these days, at least to regular readers of LessWrong.
I think I’ll edit the OP to better reflect my view here. New sentence:
While I agree that Less Wrong is a great venue for learning about rationality, I think we can improve the newbie experience for those who are coming from reading the gathered thoughts of a Very Special Person to the community-blog setting of many people writing together. I am particularly concerned with this question because I am starting up a nonprofit devoted to spreading rationality among the broad masses, and I hope to channel advanced students to Less Wrong. Do you have any thoughts on how to smooth the transition for newcomers?
I’m having trouble parsing this sentence. Would you mind elaborating or supplying an alternative phrasing?
Ah, looking back I think I got it, to → the.
Is it broken only for me? When I click on it, it shows me the last two posts (and nothing else). I didn’t spot any setting in Preference that seemed to cause this.
I agree, and found what you wrote thought-provoking. I have a thought I want to share with the author. What if?
The basic unit of conceptual evolution is a self-evolving survey form that solicits feedback on its own design. The minimum conceptual seed topic for a survey form is a story that includes information with multiple interpretations. Short stories and surveys are numbered in a default preferred order of presentation, but users can choose their own order. Each photo and story solicits reader contributions, with complex, meaningful questions to claim the time spent reading, reasoning, and writing about the story. All reader surveys are public, voted on, and used for iterative refinement of the photo stories and survey questions. If users read and critique two or more stories, they are encouraged to competitively rate and vote for their favorite story by answering complex questions about the comparative value of each presentation in a third survey.
Anyone can create a survey or presentation by copying and editing an existing version. Complete a contest entry form to receive credit in “Effort Minutes”, with verification. Then compete to be voted on favorably in categories of entertainment, insight, value, inclusiveness, distribution, enlightenment, or other (you decide). You get credit just for the minutes of time readers choose to spend reading what you created and answering your survey questions. Your contest prizes from votes and reading-attention-minute royalties can be converted into line-wait minutes or any other Time Value Accounting metric.
source: the hOEP Project, https://docs.google.com/document/d/1-NJXPgQEhxCQouBott8j3rOut794laBOA3aU3UGD10U/edit