If we want to know where the truth lies in particular cases, we have to look.
Richard Dawkins, The Selfish Gene
None are so fallible as those who are sure they’re right.
Strunk & White, The Elements of Style
I was very surprised to see that too, to the point of questioning whether the result was real, but apparently it is. (The particular result is on page 10 — and possibly elsewhere, I haven’t read it through yet.)
I found the Coming of Age series to be both self-indulgent and quite dull, and I think that it’s very difficult to use yourself as an example of vice or virtue without running into one or both of those issues. I also find that I (more-or-less automatically) downgrade an author’s ethos by a lot when he’s talking about himself as an illustrative example. But for this one, it’s the skeeviness factor that dominates — it’s just plain creepy to hear about your love life as a source of telling anecdotes. And that’s distracting.
Polyamory may be great, but the right way to promote it is not by slipping into a post the implication that it’s the endpoint of rational thinking about romance. Which is what this reads as, whether you intended it to or not. If you want to advocate polyamory here (and honestly, I’m not sure that Less Wrong is the right place to do so), you should devote an entire post to it, and set forth clear arguments as to why it’s the better option, rather than presuming it in your advice.
The Sequences do not consist of Eliezer promoting himself as a master rationalist, nor do they assume that you already think he is. He argues for certain positions, and the reader comes to believe that he is a good rationalist as a result of being convinced that the positions he holds are rational. The tone of this is much closer to the motivational-seminar pitch of “I turned my life around using these three simple principles”, with the additional difficulty that we’re just taking your word for your romantic success. It’s not credible.
Also, I found that the bite-sizing of the lessons made them feel like distractions to be skipped over rather than principles that the anecdotes were illustrating.
Downvoted.
It’s interesting and potentially useful, and I liked some of the links; however, I felt seriously skeeved-out throughout, probably due to the combination of uncomfortably personal authorial bildungsroman (with connotations of “if you do this right, you can be just like me”), and the implied promotion of polyamory. Would work much better if you could remove the autobiographical aspects.
Another erratum: Noah’s Flash Flood is listed as level 9 in the chart and level 8 in the descriptions.
Also, the first page incorrectly describes it as being the DM guide.
FYI, I showed the manual to a (non-Less Wrong) philosophy-major friend who runs D&D games, so you may develop a splinter group.
This one is really important, for a reason that wasn’t spelled out in the original article — hindsight bias makes people think folk wisdom is more valid than it really is, and thereby opens the door to all kinds of superstitious belief. If people interpret scientific evidence as confirming ‘common-sense’ or handed-down knowledge (because they select the ‘common-sense’ belief that turned out to be true after seeing the data, rather than having to select one from the morass beforehand), then they’re likely to increase their credence in other knowledge of that type. You see this all the time when people say things like “science is just now finding proof for medicines that indigenous societies have been using for thousands of years — so here, try this snake oil!”
The Atlantic put up a piece today using HP:MoR as the take-off point for discussing fanfiction and fan communities.
To be precise, knowing that someone is biased towards holding a belief decreases the amount you should update your own beliefs in response to theirs — because it decreases the likelihood ratio of the test.
(That is, having a bias towards a belief means people are more likely to believe it when it isn’t true (more false positives), so a bias-influenced belief is less likely to be true and therefore weaker evidence. In Bayesian terms, bias increases P(B) without increasing P(B|A), so it decreases P(A|B).)
So CarmendeMacedo’s right that you can’t get evidence about the world from knowledge of a person’s biases, but you should decrease your confidence if you discover a bias, because it means you had the wrong priors when you updated the first time.
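The argument above can be checked numerically. The sketch below uses made-up probabilities (the 0.9, 0.1, and 0.5 figures are purely illustrative) to show how a bias that adds false positives raises P(B) without raising P(B|A), and thereby lowers the posterior P(A|B):

```python
def posterior(p_a: float, p_b_given_a: float, p_b_given_not_a: float) -> float:
    """P(A | B) via Bayes' theorem: P(B|A)P(A) / P(B)."""
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    return p_b_given_a * p_a / p_b

prior = 0.5       # P(A): prior probability that the proposition is true
likelihood = 0.9  # P(B | A): chance of holding the belief when it's true

# An unbiased believer rarely believes false things (few false positives);
# a biased believer often believes them even when false (many false positives).
unbiased = posterior(prior, likelihood, p_b_given_not_a=0.1)
biased = posterior(prior, likelihood, p_b_given_not_a=0.5)

print(f"unbiased believer: P(A|B) = {unbiased:.2f}")  # 0.90
print(f"biased believer:   P(A|B) = {biased:.2f}")    # 0.64
```

Same prior, same likelihood of believing when true; the only change is the false-positive rate, and the belief becomes correspondingly weaker evidence.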
Sometimes, apparently rational self-interested strategies turn out (as in the prisoners’ dilemma) to be self-defeating. This may look like a defeat for rationality, but it is not. Rationality is saved by its own open-endedness. If a strategy of following accepted rules of rationality is sometimes self-defeating, this is not the end. We revise the rules to take account of this, so producing a higher-order rationality strategy. This in turn may fail, but again we go up a level. At whatever level we fail, there is always the process of standing back and going up a further level.
Quoted in The Blank Slate by Steven Pinker
We should be careful to get out of an experience only the wisdom that is in it.
Mark Twain
It was a good answer that was made by one who when they showed him hanging in a temple a picture of those who had paid their vows as having escaped shipwreck, and would have him say whether he did not now acknowledge the power of the gods,—‘Aye,’ asked he again, ‘but where are they painted that were drowned after their vows?’ And such is the way of all superstition, whether in astrology, dreams, omens, divine judgments, or the like; wherein men, having a delight in such vanities, mark the events where they are fulfilled, but where they fail, though this happens much oftener, neglect and pass them by.
Francis Bacon
As a somewhat casual reader and participant, my immediate reaction (regardless of functionality, which I really haven’t tried out yet) is that the new design is horrendously ugly compared to the old one. I was intending to go through the Sequences soon, but the visual change is a pretty strong disincentive.
If at all possible, I’d like the ability to view posts using the old interface.
Since being introduced to Less Wrong and clarifying that ‘truth’ is a property of beliefs corresponding to how accurately they let you predict the world, I’ve separated ‘validity’ from ‘truth’.
The syllogism “All cups are green; Socrates is a cup; therefore Socrates is green” is valid within the standard system of logic, but it doesn’t correspond to anything meaningful. But the reason that we view logic as more than a curiosity is that we can use logic and true premises to reach true conclusions. Logic is useful because it produces true beliefs.
Some mathematical statements follow the rules of math; we call them valid, and they would be just as valid in any other universe. Math as a system is useful because (in our universe) we can use mathematical models to arrive at predictively accurate conclusions.
Bringing ‘truth’ into it is just confusing.
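The cups syllogism can be made concrete in a proof assistant. In this Lean sketch (the names `Thing`, `isCup`, `isGreen`, and `socrates` are arbitrary placeholders), the proof goes through purely by the inference rule; nothing requires the premises to actually hold in our world, which is exactly the sense in which validity is separate from truth:

```lean
variable (Thing : Type) (isCup isGreen : Thing → Prop) (socrates : Thing)

-- Valid regardless of whether any cup is green, or Socrates is a cup:
theorem socrates_is_green
    (h1 : ∀ x, isCup x → isGreen x)  -- "All cups are green"
    (h2 : isCup socrates)            -- "Socrates is a cup"
    : isGreen socrates :=
  h1 socrates h2
```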
I find that Less Wrong is a conflation of about six topics:
Singularitarianism (e.g. discussion of SIAI, world-saving)
Topics in AI (e.g. decision theory, machine learning)
Topics in philosophy (e.g. metaethics, anthropic principle)
Epistemic rationality (e.g. dissolving questions, training mental skills, cognitive biases)
Applied rationality (e.g. how to avoid akrasia, how to acquire skills efficiently)
Rationality community (e.g. meetups, exchanging knowledge)
These don’t all seem to fit together entirely comfortably. Ideally, I’d split them into three more-coherent sections (singularitarianism and AI, philosophy and epistemic rationality, and applied rationality and community), each of which could probably be more effective as its own space.
A very popular error: having the courage of one’s convictions; rather it is a matter of having the courage for an attack on one’s convictions.
Friedrich Wilhelm Nietzsche
The pop-up window you get when you click on a voting button before logging in always seemed ugly and discordant to me.
The keywords here are “randomized response”. There are some interesting variations (from the Wikipedia page):
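For concreteness, here is a sketch of the basic coin-flip variant of randomized response (not any particular variation from the Wikipedia page). Each respondent flips a coin in private: heads, answer the sensitive question truthfully; tails, flip again and answer "yes" on heads, "no" on tails. No individual answer reveals the respondent's true status, but since P(yes) = 0.5·p + 0.25, the aggregate rate p is recoverable as 2·P(yes) − 0.5:

```python
import random

def respond(truth: bool, rng: random.Random) -> bool:
    """One respondent's answer under the coin-flip scheme."""
    if rng.random() < 0.5:        # first coin: heads -> answer truthfully
        return truth
    return rng.random() < 0.5     # tails -> answer by a second coin flip

def estimate(responses: list[bool]) -> float:
    """Recover the population rate: p = 2 * P(yes) - 0.5."""
    p_yes = sum(responses) / len(responses)
    return 2 * p_yes - 0.5

rng = random.Random(0)
true_rate = 0.3                   # fraction with the sensitive attribute
population = [rng.random() < true_rate for _ in range(100_000)]
responses = [respond(t, rng) for t in population]
print(f"estimated rate: {estimate(responses):.3f}")  # close to 0.3
```

With a large enough sample, the estimate converges on the true rate even though every individual answer is plausibly deniable.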