What probability should I assign to being completely wrong and brainwashed by LessWrong? What steps would one take to get more actionable information on this topic? For each new visitor who comes in and accuses us of messianic groupthink, how far should I update in the direction of believing them? Am I going to burn in counterfactual hell for even asking?
If LessWrong were good at brainwashing, I would expect many more people to have signed up for cryonics.
Spend time outside of LessWrong and discuss things with smart people. Don’t rely on a single community to give you your map of the world.
The first thing you should probably do is narrow down what specifically you feel you may be brainwashed about. I’ve listed some sample items below. Since you mention messianic groupthink as a specific concern, some of these relate to Yudkowsky, and some of them are Less Wrong versions of cult-related control questions (things that are associated with cultishness in general, just rephrased to be Less Wrongish).
Do you/Have you:
1: Signed up for cryonics.
2: Aggressively donated to MIRI.
3: Checked for updates on HPMOR more often than Yudkowsky said there would be, on the off chance he updated early.
4: Gone to meetups.
5: Gone out of your way to see Eliezer Yudkowsky in person.
6: Spent time thinking, when not on Less Wrong: “That reminds me of Less Wrong/Eliezer Yudkowsky.”
7: Played an AI Box experiment with money on the line.
8: Attempted to engage in a quantified self experiment.
9: Cut yourself off from friends because they seemed irrational.
10: Stopped consulting sources outside of Less Wrong.
11: Spent money on a product recommended by someone with high karma (example: MetaMed).
12: Tried to recruit other people to Less Wrong and felt negatively if they declined.
13: Written rationalist fanfiction.
14: Decided to become polyamorous.
15: Felt as if you had sinned any time you received even a single downvote.
16: Gone out of your way to adopt Less Wrong-style phrasing in dialogue with people who don’t even follow the site.
For instance, after reviewing that list, I increased my certainty that I was not brainwashed by Less Wrong, because there are a lot of those I haven’t done or don’t do; but I also know which questions are explicitly cult-related, so I’m biased. For some of these, I don’t currently know anyone on the site who would say yes to them.
1: No.
2: I’m a top 20 donor.
3: Nope.
4: Yes.
5: Not really? That was probably some motivation for going to a minicamp, but not most of it.
6: Nope.
7: Nope.
8: A tiny amount? I’ve tracked weight throughout diet changes.
9: Not that I can think of. Certainly no one closer than a random Facebook friend.
10: Nope.
11: I’ve spent money on modafinil after it was recommended on here. I could count melatonin, but my dad told me about that years ago.
12: Yes.
13: Nope.
14: I was in an open relationship before I ever heard of LessWrong.
15: HAHAHAHAHAH no.
16: This one is hard to analyze; I’ve talked about EM hell and so on outside of the context of LessWrong. Dunno.
Seriously considering moving to the Bay Area.
I’m in the process of doing 1, have maybe done 2 depending on your definition of aggressively (made only a couple donations, but largest was ~$1000), and done 4.
Oh, and 11, I got Amazon Prime on Yvain’s recommendation, and started taking melatonin on gwern’s. Both excellent decisions, I think.
And 14, sort of. I once got talked into a “polyamorous relationship” by a woman I was sleeping with, no connection whatsoever to LessWrong. But mostly I just have casual sex and avoid relationships entirely.
… huh. I’m a former meetup organizer and I don’t even score higher than two on that list.
Cool. I think maybe we’re not a cult today.
Good. I score 5 out of 16, interpreting each point in the broadest reasonable way.
Beware: you’ve created a Less Wrong purity test.
In “The Inertia of Fear and the Scientific Worldview”, the Russian computer scientist and Soviet-era dissident Valentin Turchin has a chapter, “The Ideological Hierarchy”, which analyzes Soviet ideology as having four levels: a philosophical level (e.g. dialectical materialism), a socioeconomic level (e.g. social class analysis), the history of Soviet Communism (the Party, the Revolution, the Soviet state), and “current policies” (i.e. whatever was in Pravda op-eds that week).
According to Turchin, most people in the USSR regarded the day-to-day propaganda as empty and false, but a majority would still have agreed with the historical framework, for lack of any alternative view; and the number who explicitly questioned the philosophical and socioeconomic doctrines would be exceedingly small. (He appears not to be counting religious people here, who numbered in the tens of millions and whom he describes as a separate ideological minority.)
BaconServ writes that “LessWrong is the focus of LessWrong”, though perhaps the idea would be more clearly expressed as, LessWrong is the chief sacred value of LessWrong. You are allowed to doubt the content, you are allowed to disdain individual people, but you must consider LW itself to be an oasis of rationality in an irrational world.
I read that and thought, meh, this is just the sophomoric discovery that groupings formed for the sake of some value have to value themselves too; the Omohundro drive to self-protection, at work in a collective intelligence rather than in an AI. It also overlooks the existence of ideological minorities who think that LW is failing at rationality in some way, but who hang around for various reasons.
However, these layered perspectives—which distinguish between different levels of dissent—may be useful in evaluating the ways in which one has incorporated LW-think into oneself. Of course, Less Wrong is not the Soviet Union; it’s a reddit clone with meetups that recruits through fan fiction, not a territorial superpower with nukes and spies. Any search for analogies with Turchin’s account should look for differences as well as similarities. But the general idea, that one may disagree with one level of content but agree with a higher level, is something to consider.
Or not.
Could people list philosophy-oriented internet forums with a high concentration of smart people and no significant memetic overlap, so that one could test this? I don’t know of any, and I think that’s dangerous.
I would love to see this as well
I’d suggest starting by reading up on “brainwashing” and developing a sense of what signs characterize it (and, indeed, if it’s even a thing at all).
Presumably this depends on how much new evidence they are providing relative to the last visitor accusing us of messianic groupthink, and whether you think you updated properly then. A dozen people repeating the same theory based on the same observations is not (necessarily) significantly more evidence in favor of that theory than five people repeating it; what you should be paying attention to is new evidence.
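To make that concrete, here is a minimal sketch in Python; the prior and the likelihood ratios are invented purely for illustration, and the point is only the contrast between treating a dozen critics as independent witnesses and treating them as repetitions of one underlying observation:

```python
# A minimal sketch (invented numbers, not anything from the thread) of why a
# dozen critics repeating the same observations shouldn't move you much more
# than the first critic did. Updating is done in odds form: posterior = prior * LR.

def update_odds(prior_odds, likelihood_ratio):
    """One Bayesian update in odds form."""
    return prior_odds * likelihood_ratio

def to_prob(odds):
    """Convert odds back to a probability."""
    return odds / (1 + odds)

prior_odds = 0.05 / 0.95  # illustrative prior odds that the accusation is right

# Case 1: treat 12 critics as independent witnesses, each worth a likelihood ratio of 1.5.
odds_independent = prior_odds
for _ in range(12):
    odds_independent = update_odds(odds_independent, 1.5)

# Case 2: the first critic carries the evidence (LR = 1.5); the other 11 repeat
# the same observations, so each adds only a token LR of 1.05.
odds_correlated = update_odds(prior_odds, 1.5)
for _ in range(11):
    odds_correlated = update_odds(odds_correlated, 1.05)

print(f"treated as independent evidence: {to_prob(odds_independent):.2f}")  # ~0.87
print(f"treated as repeated evidence:    {to_prob(odds_correlated):.2f}")   # ~0.12
```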
Note that your suggestions are all within the framework of the “accepted LW wisdom”. The best you can hope for is to detect some internal inconsistencies in this framework. One’s best chance of “deconversion” is usually to seriously consider the arguments from outside the framework of beliefs, possibly after realizing that the framework in question is not self-consistent or leads to personally unacceptable conclusions (like having to prefer torture to specks). Something like that “worked” for palladias, apparently. Also, I once described an alternative to the LW epistemology (my personal brand of instrumentalism), but it did not go over very well.
Brainwashing (which is one thing drethlin asked about the probability of) is not an LW concept, particularly; I’m not sure how reading up on it is remaining inside the “accepted LW wisdom.”
If reading up on brainwashing teaches me that certain signs characterize it, and LW demonstrates those signs, I should increase my estimate that LW is brainwashing people, and consequently that I’m being brainwashed. And, yes, if I conclude that it’s likely that I’m being brainwashed, there are various deconversion techniques I can use to negate that.
Of course, seriously considering arguments from outside the framework of beliefs is a good idea regardless.
Being completely wrong, admittedly (the other thing drethlin asked about the probability of), doesn’t lend itself to this approach so well… it’s hard to know where to even start there.
Reading up on brainwashing can mean reading gwern’s essay, which concludes that brainwashing doesn’t really work. Of course, that’s exactly what someone who wanted to brainwash you would tell you, isn’t it?
Sure. I’m not exactly sure why you’d choose to interpret “read up on brainwashing” in this context as meaning “read what a member of the group you’re concerned about being brainwashed by has to say about brainwashing,” but I certainly agree that it’s a legitimate example, and it has exactly the failure mode you imply.
For what it’s worth, gwern’s findings are consistent with mine (see this thread). I’d rather restrict “brainwashing” to coercive persuasion, e.g. indoctrinating prisoners of war or what have you, but Scientology, the Unification Church, and so forth also seem remarkably poor at long-term persuasion. It’s difficult to find comparable numbers for large, socially accepted religions, or for that matter nontheism—more of the conversion process plays out in the public sphere, making it harder to delineate, and ulterior motives (e.g. converting to a fiancée’s religion) are much more common—but if you read between the lines they seem to be higher.
Deprogramming techniques aren’t much better, incidentally—from everything I’ve read they range from the ineffective to the abusive, and often have quite a bit in common with brainwashing in the coercive sense. You couldn’t apply most of them to yourself, and wouldn’t want to in any case.
No argument there. What I alluded to is the second part, incremental “Bayesian” updating based on (independent) new evidence. This is more of an LW “inside” thing.
Ah! Yes, fair.
Sorry, I wasn’t trying to be nonresponsive, that reading just didn’t occur to me. (Coincidence? Or a troubling sign of epistemic closure?)
I will admit, the idea that I should update beliefs based on new evidence, but that repeatedly presenting me with the same evidence over and over should not significantly update my beliefs, seems to me nothing but common sense.
Of course, that’s just what I should expect it to feel like if I were trapped inside a self-reinforcing network of pernicious false beliefs.
So, all right… in the spirit of seriously considering arguments from outside the framework, and given that as a champion of an alternative epistemology you arguably count as “outside the framework”, what would you propose as an answer to drethlin’s question about how far they should update based on each new critic?
Hmm. My suspicion is that formulating the question in this way already puts you “inside the box”, since it uses Bayesian terms to begin with. Something like trying to detect problems in a religious moral framework after postulating objective morality. Maybe this is not a good example, but a better one eludes me at the moment. To honestly try to break out of the framework, one has to find a way to ask different questions. I suspect that I am too much “inside” to figure out what they could be.
(nods) That’s fair.
And I can certainly see how, if we did not insist on framing the problem in terms of how to consistently update confidence levels based on evidence in the first place, other ways of approaching the “how can I tell if I’m being brainwashed?” question would present themselves. Some traditional examples that come to mind are praying for guidance on the subject and various schools of divination. Of course, a huge number of less traditional possibilities that seem equally unjustified from a “Bayesian” framework (but otherwise share nothing in common with those) are also possible.
I’m not terribly concerned about it, though.
Then again, I wouldn’t be.
Wrong about what? Different subjects call for different probabilities.
The probability that Bayes’ theorem is wrong is vanishingly small. The probability that the UFAI risk is completely overblown is considerably higher.
LW “ideology” is an agglomeration in the sense that accepting (or not) a part of it does not imply acceptance (or rejection) of other parts. One can be a good Bayesian, not care about UFAI, and be signed up for cryonics—no logical inconsistencies here.
As long as the number is small, I wouldn’t update at all, because I already expect a slow trickle of those people on my current information, so seeing that expectation confirmed isn’t new evidence. If LW achieved a Scientology-like place in popular opinion, though, I’d be worried.
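In likelihood-ratio terms (a minimal sketch with made-up numbers, not a claim about the actual probabilities): a slow trickle of accusations is almost equally likely whether or not the charge is true, so it barely moves the posterior, while Scientology-level notoriety would be far more likely if the charge were true and would force a large update.

```python
# Sketch: evidence you already expected barely moves you; all numbers are invented.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem for a binary hypothesis H given evidence E."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.05  # illustrative prior that the "messianic groupthink" charge is right

# A slow trickle of drive-by critics is expected either way (likelihood ratio near 1):
print(posterior(prior, p_e_given_h=0.90, p_e_given_not_h=0.80))  # ~0.056: almost no update

# Scientology-level notoriety would be much more expected if the charge were right:
print(posterior(prior, p_e_given_h=0.50, p_e_given_not_h=0.01))  # ~0.72: a large update
```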
No.
Less Wrong has some material on this topic :)
Seriously though, I’d love to see some applied-rationality techniques put to use successfully doubting parts of the applied rationality worldview. I’ve seen some examples already, but more is good.
The biggest weakness, in my opinion, of purely (or almost purely) probabilistic reasoning is that it cannot ultimately do away with our reliance on a number of (ultimately faith- or belief-based) choices about how we understand our reality.
The existence of the past and the future (and, within most people’s reasoning systems, the understanding of these as linear) are ultimately postulates that are generally accepted at face value, as is the idea that consciousness/awareness arises from matter/quantum phenomena rather than vice versa.
In your opinion, is there some other form of reasoning that avoids this weakness?
That’s a very complicated question, but I’ll do my best to answer.
Many ancient cultures used two words for the mind, or for thinking, and the second is still used figuratively today: “In my heart I know...”
In my opinion, in terms of expected impact on the course of a given person’s life, what they ‘want’, and how they define themselves, consciously and unconsciously, generally matters more than their understanding of Bayesian reasoning.
For “reasoning”, no, I doubt there is a better system. But since we must (or almost universally do) follow our instincts on a wide range of issues (Is everyone else a p-zombie? Am I real? Is my chair conscious? Am I dreaming?), it is highly important, and often overlooked, that one’s “presumptive model” of reality and of oneself (the two being closely intertwined psychologically) should be perfected with at least as much effort as we spend perfecting our probabilistic reasoning.
Probabilities can’t cover everything. Eventually you just have to make a choice as to which concept or view you believe more, and that choice changes your character, and your character changes your decisions, and your decisions are your life.
When one is confident, and subconsciously/instinctively aware, that they are doing what they should be doing, thinking how they should be thinking, and that their ‘foundation’ is solid (moral compass, goals, motivation, emotional baggage, openness to new ideas, etc.), they can then be a much more effective rationalist, and be more sure (albeit only instinctively) that they are doing the right thing when they act.
Those instinctive presumptions and that life-defining self-image have a strong, quantifiable impact on the life of any human, and even a nominal understanding of rationality would allow one to realize that.
Maximise your own effectiveness. Perfect how your mind works and how you think of yourself and others (again, instinctive opinions and gut feelings more than conscious thought, although conscious thought is extremely important). Then, when you start teaching it and filling it with data, you’ll make a lot fewer mistakes.
All right. Thanks for clarifying.
I think the discussion about the value of Bayesianism is good (this post and following).
You’d be crazy not to ask. The views of people on this site are suspiciously similar. We might agree because we’re more rational than most, but you’d be a fool to reject the alternative hypothesis out of hand. Especially since they’re not mutually exclusive.
Use culture to contrast with culture. Avoid being a man of a single book and get familiar with some past or present intellectual traditions that are distant from the LW cultural cluster. Try to get a feel for the wider cultural map and how LW fits into it.
I’d say you should assign a very high probability to your beliefs being aligned in the direction LessWrong’s are, even in cases where such beliefs are wrong. It’s just how the human brain and human society work; there’s no getting around it. However, how much of that alignment is due to self-selection bias (choosing to be a part of LessWrong because you are that type of person) and how much to brainwashing is a more difficult question.