Well, if posting on LW is no longer fun, shouldn’t we try to go more meta and fix the problem?
Of course, this shouldn’t be Eliezer’s top priority. And generally, it shouldn’t be left to Eliezer to fix every single detail.
I think it would be good to have some kind of psychological task force on LessWrong. By which I mean people who actually study and apply the stuff, in the same way we have math experts here.
The next step in the Art could be to make rationality fun. And I don’t mean “do funny things that signal your membership in the LW community” but rather invent systematic ways to make instrumentally rational things feel better, so you alieve they are good.
More generally, to overcome the disconnect between what we believe and how we feel. I think many people are doing reversed stupidity here. We have learned that letting our emotions drive our thoughts is wrong, so the solution was to disconnect emotions from thoughts. That is a partial solution which works, but it has a costly impact on motivation. Eliezer wrote that it is okay to accept some emotions, if they are compatible with rational thought. But the full solution would be to let our thoughts drive our emotions: not merely to accept the rational feeling if it happens to exist, but to engineer it, by changing our internal and external environments. (On the other hand, this is just another way insufficiently rational people could hurt themselves.)
I linked to this a few days ago. I’ve been experimenting with the technique described there over the past few days, and it seems to work pretty well. For example, trying to spend all of my mental bandwidth noticing good things (re-noticing good things I had already noticed was allowed) seemed to get me out of a depressive funk in an hour or two. The technique also has some other interesting benefits. Some of the positive things I notice are good things that I did, which has the effect of reinforcing those behaviors. By noticing the good things going on in social interactions, I enjoy myself more and become more relaxed and fun to be around (in theory at least—only limited experience with this one thus far). And sometimes I get valuable ideas by realizing that something that initially seemed bad actually has a hidden upside (which reminds me of research I’ve read about lucky people).
At this point, I’m left wondering why humans evolved to be so gosh-darn negative all the time. It feels like there must be some hidden upside to being negative that just hasn’t occurred to me.
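I like that link!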
At this point, I’m left wondering why humans evolved to be so gosh-darn negative all the time. It feels like there must be some hidden upside to being negative that just hasn’t occurred to me.
Some guesses:
Compared with the rest of nature, and even with large parts of humankind, we live incredibly lucky lives. Our monkey brains were not designed for this; they are probably designed to maintain a certain level of unhappiness, so they invent some if they don’t get enough from outside. Similarly, our immune systems develop allergies in the absence of parasites. Our mechanisms for fighting problems do not have an off switch, because in nature there was no reason to evolve one.
There is probably also a status aspect to this. If you are low status, you had better not express too much happiness in front of higher-status monkeys, because they will punish you just to teach you your place. That’s probably because low status itself makes people unhappy, so if you are not unhappy enough, it seems like you are claiming higher status.
I would expect many people to provide a rationalization: “But if I am happy, that will make me less logical! And I will not be motivated to improve things.” (But I think that is nonsense, because unhappiness is also an emotion, and it also interferes with logic. And unhappy people probably have less “willpower” to improve things.)
I’ll use the term “threat” for a problem where avoidance and/or submission is a good way of dealing with it.
If a tiger is known to live in a particular part of a forest, that is a threat: Avoiding that part of the forest is a good way of dealing with the problem. If I take part in a hunting expedition and I don’t do my part because I’m too much of a coward, that is also a threat: If I act as if nothing happened and eat as much food as I want, etc. then my fellow tribespeople will think I’m an obnoxious jerk and I’ll be liable to get kicked out. So submission is a good way of dealing with this problem.
If I’m hungry or sleepy or I have homework to do or I need to get a job, those are not threats, even though they have potentially dire consequences: ignoring these problems is not going to make them go away.
Hypothesis: the EEA was full of threats according to my definition; the modern world has fewer such threats. However, we’re wired to assume our environment is full of threats. We’re also wired to believe that if a problem is a serious one, it’s likely a threat. So we’re more likely to exhibit the avoidance behavior for serious problems like finding a job than for trivial ones like solving a puzzle.
(I like the idea of co-opting the word “threat” because then you can repeat phrases like “this is not a threat” in your internal monologue to reassure yourself, if you’ve checked to see if something is a threat and it doesn’t seem to be.)
This seems correct. In a jungle, the cost of failure is frequently death. In our society, when you live an ordinary life (so this does not apply to things like organized crime or playing with explosives), the costs are much smaller, and there is much fun to be gained. But our brains are biased to believe they are in the jungle; they incorrectly perceive many things as tiger equivalents.
This is kind of nitpicky, but “the cost of failure is frequently death” is not the same as “avoidance and/or submission is a good way of dealing with the problem”. It’s not enough to show that in the EEA things could kill you… you have to show that they could kill you, and that trying hard not to think about them was the best way to avoid having them kill you.
I found some interesting thoughts in the book Learned Optimism about the evolutionary usefulness of pessimism:
The benefits of pessimism may have arisen during our recent evolutionary history. We are animals of the Pleistocene, the epoch of the ice ages. Our emotional makeup has most recently been shaped by one hundred thousand years of climatic catastrophe: waves of cold and heat; drought and flood; plenty and sudden famine. Those of our ancestors who survived the Pleistocene may have done so because they had the capacity to worry incessantly about the future, to see sunny days as mere prelude to a harsh winter, to brood. We have inherited these ancestors’ brains and therefore their capacity to see the cloud rather than the silver lining.
...
Pessimism produces inertia rather than activity in the face of setbacks.
If the weather is very cold and your brain’s probability estimate of finding any game in the frost is low, maybe inactivity really is the best approach. But if I, as a modern human, am not calorie-constrained, then inactivity seems less wise.
At this point, I’m left wondering why humans evolved to be so gosh-darn negative all the time. It feels like there must be some hidden upside to being negative that just hasn’t occurred to me.
It’s not so much that there’s an upside to negativity as that continued positivity is evolutionarily useless. Evolution wants you to “chase the dragon” of steep, exciting highs rather than maintain a reasonably happy steady-state or, worse yet from Its perspective, “go full transhuman” and rewrite your own mind-design to bring Being Happy and Doing the Right Things into perfect alignment (which we can’t do yet, but probably will be able to someday).
For a community-scale solution, this article seems correct.
I expect spats, arguments, occasional insults, and even inevitable grudges. We’ve all done that. But in the end, I expect you to act like a group of friends who care about each other, no matter how dumb some of us might be, no matter what political opinions some of us hold, no matter what games some of us like or dislike.
One of the first things I learned when I began researching discussion platforms two years ago is the importance of empathy as the fundamental basis of all stable long term communities.
Hate is easy to recognize. Cruelty is easy to recognize. You do not tolerate these in your community, full stop. But what about behavior that isn’t so obviously corrosive? What about behavior patterns that seem sort of vaguely negative, but … nobody can show you exactly how this behavior is directly hurting anyone?
Disagreement is fine, even expected, provided people can disagree in an agreeable way. But when someone joins your community for the sole purpose of disagreeing, that’s Endless Contrarianism. If all a community member can seem to contribute is endlessly pointing out how wrong everyone else is, and how everything about this community is headed in the wrong direction – that’s not building constructive discussion – or the community.
Axe-Grinding is when a user keeps constantly gravitating back to the same pet issue or theme for weeks or months on end. This rapidly becomes tiresome to other participants who have probably heard everything this person has to say on that topic multiple times already.
Griefing is when someone goes out of their way to bait a particular person for weeks or months on end. By that I mean they pointedly follow them around, choosing to engage on whatever topic that person appears in, and needle the other person in any way they can, but always strictly by the book and not in violation of any rules… technically.
In any discussion, there is a general expectation that everyone there is participating in good faith – that they have an open mind, no particular agenda, and no bias against the participants or the topic. While short term disagreement is fine, it’s important that the people in your community have the ability to reset and approach each new topic with a clean(ish) slate. When you don’t do that, when people carry ill will from previous discussions toward the participants or topic into new discussions, that’s a grudge. Grudges can easily lead to every other dark community pattern on this list. I cannot emphasize enough how important it is to recognize grudges when they emerge so the community can intervene and point out what’s happening, and all the negative consequences of a grudge.
Perhaps it would be best to learn from psychology. Psychology has shown that there’s very little you can do to make yourself ‘more rational.’ Knowing about biases does little to prevent them from happening, and you can’t force yourself to enjoy something you don’t enjoy. Further, it takes a lot of conscious, slow effort to be rational. In the face of real-life problems, true rationality is often pretty much impossible as it would take more computing power than available in the universe. It’s pretty clear that our irrationality is a mechanism to cope with the information overload of the real world by making approximate guesses.
It’s because of things like this that I think maybe LW has gone severely overboard with the instrumental rationality thing. Note that knowing about biases is a noble goal that we should strive towards, but trying to fix them often backfires. The best we can usually hope for is to try to identify biases in our thinking and other people’s.
But anyway, a lot of the issues of this site could simply be a matter of technical fixes. It was never really a good idea to base a rationality forum on a reddit template. Instead of the ‘everyone gets to vote’ system, I prefer the system where there are a handful of moderators. Moderators could be selected by the community and they would not be allowed to moderate discussions they themselves are participating in. This is the system that slashdot follows and I think it seems to work extremely well.
you can’t force yourself to enjoy something you don’t enjoy
This particular point is demonstrably false, at least as a general one: people acquire a taste for foods and activities they previously disliked all the time.
Knowing about biases does little to prevent them from happening
There are plenty of (anecdotal) examples to the contrary. I find myself thinking something like “am I being biased in assuming...” all the time, now that I have been on this forum for years. I heard similar sentiments from others, as well.
it takes a lot of conscious, slow effort to be rational
That’s true enough. But it is also true in general for almost every System 2-type activity (like learning to drive), until it gets internalized in System 1.
In the face of real-life problems, true rationality is often pretty much impossible as it would take more computing power than available in the universe.
Indeed it is impossible to get a perfectly optimal solution, and one of the biases is the proverbial “analysis paralysis”, where an excuse for doing nothing is that anything you do is suboptimal. However, an essential part of being instrumentally rational is figuring out the right amount of computing power to dedicate to a particular problem before acting.
a lot of the issues of this site could simply be a matter of technical fixes
Indeed a different template could have worked better. Who knows. However, a decision had to be made within the time and budget constraints, and, while suboptimal, it was good enough to let the site thrive. See above about bounded rationality.
This is the system that slashdot follows and I think it seems to work extremely well.
Except Reddit is clearly winning, in the “rationalists must win” sense, and Slashdot has all but disappeared, or at least has been severely marginalized compared to its late-90s heyday.
This particular point is demonstrably false, at least as a general one: people acquire a taste for foods and activities they previously disliked all the time.
I’ve done this a lot. Each time I did, it wasn’t because I forced myself; it was because I saw some new attractive thing in those foods or activities that I didn’t see before. Perception and enjoyment aren’t constant. People are more likely to try new activities when they are in a good mood (for instance). Mood alters perception. In that sense I actually agree with Villiam_Bur. You can get more people to become ‘rationalists’ through engaging and fun activities. But you have to ask yourself what the ultimate goal is and whether it can succeed in making people more rational.
However, an essential part of being instrumentally rational is figuring out the right amount of computing power to dedicate to a particular problem before acting.
The most powerful ‘subsystem’ in the brain is the subconscious System 1 part. This is the part that can bring the most computational power to bear on a problem. Making an effort to focus your System 2 cognition on solving a problem (rather than simply doing what comes instinctively) can backfire. But it gets worse. There’s no ‘system monitor’ for the brain. And even if there was, if you go even more meta, optimizing resource allocation for solving problem X may itself be a much harder problem than solving X using the first method that comes to mind.
Except Reddit is clearly winning, in the “rationalists must win” sense, and Slashdot has all but disappeared, or at least has been severely marginalized compared to its late-90s heyday.
I know it’s an extremely subjective opinion, but it seems to me that the slashdot system reduces the spread of misinformation and reduces downvote fights (and overall flamewars). As for why slashdot has shrunk as a community, I suppose it’s partly because reddit has grown, and reddit seems to have grown because of the ‘digg exodus’ (largely self-inflicted by digg) and the subreddit idea. Remember that there used to be many news aggregators (like digg) that have all but disappeared.
The idea here shouldn’t be “let’s adopt the most popular forum system”, it should be “let’s adopt the forum system that is most conducive to the goals of the community.” And we have at least one important data point (Eliezer) indicating the contrary.
The idea here shouldn’t be “let’s adopt the most popular forum system”, it should be “let’s adopt the forum system that is most conducive to the goals of the community.”
Disregarding your use of the word “community” for what’s best described as an online social club, who’s to say that we’re not doing this already? The “forum system that is most conducive” to our goals might well be a combination of one very open central site (LessWrong itself) supplemented by a variety of more private sites that discuss rationality in different ways, catering to a variety of niches. Not just Eliezer’s Facebook page, but including things like MoreRight, Yvain’s blog, Overcoming Bias, GiveWell, etc.
The “forum system that is most conducive” to our goals might well be a combination of one very open central site (LessWrong itself) supplemented by a variety of more private sites that discuss rationality in different ways, catering to a variety of niches. Not just Eliezer’s Facebook page, but including things like MoreRight, Yvain’s blog, Overcoming Bias, GiveWell, etc.
This makes me a little suspicious as a solution, only because there doesn’t seem to be anything particularly special about it besides being precisely the system that is already in place.
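What do you see as being the distinction between a “community” and a mere “online social club”? Genuinely confused.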
Because, y’know, communities actually exist, like, in the real world. More relevantly, they have a fairly important goal in protecting real, actual people from bodily harm and providing a nurturing environment for them to thrive in. Since this does not apply to virtual, Internet sites, calling them “communities” is quite misleading and can have bad side-effects if the metaphor is taken seriously, either by accident or through sneaking connotations. So I think it’s better if folks are sometimes encouraged to taboo this particular term.
you can’t force yourself to enjoy something you don’t enjoy
Perhaps “force” isn’t the right approach (and the whole “willpower” framing is just a red herring). But don’t we have many examples where people changed their emotions because of an external influence? Charismatic people can motivate others. People sometimes like something because their friends like it. Conditioning.
I believe that with a strategic approach people can make themselves enjoy something more. It may not be fast or 100% reliable or sufficiently cheap, but there is a way. A rational person should try finding the best way to enjoy something, if enjoying that thing is desirable. (For example, people from the Vienna meetup are going to the gym together after the next meetup, so they can convert enjoying a rationalist community into enjoying exercise.)
Charismatic people can motivate others. People sometimes like something because their friends like it. Conditioning.
Now that’s slightly better, and I agree. But again, you have to ask yourself what the ultimate purpose is and whether it’s going to backfire or not.
For example, people from the Vienna meetup are going to the gym together after the next meetup, so they can convert enjoying a rationalist community into enjoying exercise.
That sounds like an interesting idea, if perhaps slightly naive. I get what the goal is: Channel the enjoyment of a rationality meeting to start exercising, then hope that after a while the enjoyment of exercise will itself act as a positive feedback loop. But then you have to ask the question: Why weren’t they already exercising in the first place? And if they hope to achieve something positive by exercising, wasn’t that enough to get them to start exercising? It’s possible that after the initial good feelings wear off (“Yay, the rationality community is exercising together!”), the root causes of exercise avoidance will kick in again and dissolve the entire idea. Or worse: get them to do extremely unenjoyable exercises just for the sake of the community, which will ultimately get them to resent exercise even more than before.
Why weren’t they already exercising in the first place? And if they hope to achieve something positive by exercising, wasn’t that enough to get them to start exercising?
I think that humans usually are not strategic goal seekers. That’s how an ideal rational being should be, but ordinary humans are not like that. We do have goals, and sometimes even strategies, but most things are decided emotionally or by habit.
So the answer to “why weren’t they already exercising” could well be: a) Because they didn’t have a habit of exercising. When you are doing something for the first time, there is a lot of logistical overhead; you must decide when and where to exercise, which specific exercises you are going to do, et cetera; while the next time you can simply decide to do the same thing you did yesterday. b) Because they didn’t have positive memories connected with exercising in the past, so while their heads are thinking that it would be good to exercise and become more fit and healthy, their hearts try to avoid the whole thing.
If this model is correct (well, that’s questionable, but suppose it is), then the next time there is an advantage: you can follow the strategy of doing the same thing as last time, and you already have some positive memories. That could be enough for some people to change the balance, and maybe not enough for others. In this specific case, we will later have experimental data.
Speaking for myself, many people I know who exercise or do sport regularly do it with their friends. If those were my friends, I would also be tempted to join. But I am rather picky about choosing my friends, and the people who pass my filter are usually just as lazy as I am, or too individualistic to agree on doing something together. The few times I went to the gym, it was incredibly boring. (I imagine having someone there to talk with would change that. Or if I just remembered to always bring a music player, perhaps with an audiobook.) I do some small exercises at home. I imagine that if I had an exercise machine at home, I would use it, because the largest inconvenience for me is having to go somewhere outside.
get them to do extremely unenjoyable exercises just for the sake of the community, which will ultimately get them to resent exercise even more than before
That would be obviously wrong, I agree. I just don’t expect this to happen. But it is better to mention it explicitly.
Psychology has shown that there’s very little you can do to make yourself ‘more rational.’
Citation needed.
Not to mention that what an average person can or can not do isn’t particularly illuminating for non-representative subsets like LW.
maybe LW has gone severely overboard with the instrumental rationality thing
I am not sure that is possible. Instrumental rationality is just making sure that what you are doing is useful in getting to wherever you want to go. What does “severely overboard” mean in this context?
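Read Dan Kahneman’s work. He’s spent his entire lifetime studying this and won a Nobel prize for it too. A good summary is given in http://www.newyorker.com/tech/frontal-cortex/why-smart-people-are-stupid Here’s an excerpt: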
‘as the scientists note, “people who were aware of their own biases were not better able to overcome them.” This finding wouldn’t surprise Kahneman, who admits in “Thinking, Fast and Slow” that his decades of groundbreaking research have failed to significantly improve his own mental performance. “My intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy”—a tendency to underestimate how long it will take to complete a task—“as it was before I made a study of these issues,” he writes. ’
Not to mention that what an average person can or can not do isn’t particularly illuminating for non-representative subsets like LW.
In fact it is; when it comes to trying to control biases, there is no substantial difference between highly educated and non-educated people.
I am not sure that is possible. Instrumental rationality is just making sure that what you are doing is useful in getting to wherever you want to go. What does “severely overboard” mean in this context?
There is nothing wrong with ‘making sure that what you are doing is useful in getting to wherever you want to go’. The problem is the idea of trying to ‘fix’ your behavior through self-imposed procedures, trial & error, and self-reporting. Experience shows that this often backfires, as I said. It’s pretty amazing that “I tried method X, and it seemed to work well, I suggest you try it!” (look at JohnMaxwellIV’s comment below for just one example) is taken as constructive information on a site dedicated to rationality.
First, rationality is considerably more than just adjusting for biases.
Second, in your quote Kahneman says (emphasis mine): “My intuitive thinking is just as prone...”. The point isn’t that your System 1 changes much, the point is that your System 2 knows what to look for and compensates as best as it can.
In fact it is; when it comes to trying to control biases, there is no substantial difference between highly educated and non-educated people.
Sigh. Citation needed.
The problem is the idea of trying to ‘fix’ your behavior through self-imposed procedures, trial & error, and self-reporting.
And what is the problem, exactly? I am also not sure what the alternative is. Do you want to just assume your own behaviour is immutable? Magically determined without you being able to do anything about it? Do you think you need someone else to change your behaviour for you? What?
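Disagree. See comments in http://lesswrong.com/lw/d1u/the_new_yorker_article_on_cognitive_biases/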
‘as the scientists note, “people who were aware of their own biases were not better able to overcome them.” This finding wouldn’t surprise Kahneman, who admits in “Thinking, Fast and Slow” that his decades of groundbreaking research have failed to significantly improve his own mental performance. “My intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy”—a tendency to underestimate how long it will take to complete a task—“as it was before I made a study of these issues,” he writes. ’
I’m not talking about the bias blind spot. I agree that more educated people are better able to discern biases in their own thoughts and in others’. In fact that’s exactly what I said, not once but twice.
I’m talking about the ability to control one’s own biases.
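Are you distinguishing between “control one’s own biases” and “adjusting and compensating for one’s own biases”?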
I agree that more educated people are better able to discern biases in their own thoughts and in others’... I’m talking about the ability to control one’s own biases.
Huh? So what are more intelligent—and more educated—people doing, exactly, if not controlling their biases?
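Can confirm. OMG LOL.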