#: I feel like an elephant in the room is the question of whether those of you who believe that Less Wrong and SIAI are doing things of the highest level of importance believe this because you’re a part of these groups.
Even if the most ardent Less Wrong and SIAI supporters are mostly right in their beliefs, # is almost certainly at least occasionally present, and I think that the community would benefit from a higher level of vigilance concerning the possibility of #.
# refers to a pattern of incorrect (intuitive) reasoning. This pattern is potentially dangerous specifically because it leads to incorrect beliefs. But if you are saying that there is no significant distortion of beliefs (in particular about the importance of Less Wrong’s or SIAI’s missions*), doesn’t this imply that the role of this potential bias is unimportant? Either # isn’t important, because it doesn’t significantly distort beliefs, or it does significantly distort beliefs and is therefore important.
* Although I should note that I don’t remember there being a visible position about the importance of Less Wrong.
Either # isn’t important, because it doesn’t significantly distort beliefs, or it does significantly distort beliefs and is therefore important.
There’s no single point at which distortion of beliefs becomes sufficiently large to register as “significant”—it’s a gradualist thing
Although I should note that I don’t remember there being a visible position about the importance of Less Wrong.
Probably I’ve unfairly conflated Less Wrong and SIAI. But in this post Kevin says “We try to take existential risk seriously around these parts. Each marginal new user that reads anything on Less Wrong has a real chance of being the one that tips us from existential Loss to existential Win.” This seemed to me to carry the connotation of ascribing extremely high significance to Less Wrong, and I (quite possibly incorrectly) interpreted the fact that nobody questioned the statement or asked for clarification as an indication that the rest of the community is in agreement with the idea that Less Wrong is extremely significant. I will respond to the post asking Kevin to clarify what he was getting at.
“We try to take existential risk seriously around these parts. Each marginal new user that reads anything on Less Wrong has a real chance of being the one that tips us from existential Loss to existential Win.” This seemed to me to carry the connotation of ascribing extremely high significance to Less Wrong and I (quite possibly incorrectly) interpreted the fact that nobody questioned the statement or asked for clarification as an indication that the rest of the community is in agreement with the idea that Less Wrong is extremely significant.
Would you respond differently if someone else talked about every single person who becomes an amateur astronomer and searches for dangerous asteroids? There are lots of potential existential threats. Unfriendly or rogue AIs are certainly one of them. Nuclear war is another. And I think a lot of people would agree that most humans don’t pay nearly enough attention to existential threats. So one aspect of improving rational thinking should be a net reduction in existential threats of all types, not just those associated with AI. Kevin’s statement thus isn’t intrinsically connected to SIAI at all (although I’d be inclined to argue that Kevin’s statement is possibly a tad hyperbolic).
Would you respond differently if someone else talked about every single person who becomes an amateur astronomer and searches for dangerous asteroids?
The parallel is a good one. I would think it sort of crankish if somebody went around trying to get people to engage in amateur astronomy and search for dangerous asteroids on the grounds that any new amateur astronomer may be the one to save us from being killed by a dangerous asteroid. Just because an issue is potentially important doesn’t mean that one should attempt to interest as many people as possible in it. There’s an issue of opportunity cost.
Sure there’s an opportunity cost, but how large is that opportunity cost? What if someone has good data that suggests that the current number of asteroid seekers is orders of magnitude below the optimum?
improving rational thinking should be a net reduction in existential threats of all types
Two points:
(1) It’s not clear that improving rational thinking matters much. The factors limiting human ability to reduce existential risk seem to me to have more to do with politics, marketing and culture than with rationality proper. Devoting oneself to refining rationality may come at the cost of developing one’s ability to engage in politics and marketing and to influence culture. I guess what I’m saying is that rationalists should win, and consciously aspiring toward rationality may interfere with one’s ability to win.
(2) It’s not clear how much it’s possible to improve rational thinking. It may be that beyond a certain point, attempts to improve rational thinking are self-defeating (e.g. combating one bias may introduce another).
Part of influencing culture should include the spreading of rationality. This is actually related to why I think that the rationality movement has more in common with organized skepticism than is generally acknowledged. Consider what would happen if the general public had enough epistemic rationality to recognize that homeopathy was complete nonsense. In the United States alone, people spend around three billion dollars a year on homeopathy (source). If that went away, and only 5% of it ended up getting spent on things that actually increase general utility, that means around $150 million would now be going into useful things. And that’s only a tiny example. The US spends about 30 to 40 billion dollars a year on alternative medicine, much of which is also a complete waste. We’re not talking here about a Hansonian approach where much medicine is only of marginal use or only helps the very sick who are going to die soon. We’re talking about “medicine” that does zero. And many of the people taking those alternatives will take them instead of medicine that would improve their lives. Improving the general population’s rationality would be a net win for everyone. And if some tiny fraction of those freed resources goes to dealing with existential risk? Even better.
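The arithmetic in the comment above is easy to sanity-check; a minimal sketch, using the dollar figure quoted there and treating the 5% redirection rate as the commenter’s own assumption:

```python
# Sanity check of the figures quoted above (estimates from the comment, not new data).
homeopathy_spending = 3_000_000_000  # ~$3 billion/year US spending on homeopathy
useful_fraction = 0.05               # assume only 5% gets respent on useful things

redirected = homeopathy_spending * useful_fraction
print(f"${redirected / 1e6:.0f} million")  # $150 million
```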
Part of influencing culture should include the spreading of rationality. This is actually related to why I think that the rationality movement has more in common with organized skepticism than is generally acknowledged. Consider what would happen if the general public had enough epistemic rationality to recognize that homeopathy was complete nonsense.
Okay, but now the rationality that you’re talking about is “ordinary rationality” rather than “extreme rationality,” and the general public rather than the Less Wrong community. What is the Less Wrong community doing to spread ordinary rationality among the general public?
The US spends about 30 to 40 billion dollars a year on alternative medicine, much of which is also a complete waste [...] We’re talking about “medicine” that does zero.
Are you sure that the placebo effects are never sufficiently useful to warrant the cost?
Okay, but now the rationality that you’re talking about is “ordinary rationality” rather than “extreme rationality,” and the general public rather than the Less Wrong community. What is the Less Wrong community doing to spread ordinary rationality among the general public?
A lot of the aspects of “extreme rationality” are aspects of rationality in general (understanding the scientific method and the nature of evidence, designing experiments to test things, being aware of serious cognitive biases, etc.). Also, I suspect (and this may not be accurate) that a lot of the ideas of extreme rationality are ones which LWers will simply spread in casual conversation, not necessarily out of any deliberate attempt to spread them, but because they are really neat. For example, the representativeness heuristic is an amazing form of cognitive bias. Similarly, the 2-4-6 game is independently fun to play with people and helps them learn better.
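For readers who haven’t met it, the 2-4-6 game (Wason’s rule-discovery task) can be sketched in a few lines; the hidden rule below is the classic one from Wason’s version, while the player’s hypothesis and the specific triples are illustrative assumptions:

```python
# Wason's 2-4-6 game: the experimenter has a hidden rule; players propose
# triples and learn whether each fits. Most players test only triples that
# confirm their initial hypothesis and so never falsify it.

def hidden_rule(triple):
    """The classic hidden rule: any strictly increasing triple."""
    a, b, c = triple
    return a < b < c

# A player who believes the rule is "counting up by twos" tends to test
# only confirming cases:
confirming_guesses = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]
print([hidden_rule(g) for g in confirming_guesses])  # [True, True, True]

# A disconfirming test is far more informative: this triple does NOT count
# up by twos, yet it fits, so the player's hypothesis must be wrong.
print(hidden_rule((1, 2, 50)))  # True
```

The lesson of the game is that confirming evidence alone never distinguishes the player’s narrow hypothesis from the much broader true rule.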
Are you sure that the placebo effects are never sufficiently useful to warrant the cost?
I was careful to say much, not all. Placebos can help. And some of it involves treatments that will eventually turn out to be helpful once they are studied. But there are entire subindustries that aren’t just useless but downright harmful (chelation therapy for autism would be an example). And large parts of the alternative medicine world involve claims that are emotionally damaging to patients (such as claims that cancer is a result of negative beliefs). And when one isn’t talking about something like homeopathy, which is just water, but rather about remedies that involve chemically active substances, the chance that actual complications will occur grows.
Deliberately giving placebos is of questionable ethical value, but if we think it is OK, we can do it with cheap sugar pills delivered at a pharmacy: cheaper, safer, and better controlled. And people won’t be getting the sugar pills as an alternative to treatment when treatment is possible.
Anything we seek to do is a function of our capabilities and of how important the activity is. Less Wrong is intended mainly as a pointer toward increasing the capabilities of those who are interested in improving their rationality, and Eliezer has mentioned in one of the sequences that there are many other aspects of the art that have to be developed. Epistemic rationality is one, luminosity as mentioned by Alicorn is another, and so on and so forth.
Who knows? In the future we may get many rationality offshoots of Less Wrong: LessShy, LessProcrastinating, etc.
Now, getting back to my statement: a function of capabilities and importance.
Importance—Existential risk is the most important problem that is not receiving sufficient attention.
Capability—SingInst is a group of powerless, poor and introverted geeks who are doing the best they think they can to reduce existential risk. This may include things that improve their personal ability to affect the future positively; it may include charisma and marketing, also. For all the time that they have thought about the issue, the SingInst people consider raising the sanity waterline really important to the cause. Unless and until you have specific data showing that avenue is not the best use of their time, it is a worthwhile one to pursue.
Before reading the paragraph below, please answer this simple question: what is your marginal unit of time, after accounting for necessary leisure, being used for?
If your capability is great, then you can contribute much more than SIAI can. All you need to see is whether, on the margin, your contribution is making a greater difference to the activity or not. Even SingInst cannot absorb too much money without losing focus. You, as a smart person, know that. So stop contributing to SingInst when you think your marginal dollar gets better value elsewhere.
It is not about whether you believe that SingInst is the best cause ever. Honestly assess and calculate where your marginal dollar can get better value. Are you better off being the millionth voice in the climate change debate or the hundredth voice in the existential risk discussion?
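The marginal-dollar comparison above can be made concrete with a toy model; the 1/n falloff and every number below are purely illustrative assumptions, not anything claimed in the comment:

```python
# Toy model, purely illustrative: assume the n-th voice backing a cause
# adds value proportional to 1 / n (steeply diminishing returns).

def marginal_value(n, cause_importance=1.0):
    """Value added by the n-th contributor to a cause of given importance."""
    return cause_importance / n

climate = marginal_value(1_000_000)  # the millionth voice on climate change
xrisk = marginal_value(100)          # the hundredth voice on existential risk
print(xrisk / climate)               # roughly 10,000x
```

Under this (assumed) model, even if you thought climate change were a hundred times more important, the marginal contribution to the less crowded cause would still dominate.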
One other factor which influences how much goes into reducing existential risk is the general wealth level. Long-term existential risk gets taken care of after people take care of shorter-term risks, have what they consider to be enough fun, and spend quite a bit on maintaining status.
More to spare means being likely to have a longer time horizon.
There’s no single point at which distortion of beliefs becomes sufficiently large to register as “significant”—it’s a gradualist thing
But to avoid turning this into a fallacy of gray, you still need to take notice of the extent of the effect. Neither working on a bias nor ignoring it is a “default”—the choice necessarily depends on the perceived level of significance.
On the level of society, there seems to be tons of low-hanging fruit.
What are some examples of this low-hanging fruit that you have in mind?
Fact-checking in political discussions (e.g. senate politics), parenting and teaching methods, keeping a clean desk or being happy at work (see here), getting effective medical treatments rather than unproven ones (sometimes this might require confronting your doctor), and maintaining budgets seem like decent examples (in no particular order; of course these hang at various heights, but all are well within the reach of the general public).
Not sure if Vladimir would have the same types of things in mind.
Factcheck, which keeps track of the accuracy of statements made by politicians and about politics, strikes me as a big recent improvement.
Just added that to my RSS feed.
But to avoid turning this into a fallacy of gray, you still need to take notice of the extent of the effect. Neither working on a bias nor ignoring it is a “default”—the choice necessarily depends on the perceived level of significance.
I think I agree with you. My suggestion is that Less Wrong and SIAI are, at the margin, not paying enough attention to the bias (*).