No, in fact, I don’t think LessWrong should discuss any issues at all. The point should be that we discuss rationality and learn it and get better at it. Then we can go out and make our own decisions.
If we decide what’s rational by treating each subject in turn, we will never be done.
The reason to discuss it is that one of the strengths of a rationalist website is (more) rational discussion of issues in which there is a high degree of irrationality (e.g. politics). To the extent people on such a site are more rational, they are more likely than politicians or political debaters to get accurate answers to the problems on which there is public debate.
The risk here is in several dimensions. First, I expect the rational answer is often “I don’t have enough information to have an informed opinion.” Second, it seems easier to develop rationality on simple, painless issues before moving to complex, painful ones. Telling people about the dangers of identity in the abstract, letting that sink in, then having them find examples in their own lives is one thing; introducing them to the concept with something fundamental to their identity is another entirely. I haven’t seen much talk about religion here- partly because the groups are self-segregating, and partly because it’s simply not a good idea to tell someone “ok, step 1 to being an atheist” instead of “ok, step 1 to weeding untruths out of your life.” Third, these things tend to become eternal discussions if you have influxes of new people (which, as a newbie, I consider both likely and a good thing). Fourth, coming to conclusions is something individuals should do, not groups, as much as possible. For example, imagine that the official policy position of Less Wrong was “Everyone should sign up for cryonics”- even if rational considerations lead one person to that decision, it seems unlikely those considerations are sufficient to convince everyone.
There could be a sequence leading up to the conclusion that cryonics is great, but then we run into the issue of dogma: if someone asks why Bayes’ theorem matters, we can point them to the sequence and it won’t feel dogmatic. If someone asks a question about cryonics, saying “oh, we hashed that out in 2010, look here” doesn’t seem valuable, but neither does keeping cryonics as a perpetually open issue. And, for most people, cryonics simply isn’t relevant enough to make it a particularly good rationality exercise.
To take the example of voting- I know how to divide (the chance my vote is decisive is roughly one in the number of voters). I know about gerrymandering. I still consider it worth the time to vote- why? Because I think the second-order effects are worthwhile, even if there are no first-order effects. I think that registering as a Republican, voting for the most liberty-friendly candidate in primaries, and voting for Libertarian party candidates in general elections shows support and increases the credibility of those people and that party. The effect is tiny- but it’s there, and it will do more than gambling on being the one vote that changes the tide.
Would a discussion of just the first-order effects be helpful? And is it worth going to second or third order in discussions where the group might lack sufficient experience? I haven’t run the numbers on how effective my support will be long-term the way I have on how much value I expect from the chance my vote changes the current election. So maybe what I have is a pretty rationalization rather than rationality, but the expected value of doing that calculation is too low to bother.
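For concreteness, here is a minimal sketch of the first-order arithmetic I’m gesturing at- every number in it is a hypothetical placeholder, not a claim about any real election, and the second-order effects I actually vote for don’t appear in it at all:

```python
# Toy first-order expected-value check for voting.
# All numbers are hypothetical placeholders.

electorate = 500_000            # voters in the relevant district (assumed)
p_decisive = 1 / electorate     # crude "I know how to divide" estimate of
                                # the chance one vote swings the outcome
value_if_decisive = 1_000_000   # subjective value, in arbitrary units,
                                # of your candidate winning (assumed)
cost_of_voting = 1              # time/effort cost in the same units

ev_first_order = p_decisive * value_if_decisive - cost_of_voting
print(f"First-order EV of voting: {ev_first_order:+.2f}")
# With these numbers the first-order EV is +1.00, but halve the stakes or
# double the electorate and it goes negative. The point: the first-order
# case is fragile, while second-order effects (signaling support, building
# a party's credibility) never show up in this calculation at all.
```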