On the practical question, I think eliminating politics was an inspired decision that should continue to be followed, and I think the lead article was not political; I also think it's the best post in a good while. Nevertheless, I find it troubling that we must avoid politics. If we are succeeding in making ourselves more rational, one would expect that to lead to political convergence. Whether our methods actually make us more rational is, on that view, an empirical question, and political convergence would be a nice test of it. It's a shame we can't conduct that test.
I will be very impressed if we can get Aumann agreement on hot political issues.
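For concreteness, here is a minimal toy sketch (in Python, with an entirely made-up state space, event, and information partitions) of the Geanakoplos-Polemarchakis announcement dialogue that makes Aumann agreement operational: two agents sharing a uniform common prior take turns announcing their posterior for an event, everyone rules out the states inconsistent with each announcement, and the process repeats until the posteriors coincide.

```python
from fractions import Fraction

states = range(9)                        # Omega = {0, ..., 8}, uniform common prior
E = {1, 4, 5, 7}                         # the event both agents estimate
P1 = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}]   # agent 1's information partition
P2 = [{0, 3, 6}, {1, 4, 7}, {2, 5, 8}]   # agent 2's information partition
true_state = 4

def cell(partition, w):
    """The block of the partition containing state w."""
    return next(c for c in partition if w in c)

def posterior(partition, w, public):
    """P(E | what this agent knows), under the uniform prior."""
    info = cell(partition, w) & public
    return Fraction(len(info & E), len(info))

public = set(states)                     # states not yet ruled out by the dialogue
while True:
    q1 = posterior(P1, true_state, public)
    # everyone keeps only the states consistent with agent 1 announcing q1
    public = {w for w in public if posterior(P1, w, public) == q1}
    q2 = posterior(P2, true_state, public)
    public = {w for w in public if posterior(P2, w, public) == q2}
    print(f"agent 1 announces {q1}, agent 2 announces {q2}")
    if q1 == q2:
        break
```

With a finite state space and a common prior this loop is guaranteed to end in agreement; the open question for hot political issues is whether anything about real disputes resembles those assumptions.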
I suspect that on many of them the result would be convergence on the realization that we don't know what the best solution is, but that might be my prejudices talking.
It’s worth noting that “we” is ill-defined here.
Supposing that what this site does successfully improves rationality among its participants, we should expect someone like me, who has only been here for a few months, to be less rational than the folks who have been around for years and have benefited from the site.
But a discussion of politics here would not exclude me, so even in that scenario we would expect such a discussion not to lead to convergence.
The proper empirical test, I suppose, would be to identify cohorts based on their tenure here, and conduct a series of such conversations within each such cohort—say, once a year—and evaluate whether a given cohort comes closer to convergence from year to year.
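Here is a hypothetical sketch of how the scoring for that test might look; the record format, the 0-to-1 position scale, and every number below are assumptions invented for illustration, not a real survey design.

```python
from collections import defaultdict
from statistics import pstdev

def convergence_by_cohort(records):
    """Map (join_year, survey_year) -> spread of positions in that cohort that year.
    A cohort whose spread shrinks from survey to survey is converging."""
    buckets = defaultdict(list)
    for participant, join_year, survey_year, position in records:
        buckets[(join_year, survey_year)].append(position)
    return {key: pstdev(vals) for key, vals in buckets.items() if len(vals) > 1}

# Invented records: (participant, join_year, survey_year, position on a 0-1 scale).
records = [
    ("a", 2009, 2011, 0.2), ("b", 2009, 2011, 0.9), ("c", 2009, 2011, 0.5),
    ("a", 2009, 2012, 0.4), ("b", 2009, 2012, 0.6), ("c", 2009, 2012, 0.5),
    ("d", 2011, 2011, 0.1), ("e", 2011, 2011, 0.8),
    ("d", 2011, 2012, 0.1), ("e", 2011, 2012, 0.9),
]
for (cohort, year), spread in sorted(convergence_by_cohort(records).items()):
    print(f"cohort {cohort}, survey {year}: spread {spread:.2f}")
```

Convergence within a cohort would then show up as its spread shrinking from one survey year to the next, as it does for the invented 2009 cohort here but not for the 2011 one.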
If we are succeeding in making ourselves more rational, one would expect that to lead to political convergence.
Politics includes much that is a matter of preference, not just of accurate beliefs about the world. For example, "I like it when I get more money when X is done" is at the core of many political issues. Perhaps more importantly, different preferences about how to aggregate human experiences can lead to genuine disagreement about political policy even among altruists. For example, an altruist whose values are similar to those Robin Hanson blogs about will inevitably have a political disagreement with me, no matter how rational we both are.
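A made-up toy example of that last point (the policies, numbers, and aggregation rules are purely illustrative, not anyone's actual views): two altruists who agree on every factual claim about who gets what can still disagree about which policy is better if they aggregate welfare differently.

```python
# Invented numbers: utility for each of three people under each policy.
policies = {
    "A": [5, 5, 5],   # everyone does moderately well
    "B": [9, 9, 1],   # higher total, but one person does badly
}

for name, aggregate in [("total utility", sum), ("worst-off first", min)]:
    best = max(policies, key=lambda p: aggregate(policies[p]))
    print(f"{name} aggregation prefers policy {best}")
```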
Factual political beliefs should still converge. And once they have, whatever differences remain are differences of preference, which won't be resolved by discussion, because there's nothing left to discuss.
If we could distinguish preference claims from factual claims, that would be quite a large step towards rationality.
Indeed, but the trouble is that often the optimal strategy for promoting one's preferences is to convince people that opposing them is somehow objectively wrong and delusional, rather than a matter of a fundamental clash of power and interest. (This typically involves convincing oneself too, since humans tend to be bad at lying, good at sniffing out liars, and appreciative of sincerity.)
That said, one of the main reasons I find discussions on LW interesting is the unusually high ability of many participants to analyze issues in this regard, i.e. to correctly separate the factual from the normative and preferential. The bad examples, where people fail to do so and the discourse breaks down, tend to stick out unpleasantly, but overall I'd say the situation is not at all bad, certainly by any realistic standard for human discourse in general.