In the famous talk “You and Your Research”, Richard Hamming explained why physicists don’t spend much time on researching antigravity:
The three outstanding problems in physics, in a certain sense, were never worked on while I was at Bell Labs. By important I mean guaranteed a Nobel Prize and any sum of money you want to mention. We didn’t work on (1) time travel, (2) teleportation, and (3) antigravity. They are not important problems because we do not have an attack. It’s not the consequence that makes a problem important, it is that you have a reasonable attack.
We can talk productively here about topics like decision theory because we have an attack, a small foothold of sanity (established mostly by Eliezer and Wei) that gives us a firm footing to expand our understanding. As far as I can see, we have no such footholds in politics, or gender relations, or most of those other important topics you listed. I’ve been here for a long time and know that most of our interminable “discussions” of these controversial topics have been completely useless. Our rationality helps us maintain a civil tone, but not actually, you know, make progress.
Human understanding progresses through small problems solved conclusively, once and forever. The first step in any pre-paradigmatic field (like politics) is always the hardest: you need to generate a piece of insight that allows other people to generate new pieces of insight. It’s not a task for our argumentative circuitry, it’s a task for sitting down and thinking really hard. Encouraging wide discussion is the wrong step in the dance. If you don’t have a specific breakthrough, I’d rather we talked about math.
We can talk productively here about topics like decision theory because we have an attack, a small foothold of sanity (established mostly by Eliezer and Wei) that gives us a firm footing to expand our understanding. As far as I can see, we have no such footholds in politics, or gender relations, or most of those other important topics you listed. I’ve been here for a long time and know that most of our interminable “discussions” of these controversial topics have been completely useless.
Therefore posts on such subjects should be made if and when such an attack is found? I would support that standard.
Yes, that’s what I’d like to see. Sadly my mind completely fails whenever I try to generate insight about social issues, so I can’t follow my own exhortation.
We can talk productively here about topics like decision theory because we have an attack, a small foothold of sanity (established mostly by Eliezer and Wei) that gives us a firm footing to expand our understanding. As far as I can see, we have no such footholds in politics, or gender relations, or most of those other important topics you listed.
To me, this sounds way too ambitious for a place that advertises itself as a public forum, where random visitors are invited with kind words to join and participate, and get upvoted as long as they don’t write anything outright stupid or bad-mannered.
You’re correct about the reasons why physicists don’t work on antigravity, but you’ll also notice that they don’t work by opening web forums to invite ideas and contributions from the general public. A community focusing strictly on hard scientific and mathematical progress must set the bar for being a contributor way higher, so high that well over 90% of the present rate of activity on this website would have to be culled, in terms of both the number of contributors and the amount of content being generated. At that point, you might as well just open an invitation-only mailing list.
As for the softer (or as you call them, “pre-paradigmatic”) fields, many of them are subject to Trotsky’s famous (though likely apocryphal) maxim that you might not be interested in war, but war is interested in you. Even if it’s something like politics, where it’s far from certain (though far from impossible either) that insight into it can yield useful practical guidelines, by relinquishing thinking about it you basically resign yourself to the role of a pawn pushed around by forces you don’t understand at all. Therefore, since you’ll have an opinion one way or another, it can’t hurt if it’s been subjected to a high-standard rational discussion, even if only for eliminating clear errors of fact and logic. Also, I don’t see anything wrong with discussing such things just for fun.
Moreover, the real problem with such discussions lies in the “who-whom?” issues and the corresponding feelings of group solidarity, not in the inability to resolve questions of fact. In fact, when it comes to clearly defined factual questions, I think the situation is much better than in the hard fields. Progress in hard fields is tremendously difficult because all the low-hanging fruit was picked generations ago. In contrast, the present state of knowledge in softer fields is so abysmally bad, and contaminated with so much bias and outright intellectual incompetence, that a group of smart and unbiased amateurs can easily reach insight beyond what’s readily available from reputable mainstream sources about a great variety of issues. Of course, the tricky part is actually avoiding passions and biases, but that’s basically the point, isn’t it?
In contrast, the present state of knowledge in softer fields is so abysmally bad, and contaminated with so much bias and outright intellectual incompetence, that a group of smart and unbiased amateurs can easily reach insight beyond what’s readily available from reputable mainstream sources about a great variety of issues.
I’m afraid that if we accept this suggestion, most posts about softer fields will consist of seemingly plausible but wrong contrarian ideas, and since most of us won’t be experts in the relevant fields, it will take a lot of time and effort for us to come up with the necessary evidence to show that the ideas are wrong.
And if we do manage to identify some correct contrarian insight, it will have minimal impact on society at large, because nobody outside of LW will believe that a group of smart and unbiased amateurs can easily reach such insight.
That is undoubtedly true. However, it seems to me that my main objection to cousin_it’s position applies to yours too, namely that the ambitious goals you have in mind are incompatible with the nature of this website as a public forum that solicits participation from the general public and warmly welcomes anyone who is not acting outright stupid, trollish, or obnoxious. On the whole, the outcome you describe in the above comment as undesirable and falling short of your vision is in reality the very best that can be realistically achieved by a public forum with such a low bar for entry and participation.
I absolutely admire your ambitions to achieve progress in hard areas, but building a community capable of such accomplishments requires a radically different and far more elitist approach, as I explained in my other comments. There are good reasons why scientists don’t approach problems by opening web forums that solicit ideas from the public, and don’t try to find productive collaborators among random people who would gather at such forums. Or do you believe that LW might turn out to be the first example of such an approach actually working?
Or do you believe that LW might turn out to be the first example of such an approach actually working?
LW does seem to be working to some extent, in the core areas related to rationality. Presumably it’s because even though we’re technically amateurs, we all share enough interest and have enough background knowledge in those areas to spot wrongness relatively quickly.
Also, I believe Math Overflow has previously been cited as another such site, although I’m not personally familiar with it.
LW does seem to be working to some extent, in the core areas related to rationality.
What would be the concrete examples you have in mind, if by “working” we mean making progress in some hard area, or at least doing something that might plausibly lead to such progress (i.e., the benchmark of success you expressed above)?
The only things I can think of are occasional threads on mathy topics like decision theory and AI cooperation, but in such cases, what we see is a clearly distinguished informal group of several people who are up to date with the relevant knowledge, and whose internal discussions are mostly impenetrable to the overwhelming majority of other participants here. In effect, we see a closely-knit expert group with a very high bar for joining, which merely uses a forum with a much wider membership base as its communication medium.
I don’t think this situation is necessarily bad, though it does generate frustration whenever non-expert members try joining such discussions and end up just muddling them. However, if the goal of LW is defined as progress in hard areas—let alone progress of wider-society-influencing magnitude—then it is an unavoidable conclusion that most of what actually happens here is sheer dead weight, imposed by the open nature of the forum that is inherently in conflict with such goals.
Also, I believe Math Overflow has previously been cited as another such site, although I’m not personally familiar with it.
I wouldn’t say that Math Overflow is a good counterexample to my claims. First, from what I understand, it’s a place where people exchange information about the existing mathematical knowledge, rather than a community of researchers collaborating on novel problems. Second, it requires extremely high qualifications from participants, and the discourse is rigorously limited to making technical points strictly pertinent to the topic at hand. That’s an extremely different sort of community than LW, which would have to undergo a very radical transformation to be turned into something like that.
In effect, we see a closely-knit expert group with a very high bar for joining, which merely uses a forum with a much wider membership base as its communication medium… most of what actually happens here is sheer dead weight, imposed by the open nature of the forum that is inherently in conflict with such goals.
I’d say the bar for joining isn’t very high (you only have to know the right kind of undergraduate math, a lot of which was even covered on LW), and the open forum is also useful for recruiting new members into the “group”, not just communication. Every time I post some rigorous argument, I hope to interest more people than just the “regulars” into advancing it further.
Besides decision theory and AI cooperation, I mean things like better understanding of biases and ways to counteract them (see most posts in Top Posts). Ethics and other rationality-related philosophy (Are wireheads happy?). Ways to encourage/improve rational discussions. Ways to make probability/decision theory more intuitive/useful/relevant in practice.
It might be that we got into a misunderstanding because we mean different things when we speak about “soft” areas. To me, the topics you listed (except for the first two), and the posts that exemplify them, look like they could be reasonably described as addressing (either directly or indirectly) various soft fields where the conventional wisdom is dubious, disorganized, and contradictory. Therefore, what you list can be seen as a subset of the soft topics I had in mind, rather than something altogether different.
To support this, I would note that most of the top posts bring up issues (including some ideologically sensitive ones) about which much has been written by prominent academics and other mainstream intellectual figures, but only in a pre-paradigmatic way; that ethics and philosophy are clear examples of soft fields; and that the improvements in the understanding of biases achieved in LW discussions are extremely unlikely to be useful to people in hard fields, who already employ sophisticated and effective area-specific bias-eliminating methodologies, but could well lead to non-trivial insight in various soft topics (and the highest-scoring top posts have indeed applied them to soft topics, not hard ones).
So, on the whole, the only disagreement we seem to have (if any) is about what specific range of soft topics should be encouraged as the subject of discussions here.
This part:

To me, this sounds way too ambitious for a place that advertises itself as a public forum
contradicts the other part:
a group of smart and unbiased amateurs can easily reach insight beyond what’s readily available from reputable mainstream sources about a great variety of issues
so I’m not sure that your well-worded and well-upvoted comment even has a point I could respond to. Anyway. The politically charged discussions here have been useless to me (with one exception that sadly didn’t get the follow-through it deserved), so I’ll go on waiting for insight, and avoid talking when I have no insight, and encourage others to do the same.
The comment definitely wasn’t well-worded if it seems like there’s a contradiction there; in fact, my failure to convey the point suggests that the wording was quite awful. (Thus providing more evidence that people are way too generous with upvoting.) So please let me try once more.
I was trying to draw a contrast between the following:
Topics in math and hard science, in which any insight that can’t be found by looking up the existing literature is extremely hard to come by. It seems to me that a public web forum that invites random visitors to participate freely is, as a community, inherently unusable for achieving any such goal. What is required is a closely-knit group of dedicated researchers that imposes extremely high qualifications for joining and whose internal discussions will be largely incomprehensible to outsiders, the only exception being the work of lone geniuses.
Topics in softer fields, in which the present state of knowledge is not in the form of well-organized literature that is almost fully sound and extremely hard to improve on, but instead even the very basics are heavily muddled and biased. Here, in contrast, there is plenty of opportunity to achieve some new insight or at least to make some sense out of the existing muddled and contradictory information, even by casual amateurs, if the topics are just approached with a good epistemology and a clear and unbiased mind.
Of course, a web forum can serve for all other kinds of fun chit-chat and exchange of useful information, but when it comes to generating novel insight, that’s basically it.
Or do you really find it within the realm of the possible that a public forum that gets its membership by warmly inviting random readers might be up to standard for advancing the state of knowledge in some hard area?
Thanks, I understand your point now. It seems my original comment was unclear: I didn’t mean to demand that everyone shut up about soft topics. RobinZ expressed the intended meaning.
It seems my original comment was unclear: I didn’t mean to demand that everyone shut up about soft topics.
For what that’s worth, I didn’t understand your comment that way. I merely wanted to point out the inherent tension between the public and inviting nature of the forum and your vision of the goals it should ideally achieve.
What kind of follow-through do you think my post deserved? I’m pretty happy with the effect that it’s had on the LW community myself. People do seem to be more careful about giving and taking offense after that post. So I’m curious what you have in mind.
I hoped for more of the kind of analysis you demonstrated. Of course I don’t know what specific advances would happen! But I feel you were doing something methodologically right when writing that post, and the same approach could be applied to other problems.
Isn’t “progress” a bit of an over-ambitious notion?
Blogs aren’t generally a method of doing science (with the exception of a few collaborative projects in math that are ruthlessly on-topic and appeal to a tiny community). Blogs and forums are great for keeping current with science, for speculating, and for letting off steam and having fun. Those are legitimate functions—why do you want to make this loftier than it is?
“Human understanding progresses through small problems solved conclusively, once and forever.” I don’t know about anyone else, but I find this gravely unsettling applied to politics and social issues. People have different values. Treating these things as problems to be “solved conclusively” is incompatible with pluralism. Politics is meant to be a lot of arguing and conflicts between different interests; I’m scared of anyone who wants it to be anything else. Talking politics on the internet is best done tipsy and in aggressive good humor. If we could do that here, I wouldn’t mind—but somehow I think we’d wind up taking it too seriously.
Our header image says “a community blog devoted to refining the art of human rationality”. I spend a lot of effort trying to improve our shared understanding and don’t consider it over-ambitious at all.
If you know for certain that most disagreements in politics are genuinely about terminal values of different people, rather than about factual questions (what measures would lead to what consequences), then you know more than I do, and I’d be very interested to hear how you established that conclusion. In fact this would be just the kind of progress I wish to see!
Isn’t it enough to show that there are at least some incompatible terminal values? If this is the case, then there can be no overall lasting agreement on politics without resorting to force (avoiding the naked use of which could be seen as the main point of politics in the first place).
Thanks for twisting my mind in the right direction. Phil Goetz described how values can be incompatible if they’re positional. Robin Hanson gave real-world data about positionality of different goods. This doesn’t seem to deal with terminality/instrumentality of values yet...
The political issues which might be discussed are not questions of organization or political structure (in general)—they are questions which belong to some other domain and which happen to have political implications. The fact that they are political changes the nature of discussion about the problem, not the nature of the problem.
I think one effect on the nature of discussion is to stop people from breaking off small problems which they can solve or assertions whose truth they can honestly assess. Instead conversations almost always expand until they are unmanageable. For example, starting out with the question “Should I vote for A or B” is exceptionally unlikely to yield useful discussion. Of course, I don’t know about the interminable discussions which have occurred here in the past or why they might not have been productive. I would guess that the problems discussed were much too large for significant progress to be plausible. This is not because we lack a foothold—it is because we tend to (and there are forces encouraging us to) tackle questions much too large.
I believe it is possible to bite off reasonably sized (i.e., tiny) questions on which we can make genuine progress. If the problem will be around for a long time, such progress might be very valuable. Maybe a small problem takes the form of a simplified hypothetical whose resolution cannot be black-boxed and used in future discussions (because of course that hypothetical will never occur) but which is likely to help us develop general arguments and standards which might be brought to bear on more complex hypotheticals. Maybe it takes the form of some very simple, concrete assertion about the world whose answer is politically charged but which can be resolved decisively with a little research and care. Even if such assertions have limited implications, at least it’s starting to get you somewhere.
I am not in agreement with the original proposal. I believe this tactic (of directly responding to wrongness on controversial issues) is likely to cause us, once again, to bite off much too large a problem.
However, I do believe that the criterion for declaring something on-topic is a good one, and could increase the value of Less Wrong as a resource if it were used to discuss small issues on which we could reasonably make progress. If such small topics are motivated by general failures of the public discourse, it seems possible that we would eventually accumulate enough rigorous understanding to correct those failures “conclusively” (at least, as conclusively as any discussion on Less Wrong establishes anything).
Of course, I don’t know about the interminable discussions which have occurred here in the past or why they might not have been productive.
For the most part, the policy was preemptive. We’ve known politics is the mind-killer since before LW was created and didn’t want to go there. The forays into politically charged areas that have occurred here haven’t done much to disabuse me of that notion, even though the conversations have been ‘less wrong’ than they might have been elsewhere.
I don’t want LW to change in that direction.
In the famous talk “You and Your Research”, Richard Hamming explained why physicists don’t spend much time on researching antigravity:
We can talk productively here about topics like decision theory because we have an attack, a small foothold of sanity (established mostly by Eliezer and Wei) that gives us a firm footing to expand our understanding. As far as I can see, we have no such footholds in politics, or gender relations, or most of those other important topics you listed. I’ve been here for a long time and know that most of our interminable “discussions” of these controversial topics have been completely useless. Our rationality helps us maintain a civil tone, but not actually, you know, make progress.
Human understanding progresses through small problems solved conclusively, once and forever. The first step in any pre-paradigmatic field (like politics) is always the hardest: you need to generate a piece of insight that allows other people to generate new pieces of insight. It’s not a task for our argumentative circuitry, it’s a task for sitting down and thinking really hard. Encouraging wide discussion is the wrong step in the dance. If you don’t have a specific breakthrough, I’d rather we talked about math.
Therefore posts on such subjects should be made if and when such an attack is found? I would support that standard.
Yes, that’s what I’d like to see. Sadly my mind completely fails whenever I try to generate insight about social issues, so I can’t follow my own exhortation.
cousin_it:
To me, this sounds way too ambitious for a place that advertises itself as a public forum, where random visitors are invited with kind words to join and participate, and get upvoted as long as they don’t write anything outright stupid or bad-mannered.
You’re correct about the reasons why physicists don’t work on on anti-gravity, but you’ll also notice that they don’t work by opening web forums to invite ideas and contributions from the general public. A community focusing strictly on hard scientific and mathematical progress must set the bar for being a contributor way higher, so high that well over 90% of the present rate of activity on this website would have to be culled, in terms of both the number of contributors and the amount of content being generated. At that point, you might as well just open an invitation-only mailing list.
As for the softer (or as you call them, “pre-paradigmatic”) fields, many of them are subject to Trotsky’s famous (though likely apocryphal) maxim that you might not be interested in war, but war is interested in you. Even if it’s something like politics, where it’s far from certain (though far from impossible either) that insight into it can yield useful practical guidelines, by relinquishing thinking about it you basically resign to the role of a pawn pushed around by forces you don’t understand at all. Therefore, since you’ll have an opinion one way or another, it can’t hurt if it’s been subjected to a high-standard rational discussion, even if only for eliminating clear errors of fact and logic. Also, I don’t see anything wrong with discussing such things just for fun.
Moreover, the real problem with such discussions are the “who-whom?” issues and the corresponding feelings of group solidarity, not the inability to resolve questions of fact. In fact, when it comes to clearly defined factual questions, I think the situation is much better than in the hard fields. Progress in hard fields is tremendously difficult because all the low-hanging fruit was picked generations ago. In contrast, the present state of knowledge in softer fields is so abysmally bad, and contaminated with so much bias and outright intellectual incompetence, that a group of smart and unbiased amateurs can easily reach insight beyond what’s readily available from reputable mainstream sources about a great variety of issues. Of course, the tricky part is actually avoiding passions and biases, but that’s basically the point, isn’t it?
I’m afraid that if we accept this suggestion, most posts about softer fields will consist of seemingly plausible but wrong contrarian ideas, and since most of us won’t be experts in the relevant fields, it will take a lot of time and effort for us to come up with the necessary evidence to show that the ideas are wrong.
And if we do manage to identify some correct contrarian insight, it will have minimal impact on society at large, because nobody outside of LW will believe that a group of smart and unbiased amateurs can easily reach such insight.
That is undoubtedly true. However, it seems to me that my main objection to cousin_it’s position applies to yours too, namely that the ambitious goals you have in mind are incompatible with the nature of this website as a public forum that solicits participation from the wide general public and warmly welcomes anyone who is not acting outright stupid, trollish, or obnoxious. On the whole, the outcome you describe in the above comment as undesirable and falling short of your vision is in reality the very best that can be realistically achieved by a public forum with such a low bar for entry and participation.
I absolutely admire your ambitions to achieve progress in hard areas, but building a community capable of such accomplishments requires a radically different and far more elitist approach, as I explained in my other comments. There are good reasons why scientists don’t approach problems by opening web forums that solicit ideas from the public, and don’t try to find productive collaborators among random people who would gather at such forums. Or do you believe that LW might turn out to be the first example of such an approach actually working?
LW does seem to be working to some extent, in the core areas related to rationality. Presumably it’s because even though we’re technically amateurs, we all share enough interest and have enough background knowledge in those areas to spot wrongness relatively quickly.
Also, I believe Math Overflow has previously been cited as another such site, although I’m not personally familiar with it.
Wei_Dai:
What would be the concrete examples you have in mind, if by “working” we mean making progress in some hard area, or at least doing something that might plausibly lead to such progress (i.e. your above expressed benchmark of success)?
The only things I can think of are occasional threads on mathy topics like decision theory and AI cooperation, but in such cases, what we see is a clearly distinguished informal group of several people who are up to date with the relevant knowledge, and whose internal discussions are mostly impenetrable to the overwhelming majority of other participants here. In effect, we see a closely-knit expert group with a very high bar for joining, which merely uses a forum with a much wider membership base as its communication medium.
I don’t think this situation is necessarily bad, though it does generate frustration whenever non-expert members try joining such discussions and end up just muddling them. However, if the goal of LW is defined as progress in hard areas—let alone progress of wider-society-influencing magnitude—then it is an unavoidable conclusion that most of what actually happens here is sheer dead weight, imposed by the open nature of the forum that is inherently in conflict with such goals.
I wouldn’t say that Math Overflow is a good counterexample to my claims. First, from what I understand, it’s a place where people exchange information about the existing mathematical knowledge, rather than a community of researchers collaborating on novel problems. Second, it requires extremely high qualifications from participants, and the discourse is rigorously limited to making technical points strictly pertinent to the topic at hand. That’s an extremely different sort of community than LW, which would have to undergo a very radical transformation to be turned into something like that.
I’d say the bar for joining isn’t very high (you only have to know the right kind of undergraduate math, a lot of which was even covered on LW), and the open forum is also useful for recruiting new members into the “group”, not just communication. Everytime I post some rigorous argument, I hope to interest more people than just the “regulars” into advancing it further.
Besides decision theory and AI cooperation, I mean things like better understanding of biases and ways to counteract them (see most posts in Top Posts). Ethics and other rationality-related philosophy (Are wireheads happy?). Ways to encourage/improve rational discussions. Ways to make probability/decision theory more intuitive/useful/relevant in practice.
It might be that we got into a misunderstanding because we mean different things when we speak about “soft” areas. To me, the topics you listed except for the first two ones, and the posts that exemplify them, look like they could be reasonably described as addressing (either directly or indirectly) various soft fields where the conventional wisdom is dubious, disorganized, and contradictory. Therefore, what you list can be seen as a subset of the soft topics I had in mind, rather than something altogether different.
To support this, I would note that most of the top posts bring up issues (including some ideologically sensitive ones) about which much has been written by prominent academics and other mainstream intellectual figures but in a pre-paradigmatic way, that ethics and philosophy are clear examples of soft fields, and that improvements in the understanding of biases achieved in LW discussions are extremely unlikely to be useful for people in hard fields who already use sophisticated and effective area-specific bias-eliminating methodologies, but they could lead to non-trivial insight in various soft topics (and the highest-scoring top posts have indeed applied them to soft topics, not hard ones).
So, on the whole, the only disagreement we seem to have (if any) is about what specific range of soft topics should be encouraged as the subject of discussions here.
This part:
contradicts the other part:
so I’m not sure that your well-worded and well-upvoted comment even has a point I could respond to. Anyway. The politically charged discussions here have been useless to me (with one exception that sadly didn’t get the follow-through it deserved), so I’ll go on waiting for insight, and avoid talking when I have no insight, and encourage others to do the same.
The comment definitely wasn’t well-worded if it seems like there’s a contradiction there; in fact, my failure to convey the point suggests that the wording was quite awful. (Thus providing more evidence that people are way too generous with upvoting.) So please let me try once more.
I was trying to draw a contrast between the following:
Topics in math and hard science, in which any insight that can’t be found by looking up the existing literature is extremely hard to come by. It seems to me that a public web forum that invites random visitors to participate freely is, as a community, inherently unusable for achieving any such goal. What is required is a closely-knit group of dedicated researchers that imposes extremely high qualifications for joining and whose internal discussions will be largely incomprehensible to outsiders, the only exception being the work of lone geniuses.
Topics in softer fields, in which the present state of knowledge is not in the form of well-organized literature that is almost fully sound and extremely hard to improve on, but instead even the very basics are heavily muddled and biased. Here, in contrast, there is plenty of opportunity to achieve some new insight or at least to make some sense out of the existing muddled and contradictory information, even by casual amateurs, if the topics are just approached with a good epistemology and a clear and unbiased mind.
Of course, a web forum can serve for all other kinds of fun chit-chat and exchange of useful information, but when it comes to generating novel insight, that’s basically it.
Or do you really consider it possible that a public forum that recruits its membership by warmly inviting random readers could meet the standard required for advancing the state of knowledge in some hard area?
Thanks, I understand your point now. It seems my original comment was unclear: I didn’t mean to demand that everyone shut up about soft topics. RobinZ expressed the intended meaning.
cousin_it:
For what that’s worth, I didn’t understand your comment that way. I merely wanted to point out the inherent tension between the public and inviting nature of the forum and your vision of the goals it should ideally achieve.
What kind of follow-through do you think my post deserved? I’m pretty happy with the effect that it’s had on the LW community myself. People do seem to be more careful about giving and taking offense after that post. So I’m curious what you have in mind.
I hoped for more of the kind of analysis you demonstrated. Of course I don’t know what specific advances would happen! But I feel you were doing something methodologically right when writing that post, and the same approach could be applied to other problems.
Honestly I assumed something like that was being used for decision theory.
It is.
Isn’t “progress” a bit of an over-ambitious notion?
Blogs aren’t generally a method of doing science (with the exception of a few collaborative projects in math that are ruthlessly on-topic and appeal to a tiny community). Blogs and forums are great for keeping current with science, for speculating, and for letting off steam and having fun. Those are legitimate functions; why do you want to make this loftier than it is?
“Human understanding progresses through small problems solved conclusively, once and forever.” I don’t know about anyone else, but I find this gravely unsettling applied to politics and social issues. People have different values. Treating these things as problems to be “solved conclusively” is incompatible with pluralism. Politics is meant to be a lot of arguing and conflicts between different interests; I’m scared of anyone who wants it to be anything else. Talking politics on the internet is best done tipsy and in aggressive good humor. If we could do that here, I wouldn’t mind—but somehow I think we’d wind up taking it too seriously.
Our header image says “a community blog devoted to refining the art of human rationality”. I spend a lot of effort trying to improve our shared understanding and don’t consider it over-ambitious at all.
If you know for certain that most disagreements in politics are genuinely about terminal values of different people, rather than about factual questions (what measures would lead to what consequences), then you know more than I do, and I’d be very interested to hear how you established that conclusion. In fact this would be just the kind of progress I wish to see!
Isn’t it enough to show that there are at least some incompatible terminal values? If this is the case, then there can be no overall lasting agreement on politics without resorting to force (avoiding the naked use of which could be seen as the main point of politics in the first place).
Thanks for twisting my mind in the right direction. Phil Goetz described how values can be incompatible if they’re positional. Robin Hanson gave real-world data about positionality of different goods. This doesn’t seem to deal with terminality/instrumentality of values yet...
The political issues which might be discussed are not questions of organization or political structure (in general)--they are questions which belong to some other domain and which happen to have political implications. The fact that they are political changes the nature of discussion about the problem, not the nature of the problem.
I think one effect on the nature of discussion is to stop people from breaking off small problems which they can solve, or assertions whose truth they can honestly assess. Instead, conversations almost always expand until they are unmanageable. For example, starting out with the question “Should I vote for A or B?” is exceptionally unlikely to yield useful discussion. Of course, I don’t know about the interminable discussions which have occurred here in the past or why they might not have been productive. I would guess that the problems discussed were much too large for significant progress to be plausible. This is not because we lack a foothold; it is because we tend to (and there are forces encouraging us to) tackle questions much too large.
I believe it is possible to bite off reasonably sized (i.e., tiny) questions on which we can make genuine progress. If the problem will be around for a long time, such progress might be very valuable. Maybe a small problem takes the form of a simplified hypothetical whose resolution cannot be black-boxed and used in future discussions (because of course that hypothetical will never occur), but which is likely to help us develop general arguments and standards that might be brought to bear on more complex hypotheticals. Maybe it takes the form of some very simple, concrete assertion about the world whose answer is politically charged but which can be resolved decisively with a little research and care. Even if such assertions have limited implications, at least it’s a start toward getting somewhere.
I don’t agree with the original proposal. I believe this tactic (of directly responding to wrongness on controversial issues) is likely to cause us, too, to bite off much too large a problem.
However, I do believe that the criterion for declaring something on-topic is a good one, and that it could increase the value of Less Wrong as a resource if it were used to discuss small issues on which we could reasonably make progress. If such small topics are motivated by general failures of the public discourse, it seems possible that we would eventually accumulate enough rigorous understanding to correct those failures “conclusively” (at least, as conclusively as any discussion on Less Wrong establishes anything).
For the most part the policy was preemptive. We’ve known that politics is the mind-killer since before LW was created and didn’t want to go there. The forays into politically charged areas that have occurred here haven’t done much to disabuse me of that notion, even though the conversations have been ‘less wrong’ than they perhaps would have been elsewhere.