Less Wrong Should Confront Wrongness Wherever it Appears
In a recent discussion about a controversial topic which I will not name here, Vladimir_M noticed something extremely important.
Because the necessary information is difficult to obtain in a clear and convincing form, and it’s drowned in a vast sea of nonsense that’s produced on this subject by just about every source of information in the modern society.
I have separated it from its original context, because this issue applies to many important topics. There are many topics where the information that most people receive is confused, wrong, or biased, and where nonsense drowns out truth and clarity. Wherever this occurs, it is very bad and very important to notice.
There are many reasons why this happens, many of which have been explicitly studied and discussed as topics here. Less Wrong, however, is different. The norms and design of the site are engineered to promote clarity and correctness. Strategies for reasoning correctly are frequently recurring topics, and newcomers are encouraged to read a large back-catalog of articles about how to avoid common errors in thinking (the sequences). A high standard of discourse is enforced through voting, which also provides rapid feedback to help everyone improve their writing. Since Well-Kept Gardens Die by Pacifism, when the occasional nutjob stops by, they’re downvoted into invisibility and driven away—and while you wouldn’t notice from the comment archives, this has happened lots of times.
Less Wrong has the highest accuracy and signal-to-noise ratio of any blog I’ve seen, other than those that limit themselves to narrow specialties. In fact, I doubt anyone here knows a better one. The difference is very large. While we are certainly not perfect, errors on Less Wrong are rarer and much more likely to be spotted and corrected than on any similar site, so a community consensus here is a very strong signal of clarity and correctness.
As a result, Less Wrong is well positioned to find and correct errors in the public discourse. Less Wrong should confront wrongness wherever it appears. Wherever large amounts of utility depend on clear and accurate information, that information is not already prevalent, and we have the ability to produce or properly filter it, we ought to do so, even if doing so is incompatible with status signaling, off topic, or otherwise at odds with non-vital social norms.
So I propose the following as a community norm. If a topic is important, the public discourse on it is wrong for any reason, it hasn’t appeared on Less Wrong before, and a discussion on Less Wrong would probably bring clarity, then it is automatically considered on-topic. By important, I mean topics where inaccurate or confused beliefs would cost lots of utility for readers or for humanity. Approaching a topic from a new and substantially different angle doesn’t count as a duplicate.
EDIT: This thread is producing a lot of discussion about what Less Wrong’s norms should be. I have proposed a procedure for gathering and filtering these discussions into a top-level post, which would have the effect of encouraging people to enforce them through voting and comments.
Less Wrong does not currently provide strong guidance about what is considered on topic. In fact, Less Wrong generally considers topic to be secondary to importance and clarity, and this is as it should be. However, this should be formally acknowledged, so that people are not discouraged from posting important things just because they think they might be off topic! Determining whether something is on topic is a trivial inconvenience of the worst sort.
When writing posts on these topics, it is a good idea to call out any known reasons why the public discourse may have gone awry, to avoid hitting the same traps. If there’s a related but different position that’s highly objectionable, call it out and disclaim against it. If there’s a standard position which people don’t want to or can’t safely signal disagreement with, then clearly label which parts are true and which aren’t. Do not present distorted views of controversial topics, but more importantly, do not present falsehood as truth in the name of balance; if a topic seems to have two valid opposing sides, it probably means you don’t understand it well enough to tell which is correct. If there are norms suppressing discussion, call them out, check for valid justifications, and if they’re unjustified or the issues can be worked around, ask readers not to enforce them.
I would like to add a list of past Less Wrong topics which had little to do with bias, except that the public discourse about them was impaired by it. These have already been discussed, so new posts on them would be discouraged under the duplicates rule (except for substantially new approaches), but they are good examples of the sorts of topics we should all be looking for. The accuracy of criminal justice (which we looked at in the particular case of Amanda Knox); religion, epistemology, and death; health and nutrition, akrasia, specific psychoactive drugs and psychoactive drugs in general; gender relations, racial relations, and social relations in general; social norms in general and the desirability of particular norms; charity in general and the effectiveness of particular charities; philosophy in general and the soundness of particular philosophies.
By inadequate public discourse, I mean that either (a) the topic is complex enough that most information sources are merely useless and confusing, (b) social norms make it hard to talk about, or (c) excessive noise is published about it due to bad incentives. Our job is to find more topics, not in this list, where correctness is important and where the public dialogue is substantially inadequate. Then write something that’s less wrong.
I don’t want LW to change in that direction.
In the famous talk “You and Your Research”, Richard Hamming explained why physicists don’t spend much time researching antigravity: however important the result might be, nobody has a reasonable attack on the problem.
We can talk productively here about topics like decision theory because we have an attack, a small foothold of sanity (established mostly by Eliezer and Wei) that gives us a firm footing to expand our understanding. As far as I can see, we have no such footholds in politics, or gender relations, or most of those other important topics you listed. I’ve been here for a long time and know that most of our interminable “discussions” of these controversial topics have been completely useless. Our rationality helps us maintain a civil tone, but not actually, you know, make progress.
Human understanding progresses through small problems solved conclusively, once and forever. The first step in any pre-paradigmatic field (like politics) is always the hardest: you need to generate a piece of insight that allows other people to generate new pieces of insight. It’s not a task for our argumentative circuitry, it’s a task for sitting down and thinking really hard. Encouraging wide discussion is the wrong step in the dance. If you don’t have a specific breakthrough, I’d rather we talked about math.
Therefore posts on such subjects should be made if and when such an attack is found? I would support that standard.
Yes, that’s what I’d like to see. Sadly my mind completely fails whenever I try to generate insight about social issues, so I can’t follow my own exhortation.
cousin_it:
To me, this sounds way too ambitious for a place that advertises itself as a public forum, where random visitors are invited with kind words to join and participate, and get upvoted as long as they don’t write anything outright stupid or bad-mannered.
You’re correct about the reasons why physicists don’t work on anti-gravity, but you’ll also notice that they don’t work by opening web forums to invite ideas and contributions from the general public. A community focusing strictly on hard scientific and mathematical progress must set the bar for being a contributor way higher, so high that well over 90% of the present rate of activity on this website would have to be culled, in terms of both the number of contributors and the amount of content being generated. At that point, you might as well just open an invitation-only mailing list.
As for the softer (or as you call them, “pre-paradigmatic”) fields, many of them are subject to Trotsky’s famous (though likely apocryphal) maxim that you might not be interested in war, but war is interested in you. Even if it’s something like politics, where it’s far from certain (though far from impossible either) that insight into it can yield useful practical guidelines, by relinquishing thinking about it you basically resign yourself to the role of a pawn pushed around by forces you don’t understand at all. Therefore, since you’ll have an opinion one way or another, it can’t hurt if it’s been subjected to a high-standard rational discussion, even if only for eliminating clear errors of fact and logic. Also, I don’t see anything wrong with discussing such things just for fun.
Moreover, the real problem with such discussions lies in the “who-whom?” issues and the corresponding feelings of group solidarity, not in the inability to resolve questions of fact. In fact, when it comes to clearly defined factual questions, I think the situation is much better than in the hard fields. Progress in hard fields is tremendously difficult because all the low-hanging fruit was picked generations ago. In contrast, the present state of knowledge in softer fields is so abysmally bad, and contaminated with so much bias and outright intellectual incompetence, that a group of smart and unbiased amateurs can easily reach insight beyond what’s readily available from reputable mainstream sources about a great variety of issues. Of course, the tricky part is actually avoiding passions and biases, but that’s basically the point, isn’t it?
I’m afraid that if we accept this suggestion, most posts about softer fields will consist of seemingly plausible but wrong contrarian ideas, and since most of us won’t be experts in the relevant fields, it will take a lot of time and effort for us to come up with the necessary evidence to show that the ideas are wrong.
And if we do manage to identify some correct contrarian insight, it will have minimal impact on society at large, because nobody outside of LW will believe that a group of smart and unbiased amateurs can easily reach such insight.
That is undoubtedly true. However, it seems to me that my main objection to cousin_it’s position applies to yours too, namely that the ambitious goals you have in mind are incompatible with the nature of this website as a public forum that solicits participation from the wide general public and warmly welcomes anyone who is not acting outright stupid, trollish, or obnoxious. On the whole, the outcome you describe in the above comment as undesirable and falling short of your vision is in reality the very best that can be realistically achieved by a public forum with such a low bar for entry and participation.
I absolutely admire your ambitions to achieve progress in hard areas, but building a community capable of such accomplishments requires a radically different and far more elitist approach, as I explained in my other comments. There are good reasons why scientists don’t approach problems by opening web forums that solicit ideas from the public, and don’t try to find productive collaborators among random people who would gather at such forums. Or do you believe that LW might turn out to be the first example of such an approach actually working?
LW does seem to be working to some extent, in the core areas related to rationality. Presumably it’s because even though we’re technically amateurs, we all share enough interest and have enough background knowledge in those areas to spot wrongness relatively quickly.
Also, I believe Math Overflow has previously been cited as another such site, although I’m not personally familiar with it.
Wei_Dai:
What would be the concrete examples you have in mind, if by “working” we mean making progress in some hard area, or at least doing something that might plausibly lead to such progress (i.e. your above expressed benchmark of success)?
The only things I can think of are occasional threads on mathy topics like decision theory and AI cooperation, but in such cases, what we see is a clearly distinguished informal group of several people who are up to date with the relevant knowledge, and whose internal discussions are mostly impenetrable to the overwhelming majority of other participants here. In effect, we see a closely-knit expert group with a very high bar for joining, which merely uses a forum with a much wider membership base as its communication medium.
I don’t think this situation is necessarily bad, though it does generate frustration whenever non-expert members try joining such discussions and end up just muddling them. However, if the goal of LW is defined as progress in hard areas—let alone progress of wider-society-influencing magnitude—then it is an unavoidable conclusion that most of what actually happens here is sheer dead weight, imposed by the open nature of the forum that is inherently in conflict with such goals.
I wouldn’t say that Math Overflow is a good counterexample to my claims. First, from what I understand, it’s a place where people exchange information about the existing mathematical knowledge, rather than a community of researchers collaborating on novel problems. Second, it requires extremely high qualifications from participants, and the discourse is rigorously limited to making technical points strictly pertinent to the topic at hand. That’s an extremely different sort of community than LW, which would have to undergo a very radical transformation to be turned into something like that.
I’d say the bar for joining isn’t very high (you only have to know the right kind of undergraduate math, a lot of which was even covered on LW), and the open forum is also useful for recruiting new members into the “group”, not just communication. Every time I post some rigorous argument, I hope to interest more people than just the “regulars” in advancing it further.
Besides decision theory and AI cooperation, I mean things like better understanding of biases and ways to counteract them (see most posts in Top Posts). Ethics and other rationality-related philosophy (Are wireheads happy?). Ways to encourage/improve rational discussions. Ways to make probability/decision theory more intuitive/useful/relevant in practice.
It might be that we got into a misunderstanding because we mean different things when we speak about “soft” areas. To me, the topics you listed, except for the first two, and the posts that exemplify them, look like they could be reasonably described as addressing (either directly or indirectly) various soft fields where the conventional wisdom is dubious, disorganized, and contradictory. Therefore, what you list can be seen as a subset of the soft topics I had in mind, rather than something altogether different.
To support this, I would note that most of the top posts bring up issues (including some ideologically sensitive ones) about which much has been written by prominent academics and other mainstream intellectual figures, but only in a pre-paradigmatic way. Ethics and philosophy are clear examples of soft fields. And while the improvements in the understanding of biases achieved in LW discussions are extremely unlikely to be useful to people in hard fields, who already use sophisticated and effective area-specific bias-eliminating methodologies, they could lead to non-trivial insight in various soft topics (and the highest-scoring top posts have indeed applied them to soft topics, not hard ones).
So, on the whole, the only disagreement we seem to have (if any) is about what specific range of soft topics should be encouraged as the subject of discussions here.
This part:
contradicts the other part:
so I’m not sure that your well-worded and well-upvoted comment even has a point I could respond to. Anyway. The politically charged discussions here have been useless to me (with one exception that sadly didn’t get the follow-through it deserved), so I’ll go on waiting for insight, and avoid talking when I have no insight, and encourage others to do the same.
The comment definitely wasn’t well-worded if it seems like there’s a contradiction there; in fact, my failure to convey the point suggests that the wording was quite awful. (Thus providing more evidence that people are way too generous with upvoting.) So please let me try once more.
I was trying to draw a contrast between the following:
1. Topics in math and hard science, in which any insight that can’t be found by looking up the existing literature is extremely hard to come by. It seems to me that a public web forum that invites random visitors to participate freely is, as a community, inherently unusable for achieving any such goal. What is required is a closely-knit group of dedicated researchers that imposes extremely high qualifications for joining and whose internal discussions will be largely incomprehensible to outsiders, the only exception being the work of lone geniuses.
2. Topics in softer fields, in which the present state of knowledge is not in the form of well-organized literature that is almost fully sound and extremely hard to improve on, but instead even the very basics are heavily muddled and biased. Here, in contrast, there is plenty of opportunity to achieve some new insight or at least to make some sense out of the existing muddled and contradictory information, even by casual amateurs, if the topics are just approached with a good epistemology and a clear and unbiased mind.
Of course, a web forum can serve for all other kinds of fun chit-chat and exchange of useful information, but when it comes to generating novel insight, that’s basically it.
Or do you really find it within the realm of the possible that a public forum that gets its membership by warmly inviting random readers might be up to standard for advancing the state of knowledge in some hard area?
Thanks, I understand your point now. It seems my original comment was unclear: I didn’t mean to demand that everyone shut up about soft topics. RobinZ expressed the intended meaning.
cousin_it:
For what that’s worth, I didn’t understand your comment that way. I merely wanted to point out the inherent tension between the public and inviting nature of the forum and your vision of the goals it should ideally achieve.
What kind of follow-through do you think my post deserved? I’m pretty happy with the effect that it’s had on the LW community myself. People do seem to be more careful about giving and taking offense after that post. So I’m curious what you have in mind.
I hoped for more of the kind of analysis you demonstrated. Of course I don’t know what specific advances would happen! But I feel you were doing something methodologically right when writing that post, and the same approach could be applied to other problems.
Honestly I assumed something like that was being used for decision theory.
It is.
Isn’t “progress” a bit of an over-ambitious notion?
Blogs aren’t generally a method of doing science (with the exception of a few collaborative projects in math that are ruthlessly on-topic and appeal to a tiny community). Blogs and forums are great for keeping current with science, for speculating, and for letting off steam and having fun. Those are legitimate functions—why do you want to make this loftier than it is?
“Human understanding progresses through small problems solved conclusively, once and forever.” I don’t know about anyone else, but I find this gravely unsettling applied to politics and social issues. People have different values. Treating these things as problems to be “solved conclusively” is incompatible with pluralism. Politics is meant to be a lot of arguing and conflicts between different interests; I’m scared of anyone who wants it to be anything else. Talking politics on the internet is best done tipsy and in aggressive good humor. If we could do that here, I wouldn’t mind—but somehow I think we’d wind up taking it too seriously.
Our header image says “a community blog devoted to refining the art of human rationality”. I spend a lot of effort trying to improve our shared understanding and don’t consider it over-ambitious at all.
If you know for certain that most disagreements in politics are genuinely about terminal values of different people, rather than about factual questions (what measures would lead to what consequences), then you know more than I do, and I’d be very interested to hear how you established that conclusion. In fact this would be just the kind of progress I wish to see!
Isn’t it enough to show that there are at least some incompatible terminal values? If this is the case, then there can be no overall lasting agreement on politics without resorting to force (avoiding the naked use of which could be seen as the main point of politics in the first place).
Thanks for twisting my mind in the right direction. Phil Goetz described how values can be incompatible if they’re positional. Robin Hanson gave real-world data about positionality of different goods. This doesn’t seem to deal with terminality/instrumentality of values yet...
The political issues which might be discussed are not questions of organization or political structure in general; they are questions which belong to some other domain and which happen to have political implications. The fact that they are political changes the nature of discussion about the problem, not the nature of the problem.
I think one effect on the nature of discussion is to stop people from breaking off small problems which they can solve or assertions whose truth they can honestly assess. Instead conversations almost always expand until they are unmanageable. For example, starting out with the question “Should I vote for A or B” is exceptionally unlikely to yield useful discussion. Of course, I don’t know about the interminable discussions which have occurred here in the past or why they might not have been productive. I would guess that the problems discussed were much too large for significant progress to be plausible. This is not because we lack a foothold—it is because we tend to (and there are forces encouraging us to) tackle questions much too large.
I believe it is possible to bite off reasonably sized (i.e. tiny) questions which we can make genuine progress on. If the problem will be around for a long time, such progress might be very valuable. Maybe a small problem takes the form of a simplified hypothetical whose resolution cannot be black-boxed and used in future discussions (because of course that hypothetical will never occur) but which is likely to help us develop general arguments and standards which might be brought to bear on more complex hypotheticals. Maybe it takes the form of some very simple, concrete assertion about the world whose answer is politically charged but which can be resolved decisively with a little research and care. Even if such assertions have limited implications, at least it’s starting to get you somewhere.
I am not in agreement with the original topic. I believe this tactic (of responding to wrongness directly regarding controversial issues) is likely to cause us to also bite off much too large a problem.
However, I do believe that the criterion for declaring something on-topic is a good one, and could increase the value of Less Wrong as a resource if it was used to discuss small issues which we could reasonably make progress on. If such small topics are motivated by general failures of the public discourse, it seems possible that we would eventually accumulate enough rigorous understanding to correct those failures “conclusively.” (at least, as conclusively as any discussion on Less Wrong establishes anything)
For the most part the policy was preemptive. We’ve known politics is the mind-killer from before LW was created and didn’t want to go there. What forays into politically charged areas have occurred here haven’t done much to dissuade me from that notion, even though the conversations have been ‘less wrong’ than they perhaps would have been elsewhere.
My concern about repealing the ban on discussing politics is that it risks deteriorating the quality of the discussion here—not only by increasing the risk of flame wars, but more subtly by making other people aware of who they agree with or disagree with on politics, which might make some other discussions more polarized too.
Right now, while browsing a discussion here, I generally have no idea about the politics of those involved, and I kinda like that. If there was a lot of discussion on more divisive subjects, my judgement might be more clouded—“oh, he’s an asshole anyway, I won’t pay attention to him”.
Maybe I’m being too pessimistic about the capacity of the average LessWronger to be honest with himself and others about his bias.
I would be in favour of opening a separate forum, where LessWrongers log in with different usernames but try to stick to LessWrong conventions. That would avoid the “reputation bleedback” to the main LessWrong, and it makes it easier to cut the limb off if it shows signs of gangrene.
Upvoted for this.
Agreed. And it works the other way too: I like that I can post here, knowing that people will judge my comments on their merit, without taking into account my politics and whether I favor this or that policy.
I never thought of it this way, but you are a counter-example—your values diverge from mine far more than those of liberals, conservatives and libertarians will, yet you still don’t get downvoted to oblivion.
However, this is mitigated by the fact that I don’t think everybody here believes you sincerely want to maximize the number of paperclips (sorry for breaking this difficult truth to you).
I imagine Clippy will be quite ok with that—it makes it all the easier to execute (yes, right word) his plan for paperclipification of all space when the time comes....
This is an argument for being very protective of the s/n ratio here, and to “confront wrongness wherever it appears” is the opposite of being protective.
I recommend we delay pursuing the ambitious, worthwhile goal you advocate until we have better mechanisms for protecting the quality of a conversation.
Did you have any mechanisms in mind?
Sure. I do not want to go into detail right now on what mechanisms strike me as the most likely to work, so let me just suggest that a better system looks more like the “procedural law” used by our courts. I will also say that Less Wrong does not use a technique for protecting the quality of the conversation (namely, refraining from pursuing unrestrained growth in membership) that seems to have served the group blog Hacker News well.
I think it is rational to avoid taking on an ambitious goal at this point even if you do not believe that I know of better mechanisms for protecting conversations than those used on Less Wrong.
ADDED. Before we talk about how to improve Less Wrong, we should talk about why Less Wrong is better than any other online conversation for some reasonable definition of “good”.
Delaying is one thing, but how are we going to know when we have the better mechanisms? There must be some empirical testing too; maybe there should be one experimental post dealing with some really noisy topic (politics is what I have in mind now, but we may choose another), and we could see how it evolves. Then, close the discussion after one week, pause to cool down, and evaluate.
Empirical tests are important. In fact, I would probably alter the publicly-visible workings of the site to get better data that can help us discriminate between the competing hypotheses about how LW continues to have the high s/n ratio it does have and about how to protect it from various stresses (like discussion of mind-killers). The online rationality quiz hosted by darius at http://wry.me:7002/ is an example of the kind of thing I would add to the site.
As to my hypothesis that growing the membership too quickly risks LW becoming just another online community, that can be tested without altering the public face of LW: in particular, by collecting statistics on the rate at which new commentators, posters, and voters appear and comparing that to the rate at which established voters stop voting.
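A minimal sketch of what that comparison could look like, assuming we could export a simple activity log; the event format, field names, and monthly granularity here are made-up illustrations, not anything the site actually provides:

```python
from collections import defaultdict
from datetime import datetime

def month(ts: datetime) -> str:
    return ts.strftime("%Y-%m")

def participation_stats(events):
    """events: iterable of (user, action, timestamp) tuples, with action in
    {'comment', 'post', 'vote'}. Returns, per month, the number of first-time
    participants and the number of previously active voters who went silent."""
    events = sorted(events, key=lambda e: e[2])
    first_seen = {}                      # user -> month of first activity
    voters_by_month = defaultdict(set)   # month -> users who voted that month
    for user, action, ts in events:
        m = month(ts)
        first_seen.setdefault(user, m)
        if action == "vote":
            voters_by_month[m].add(user)

    new_users = defaultdict(int)
    for m in first_seen.values():
        new_users[m] += 1

    months = sorted(voters_by_month)
    stats = []
    for prev, cur in zip(months, months[1:]):
        lapsed = len(voters_by_month[prev] - voters_by_month[cur])
        stats.append((cur, new_users[cur], lapsed))
    return stats

# Example with made-up data: carol arrives and then stops voting.
log = [
    ("alice", "vote", datetime(2010, 3, 5)),
    ("alice", "vote", datetime(2010, 4, 2)),
    ("bob", "comment", datetime(2010, 4, 10)),
    ("carol", "vote", datetime(2010, 4, 20)),
    ("carol", "comment", datetime(2010, 5, 1)),
]
for month_label, newcomers, lapsed_voters in participation_stats(log):
    print(month_label, newcomers, lapsed_voters)
```

If the newcomer rate keeps climbing while the lapsed-voter rate climbs with it, that would be at least weak evidence for the hypothesis.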
I will admit that I stopped voting about 6 weeks ago in response to the increase in bad comments probably caused by the influx of new participants from the Harry Potter fanfic and the (simultaneous with the fanfic) efforts by one of the SIAI visiting fellows to apply SEO techniques to the site. I am by disposition easily annoyed, and when I exercise my “critical / judgmental” mental state enough, I tend to get caught up in that state and cannot get myself out of it, which is bad for my health and for my personal relationships, and it was just too annoying and too internally costly to pay enough attention to comments by new writers and the usual old writers of poor-quality comments to vote on them. So, you see, I already have one piece of empirical evidence (namely my observation of my own behavior) that LW is more stressed than it was 6 months ago, and consequently now is not the time to add a new stressor.
The only way I know of for LW to continue to have a positive effect on the world is for it to keep the quality of its conversation high. (Of course, the benefits it already has had on the world, e.g., in the form of increasing the rationality of readers, can compound themselves, e.g., by the more rational readers and former readers doing good in the world outside LW. But that would happen even if LW disappeared today.)
jimrandomh has identified a very potent way to improve the world: try to apply whatever processes make LW as good as it is to a greater fraction of the public discourse.
The strongest argument I know of against jimrandomh’s proposal is that it does not talk about why he thinks LW came to have the high s/n ratio it does have and how and under what conditions that quality is likely to be lost.
Upvoted for this. We need to take a good hard look at our current user base and figure out why Less Wrong seems to be (mostly) working.
Quickly growing membership can indeed endanger the quality of the discourse and lower the s/n ratio; an important question is how quickly is too quickly. I suppose there are lots of rational people out there who are unaware of LW from whose membership the community may profit.
If the high quality is a mere result of the high average rationality level of current members, it would be sufficient to filter newcomers according to their rationality. That can be done by the karma system, and the growth can still be pretty quick (I assume somebody repeatedly voted down to a negative total would lose interest in a matter of days).
If, on the other hand, the main reason for the high s/n ratio lies in some surplus quality which isn’t a simple result of members’ individual rationality, but rather a set of customs, unwritten laws, atmosphere, or a hard-to-describe “spirit” of the community, then the acceptable growth rates would be much slower, as new members have to be acclimatised before they become a majority.
Then we need data, of course. Are there any accessible membership statistics showing a systematically increasing rate of new registrations? Is there a real danger, or is the influx of new members only a temporary phenomenon? And why do you think jimrandomh’s proposal to broaden the scope of LW implies a larger membership increase? (I also think so, but I would like to hear your reasons.)
Anyway, you seem to have some ideas about how to preserve LW’s quality; perhaps you could write a summary of them, either as a comment here or as a top-level post.
IMHO a large temporary influx of new members like LW has had over the last 6 months is a real danger to the quality of the conversation on LW because once enough of the “pillars of the community” become discouraged and leave, nothing short of the establishment of a new web site or some kind of “hard reset” of LW would have a significant probability of bringing the quality back again.
We seem to be holding for now, but my impression is that we could easily go off course in a mere matter of months if we don’t tread carefully.
We need to analyze what the top contributors think and feel about the community and what motivates them to participate. We also need to figure out what the masses of registered posters are like, when they arrived, and for what periods they were active.
prase is not sure whether “the high quality [of LW] is a mere result of the high average rationality level of current members” or whether “the main reason for the high s/n ratio lies in some surplus quality which isn’t a simple result of members’ individual rationality, but rather a set of customs, unwritten laws, atmosphere, or a hard-to-describe ‘spirit’ of the community”.
I am not sure either, but if it were easy for highly rational people to combine their rationality in organizations, associations or public discourses, I would expect organizations, associations and public discourses to have been more effective and less pernicious than they have been. Consider organizations like the CIA, FBI, Harvard University and the White House that can pick their employees from a large pool of extremely committed, intelligent and well-educated applicants. Consider the public discourse conducted by elite journalists and editors during the heyday of the New York Times, Washington Post, etc. Those elite journalists, for example, contributed greatly to the irrational witch hunt around sexual abuse at day-care centers and “recovered memory” sex-abuse trials in the 1980s.
You are probably right, and I tend now to favour slower membership growth.
But another issue comes to mind: we should have some more objective methods of measuring the s/n ratio whether or not membership increases, because any community is in danger of falling prey to mutual reassurances about how great and exceptional it is.
There is the danger that LW will become a mutual-admiration society, but if it does, the worst effect will probably be that people like you and I will have to find other places for discussion.
If SIAI becomes a mutual-admiration society, that is more serious, but LWers who are not SIAI insiders will have little control over whether that happens. (And the insiders I have gotten to know certainly seem able enough to prevent the possibility.)
So the question becomes, Is the risk that LW will become a mutual-admiration society higher than the risk that “confronting wrongness wherever it appears” (jimrandomh’s proposal in jimrandomh’s words) will change LW in such a way that the voters and commentators who have made it what it is will stop voting or commentating?
I didn’t mean it as a dilemma of “mutual admiration society” vs. “indiscriminate battle against wrongness”; that would hardly make sense. I am not even really afraid of LW becoming a mutual-admiration society or a cult or something like that.
I only intended to ask a question (more or less unrelated to the original discussion): how reliably do we know that the s/n ratio is really high? There is a lot of room for bias here, since “this is an exceptionally rational community” is what we like to hear, while people with a different opinion aren’t heard: why would they participate in a rationalist community if they thought it wasn’t so rational after all? Put another way, any community which values rationality—independently of how it defines rationality and whether it really meets that standard—is likely to produce such self-assuring statements.
So when I hear about how LW is great, I am a little bit worried that my (and everybody else’s) agreement may be biased. As always, a good thing would be to have either an independent judge, or a set of objective criteria and tests. That could also help to determine whether the LW standards are improving or deteriorating in time.
Several years ago, I used to go to predominantly Christian conservative forums and get in arguments. It was mostly for entertainment, but it was also kind of an eye-opening experience. I did the same sort of thing, in different forums, for nuclear power. Most big controversial issues are so steeped in the Dark Arts that you’ll just get overshouted if you try to calmly lay out a reasonable and well-organized argument. As the saying goes, they’ll drag you down to their level and beat you with experience.
It’s possible to be frightfully effective in such an argument, but the methods you have to use feel pretty dirty, because they are. If you can force people to defend indefensible implications of their position, you score points. If you can trick a bunch of opponents into saying something obviously stupid, you score points. If you can find a loud idiot and make a fool of him in public, so that the lurkers don’t want to be associated with him, you score big. And often pure typing speed helps; if you’re answering the same arguments again and again, you can fire off a lot of replies very fast, flying on autopilot. A handful of vocal people using such tactics can change the default views of a whole forum. It works.
...But none of this feels right. I feel like there ought to be a better way to influence people’s views, but I just don’t see it. This is part of why I like Less Wrong: people are actually looking for the truth instead of trying to fling a conclusion at the world. I don’t know if it’s even possible to have that kind of discussion in most places.
That’s the kind of methods we should get good at calling out here. I’m not sure we are yet. The skills for group rationality are different from those for individual rationality, which are those that have been discussed the most here.
Listing the dirty, dark-arts techniques is a good first step towards fighting them.
The bulk of this work was done a couple of centuries ago, with the same intent, by Schopenhauer in The Art of Being Right. Mass-media and Internet bulletin boards offer new techniques, but a lot hasn’t changed from the days of handwritten letters and literary salons.
There are three things, as I see it, that LW could be, and posts seem to revolve around one of these three categories.
1. An all-purpose discussion forum, about all sorts of things, designed for people with a roughly technical bent who like a high quality of discussion. Topics that seem to draw a lot of attention are non-mainstream controversies, philosophy, politics, and self-help.
2. A blog about bias, rationality, and how to exercise more of the latter and less of the former. Topics can be technical (drawing on cognitive sciences and probability theory) or non-technical (illustrations of common fallacies and how to evade them).
3. A blog about decision theory, FAI, and existential risk.
Personally, I would enjoy 1 or 2, and I’d be less interested in 3. I wouldn’t mind LW going in a looser, more casual direction; alternatively, I wouldn’t mind it going in a more focused, technical direction. I am interested in the intersection between, roughly, statistics and cognition, and I think that we could get a lot of interesting speculation done about how people think. I’d probably quit if LW became all decision theory, all the time, because that’s not really one of my interests, but I can see that it would be useful for people who are focused on that.
The other option is maintaining the status quo: balancing aims 1, 2, and 3, and using downvotes to eliminate anything wildly off message. Something for everyone.
I like a good political rant as much as the next person, but I do think it’s nice to have the taboo against it here. What I’m particularly concerned about (apart from the usual bias/mind-killer stuff) is that I’ve seen a lot of posts to the effect that the general public is “wrong” on a lot of topics, that we could “solve” problems here, or come to “consensus.” It seems likely that we’d start to get people claiming that “any sufficiently advanced rationalist would agree with me politically.” Especially because we have a lot of utilitarians here, there’s a real danger of getting dogmatic.
If we do open up the floor to a little random fun, I hope it won’t be politics. Health and nutrition, good books, psychoactive drugs, productivity and social skills tips, current science and tech—that sounds good. Politics could get ugly.
Yeah, I could definitely see that happening. I’d probably be the one to do it, too.
Bravo!
I would want to go even further, and strike out (perceived) “importance” as a barrier. Thinking in terms of “importance” will tend to cause our minds to stay within certain topic clusters, when what we actually want is more variety of topics. Rationality lessons are often most illuminating when applied in situations we don’t stereotypically think of as illustrating rationality lessons. People may have pet topics or specialized areas of expertise that they would like to post on, but don’t because of a fear that their subject isn’t “important enough” (which in practice tends to mean being about the topics most commonly discussed here). This is unfortunate, because rationality literally applies everywhere; and I think an aspiring rationalist should seek out as many diverse opportunities for honing their general rationality skills as possible. This will prove useful when it comes to the “important” topics.
On the other hand,
I actually wouldn’t want to restrict duplicates to new approaches to the subject itself; I think a new specific lesson on rationality should suffice. Familiarity has its advantages too. (For example, there are a number of Bayesian lessons that I have learned from my study of the Knox case since the original discussion, and I would hope to be able to post in the future on some subset of these, using this particular vivid illustration, without too much objection on the grounds that the topic “has already been done”.)
I would like to know how widely agreed-on this attitude is on LW; I have been specifically resisting writing posts about random things which are of interest to me but don’t correlate to any of the site’s major common themes (rationality in the abstract, AI, health, philosophy). I’d be happy to write posts applying rational principles to everyday circumstances, but I’d want a stronger signal that the community would appreciate it first.
I for one am at least interested in the concept. Whether individual posts would be worthwhile or not is another matter. May I suggest you use the open threads to provide a first cut at topics you mean to address, refine it based on feedback, then post it and see how it goes? Remember that top level posts need a certain karma level to appear on the front page, so if the community doesn’t like the post, you won’t be pushing other topics off the front page.
If the backlash would be great against a top level post, you should be able to ascertain that from the open thread first.
That’s a good point; thank you for reminding me of that option.
FWIW, by way of codified guidelines, we have this on the About page:
and this in the FAQ, under “When should I make a top-level post”:
Combining those, I see how I got the idea that I shouldn’t bother making a top-level post unless I had something particularly new and clever to say about rationality itself.
You’re misreading jimrandomh, who is proposing that we discuss these non-rationality topics not because (and not only when) they illustrate principles of rationality, but because they are important in their own right. I say that if we adopt a policy like that, we apply it only to the most important topics—mainly existential risk. Or if the point is to draw people here, to anything sufficiently shiny.
I’m saying that there really aren’t any “non-rationality topics” (i.e. a post on any topic can be a post about rationality) and that, insofar as one believes (as I do) that raising the general level of sanity is among the most important goals there are, it is to our benefit (with respect to that important goal) to encourage a wide range of contributions and not be too restrictive or cliquey about topics.
To exactly what extent jimrandomh agrees, I don’t know, but this thought was prompted by his post.
A post on any topic can be a post about rationality if you put in specific work to make it so; I read jimrandomh as saying there’s no need to put in such work.
Your reading may be the intended one; we’ll have to await jimrandomh’s clarification. Meanwhile, the following paragraph is what led me to believe that jimrandomh is not actually proposing a change in topic policy:
The posting policies of Less Wrong are effectively determined by up- and downvoting and by commenting, which means that everyone can have their own ideas about what those policies are. Whether my proposal agrees with the current policy or not depends on what you think the current policy is, and that’s unclear. My intention is to clarify it in the way I think gives the highest utility. I don’t think it substantially disagrees with the policies currently enforced as indicated by voting.
Not exactly; I ask people to call out the problems commonly impairing the rationality of other discussions of the topic, to help avoid them, and including that explanation would make any article at least a little bit “about rationality”. However, the justification for including the connection to rationality is to protect the quality of the conversation, not to satisfy an “is on-topic” requirement.
Fair enough; still, I feel most people underestimate how radical a change it is from “a community blog devoted to refining the art of human rationality” to “a community blog devoted to providing correct analysis on important subjects, informed by previous writings on the art of human rationality, and maybe refining that art a little bit as a side effect”.
We seem to have a mix of pure and applied reasoning topics. It would be bad if we lost one of those categories, or if the ratio got too far out of whack. Since we accept posts on lots of random important subjects, any effects of letting random important subjects in have already happened, and they clearly weren’t disastrous.
To be honest, at one point I considered talking about how rationality could be used to improve AI, gameplay, and gaming skills in computer and video games, especially the strategy genre.
Would you consider such a discussion too trivial for rationality insights? Or would its usefulness be limited just to outreach and popularization of rationality?
PS: Is anyone here a Civilization player?
The thing with those games is that there are so many approaches you can go with, and you need to be uncommonly rational in choosing. For example, the usual advice in the early game is to build new cities as quickly as you can, because the sooner you build them, the sooner they can start expanding and producing resources—it’s a straightforward example of exponential growth and compound interest. But what if the map is set up with mostly water, and you start on an island without much space? You’ve got to adapt your play-style, and you’ve got to do it decisively. You expand as much as you can, then focus your efforts on improving the cities you do have, until you get boats and can start expanding again, this time backed by a nice industrial base and plenty of population. Because you’re on an island, you can leave cities undefended without too much risk, so you can devote more of your crucial early resources to things that will yield compound interest, like terrain enhancements.
It feels like an exercise in min-maxing, and more importantly, figuring out what to focus on and what to neglect—and having the audacity to go through with a plan that feels crazy but is actually very sane. I think that’s the main rationality habit you can take from playing Civilization.
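For what it’s worth, here is a toy numeric sketch of that compound-interest intuition; the growth rate and turn counts are invented for illustration and are not actual Civilization mechanics:

```python
def total_output(founded_turn: int, end_turn: int = 300, growth: float = 1.03) -> float:
    """Cumulative output of a single city from its founding until end_turn,
    assuming its per-turn output compounds by a fixed factor as it grows."""
    output, total = 1.0, 0.0
    for _ in range(founded_turn, end_turn):
        total += output
        output *= growth  # the city grows, works better tiles, builds improvements
    return total

print(total_output(20))   # city founded early
print(total_output(80))   # the same city founded 60 turns later produces far less overall
```

Under these made-up numbers, the early city ends up several times more productive in total, which is the whole case for prioritizing things that compound.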
I think the main rationality habit you can take from playing Civilization is “Don’t play Civilization if you value your time at all”.
Not that I intend to actually follow that advice once Civ 5 comes out… oh crap, that’s today, isn’t it? Why did you have to remind me?
No More Turns!
I never played Civ IV, quite deliberately.
Then I suggest you don’t play Sword of the Stars either.
That was just cruel! ;)
So about a month has passed now since this thread was originally posted… just wondering, have you had a chance to play Civ 5 yet? Just curious to hear another LWer’s take on the new game in the franchise.
Do you have any particular topics or categories of topics in mind?
While I am generally impressed with the level of rationality in discourse here, I really doubt that we have the quantity of information necessary to really come to a well-informed consensus on any controversial subject.
I’m not sure exactly what kinds of issues the OP had in mind, but I worry that we wouldn’t do particularly well here on topics like AGW, or what went wrong with the housing market, or how much of IQ is genetic, or nuclear power, or any number of other fascinating and important topics where a dose of rationality might do some good.
We wouldn’t do well on these topics, in spite of our rationality, because doing well on these topics requires information, and we don’t have any inside track to reliable information.
Perplexed:
There are indeed such topics, but in my opinion, none of the specific ones you mention are among them. In all of these, the public information available at the click of a mouse (or, in the worst case, with a visit to a university library) is almost as good as anyone in the world has, and the truly difficult part is how to sort out correct insight from bullshit using general rules of good epistemology.
Well, it certainly might be an interesting experiment. I think it would be a good idea to try it out on one test case, then follow up with a serious post-mortem to come up with ground rules so as to do it even better next time.
The difficulty is to choose a topic which grabs the attention and engages the intellects of a large fraction of the people here. But something which is fresh enough that no one already has quasi-religious commitments to a position on the subject.
One idea that comes to mind is space exploration, in particular the exploration and exploitation of our own solar system in the absence of an AI singularity. There are sub-issues related to history, economics, psychology, and engineering. What ought to be the next step? Moon, Mars, asteroids, earth orbit? Do we send men, or robots? Do we dare to build a space elevator? Should we terraform Mars? Can we terraform Mars? Should we instead focus on colonizing and exploring the oceans of earth? What kinds of propulsion systems should we develop? Tethers? Fusion? Was Project Orion insanity or a missed opportunity which we should not miss a second time?
This is one area where I suspect we can acquire information that is as good as exists anywhere. It is an area that many of us have some interest in. But I’m not quite sure how many of us have enough time and interest to dig deep enough into the available information to really come up with viewpoints that are new, interesting, and also correct.
Re: Should we instead focus on colonizing and exploring the oceans of earth?
That is step 2, according to Marshall Savage, and I agree.
One problem that springs to mind is the origin of life. There are mountains of evidence relating to this topic—and I expect a rational agent could figure out many of the important details if given the existing public evidence. I expect LessWrong would do badly on this topic—due to lack of knowledge.
Another topic is how—in practice—to implement an oracle—a.k.a. a resource-limited Solomonoff induction agent. That plays to LessWrong’s strengths—and is based on a “neat” pure maths problem.
Yes, there are topics for which most of us don’t have sufficient information to write about them or discuss them. However, people with that information almost always do exist. Therefore, we should encourage them to come here and share that information. Maximizing the chance that those people do write here means carefully avoiding things that might deter them, and fear of being seen as off-topic is such a deterrent.
I believe that if Less Wrong users changed the posts they made and the posts/comments they upvoted to try and “confront wrongness wherever it appears,” as you suggest, the value of Less Wrong to human society would be significantly increased.
Contrary to the principles behind Less Wrong (or what I perceive to be the principles behind Less Wrong), I doubt that most Less Wrong users can get anywhere near as much value by thinking about being rational for a minute as they could by applying their rationality for a minute. Trying to develop consensus among Less Wrong users on difficult issues seems like it is almost certainly the most effective way we can apply our rationality in the context of making posts on Less Wrong, because consensus among Less Wrong users is a valuable thing to reference when making actually important decisions about issues which are not, strictly speaking, relevant topics for Less Wrong.
Whether the correct response is to stop reading/posting on Less Wrong or to confront wrongness on Less Wrong is unclear (you can tell from my post count what I have done in the past).
[In general I enjoy Less Wrong and believe its existence is good, but I also tend to agree with some recent posts that for most people it is a distraction rather than a resource and should be treated as such.]
This.
I agree, there are very few disruptors and trolls here.
I’m inclined to agree with your proposal, but I wonder if there are supplementary community norms that, if made explicit, might make it easier to venture into confusing and polarizing topics without losing LW’s usual level of accuracy and of signal to noise. (I assume fear of filling the blog with nonsense, and thereby losing some good readers/commenters, is much of what currently keeps e.g. political discussion off of LW.)
Maybe it would help to have heuristics such as “if you don’t have anything clear and obviously correct to say, don’t say anything at all”, that could be reiterated and enforced when tricky topics come up.
Far from everything on LW is obviously correct.
Yes. I’d meant to suggest a more stringent standard when attempting discussion on topics that are both confusing and polarizing.
Is “obviously” a more stringent standard?
We can confine such topics to appropriate subreddits by an explicit moderation policy. Each such subreddit may have additional, more specific rules and policies (shown in the sidebar, for example.)
Other such suggestions in the comments here.
About half a million comments have appeared so far on LW. How many of those have you voted up or down, Anna?
I am not saying that you should spend a lot of your time voting on LW, but I am more interested in whether those who have voted a lot agree with jimrandomh’s proposal.
Like I said a few days ago, I stopped voting about 6 weeks ago, in response to an increase in bad comments. Now I scroll past comments by writers I do not recognize or writers that have annoyed me too often in the past, and my ex-girlfriend no longer complains of my being overly critical or judgmental. (I am pretty sure that the task of judging the “voteworthiness” of comments is what pulled me into the critical/judgmental state, which I find hard to get out of. Perhaps I am unusual in this regard, but the distaste expressed by many academics for reviewing papers suggests that I am not.)
I fear this would reduce LessWrong to referencing research papers. Perhaps there is more value in applying rigor as disagreements emerge: a process of going from two people flatly disagreeing to establishing criteria for choosing between them, i.e. a norm concerning a process for reaching reasonable conclusions on a controversial topic. In this way there would be greater emphasis on turning ambiguous issues into reasonable ones, which I view as one of the main benefits of rationality.
“Less Wrong Should Confront Wrongness Wherever it Appears”
As an economist I can tell you that most public discussions of economics contain a huge amount of wrongness. Do we want hundreds of econ posts?
Economic policy is a prime example (and maybe a rare example) where learning a small amount of factual information (basically, the content of Econ 101) is enough to change people’s views, to the point that there are areas of rough normative consensus among everybody who knows Econ 101.
Economists are already popularizing Econ 101, very well in my opinion, and we don’t have to do it here.
The other thing I want to point out is that we should not expect everything in the world to be like economics. It is not always the case that a little evidence and a little rational thought will create wide agreement. It is not always the case that opinions divide neatly into the “smart” and the “stupid.” I really don’t want LW to develop the attitude that “with a little rationality we can do better than the rest of the world!” on normative issues.
Especially since that often seems to involve confusing terminal values with ‘rationality’.
Sometimes much of the rest of the world is really, really stupid. We already do much better than some people on normative issues by not getting our morality from religious dogma.
I, for one, would love to have a place where a rational and open-minded discussion of economics would be possible with people who have some knowledge of the subject. In my experience, and with very few honorable exceptions, economists are extremely difficult to reason with as soon as one starts questioning the logical and empirical soundness of some basic concepts in modern economics, or pointing out seemingly bizarre and illogical things found in the mainstream economic literature. You quickly run into an authoritative and stonewalling attitude of the sort that you never get by posing similar questions to, say, physicists.
I would venture to say that a radical re-examination of several basic economic concepts is probably the lowest-hanging fruit when it comes to valuable insight that could potentially be gained by a group of smart amateur contrarians. The whole field is certainly long overripe for the sort of treatment that Robin Hanson metes out to medicine.
I suggest an Economics Open Thread.
I’ve been confused by economics in this way too.
I’m not confident enough to say that it means something’s wrong with economists, but I can never tell where the assumptions come from. The way I was taught in school, if the professor was a Rational Expectations guy, he would teach the course as though Keynes had never been born; and vice versa. It was like the blind men and the elephant. Very disappointing. I could have used an aerial view.
Paul Krugman has made similar criticisms.
I’m sure this wasn’t your intent, but this comes across to me like a situation where you’ve been having a high-level conversation about economics and then switch to asking the economist to explain or justify the basic premises of the field to you. While the idea that you need to be convinced of the truth of the basics before productively discussing the complexities is sound, I’m less hasty to assume that the economists’ refusal is due to a disregard for rationality. They may just be less interested in teaching you lower-level concepts of the field than they were in having the high-level conversation about it. If that were the case, the mismatch would be social, not rational.
I certainly don’t know that the above is what actually happened, but it fits my model of human behavior better than your explanation does.
Relsqui:
The thing is, if you ask a physicist to answer a critical question you have about some fundamental thing in physics, he’ll likely be able to point you to the literature where your specific conundrum is resolved clearly and in great detail, or provide such an answer himself. I don’t know what would happen if you came up with an entirely novel question (I sure never did), but from what I’ve observed, I would expect that it would be met with genuine curiosity. Moreover, good introductory literature in physics often anticipates and preemptively answers many objections to the basic concepts that a smart critical student of the subject might come up with. Of course, if you’re being block-headed and impervious to arguments, that’s a different story, but that’s not what I’m talking about.
In contrast, in economics one rarely sees anything like this. The concepts are presented with an air of high authority, and various more or less straightforward questions about their validity that occur to me after some thinking are often left unaddressed. Mathematical models are typically discussed in a bizarre blinkered way that bears no resemblance to the ingenious modes of thought that I’ve learned to know and love from mathematicians and physicists. Even more maddeningly, one sometimes runs into literature written by prominent insiders in the field that points out such problems, but instead of provoking debate, these works are languishing in obscurity. There are many other bizarre things I’ve found in my amateur forays into the field, which could be the subject of a long essay.
Curiously, this is approximately opposite to the trend that Robin Hanson claimed to observe in popularizations. You’re not necessarily talking about exactly the same thing (Robin is talking about what the public prefers to read whereas you’re talking about how the professors react to questions) but it’s an interesting juxtaposition nevertheless. I don’t myself see quite the pure trend that Robin sees—I think plenty of economics popularizations lecture the reader from on high and I know several physics books that labor hard to explain and answer questions, but anyway, quoting Robin:
“Popular physics books, like Carroll’s, act easy and friendly, but still lecture from on high, sprinkled with reverent stories on the “human side” of the physics Gods who walk among us. They grasp for analogies to let mortals glimpse a shadow of the glory only physicists can see directly.”
“The recent popular econ book Superfreakanomics is also excellent, but very different in tone. Also easy and friendly, this is full of concrete stories about particular data patterns and what lessons you might draw from them, or you might not; hey it is always up to you the reader to judge. Such books avoid asking readers to believe anything abstract or counter-intuitive based on the author’s authority.”
So, according to Robin, physics books lecture from on high (based on the author’s authority) while economics books do not.
Meanwhile, your experience is that economists present their concepts with an air of high authority (based on the economist’s authority) while physicists do not.
I haven’t read any popular books on economics, but what you (and Hanson) say about popular physics books sounds right. That’s the reason I don’t like popular books on physics (I am a physicist). The presented ideas are counterintuitive and, at best, the counterintuitiveness is countered by some arbitrary loose analogy, which may be reinterpreted in a completely different way without much difficulty. What is worse, people seem to like exactly this obscure quality of such books.
(Once I had read a popular book by Landau and Kitajgorodskij and I really liked it, but it was about Newtonian mechanics mainly, and I think it included several equations. A book about cosmology, quantum physics or even string theory without any equation at all—save the ever popular E=mc² - can hardly convey any meaningful information.)
Very well said, Vladimir_M—a comment I wish I could vote up twice.
That basically agrees with my experience (mentioned in the discussion you linked) that economists lack a Level 2 understanding of their speciality. That is, they cannot trace the inferential paths they rely on, all the way back to the layman level. In my estimation, this leads them to advocate truly absurd policies, since this poor understanding prevents them from identifying where a model no longer outputs policies justifiable through such an inferential path.
For example, they equate growing GDP with a good economy. And as a general rule, that’s a good measure. But you have to know where the rule breaks down, and this requires a deeper understanding than most economists have. A Level-2 economist would say something like,
“Yes, GDP generally correlates with good economic health, but in the wake of this hurricane, most of that spending is just rebuilding destroyed stuff. Now, it’s certainly better to rebuild, given the hurricane, but this is just restoring the previous level of economic health—the high GDP numbers you see can’t be taken to mean that the economy was somehow improved, in any sense that we care about, as a result of the hurricane striking.”
But we never hear anything like that.
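To make the quoted distinction concrete, here is a minimal sketch with entirely invented numbers (nothing here comes from actual data):

```python
# Toy illustration of the hurricane example: rebuilding spending shows up as
# GDP (a flow), but it only restores the wealth (a stock) that the storm
# destroyed. All figures are made up purely for illustration.

wealth_before_storm = 1_000   # stock of stuff people value, arbitrary units
storm_damage = 100
rebuilding_spending = 100     # counted toward this year's GDP

wealth_after_rebuilding = wealth_before_storm - storm_damage + rebuilding_spending

print("GDP boost from rebuilding:", rebuilding_spending)                        # 100
print("Net change in wealth:", wealth_after_rebuilding - wealth_before_storm)   # 0
```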
As another example, the consensus seems to be that we have to extend sub-zero interest rates to clumsy banks that just revealed themselves to be extremely incompetent, without asking whether those banks actually satisfy genuine consumer desires better than those desires would be satisfied without such a policy.
In contrast, physicists can say, “Why do we make that assumption? Well, because you have to account for these observations, and most of that work is done by these models, which leaves you with …” That’s tracing back to the layman level, and so a Level 2 understanding. If they’re reluctant to do so, then yes, it could be a (less common) Level 1 physicist, but more likely, it’s because they realize it will take a long time to trace out the inferential path.
Unfortunately, high schools don’t show students this path very well, which I’m finding out as I “relearn” the basis of physics from some books I’ve been reading that specifically discuss how these models in physics were discovered (like Atom by Asimov).
SilasBarta:
Yes, that’s a very good remark. This summarizes my frustration with economic concepts very well.
Then you may be interested in this exchange I’m having with John Salvatier, a follower of Scott Sumner, a mainstream monetary economist (MME) who really aggravates me with how he advocates those stupid monetary policies I mentioned, and with his (Sumner’s) very transparent lack of a Level 2 understanding. (This is touched on in my exchange with John, in which he agrees that Sumner’s model [though perhaps not his general understanding] would not be able to give the right recommendations in cases where nominal GDP drops for good reasons, but rather would slavishly try to force it back up, steamrolling over good efficiencies.)
Unlike the popular MMEs, John is able to take the time to cross the (enormous) inferential distance between our positions on economic policy.
That clarifies sufficiently for me to work from the assumption that your interpretation is correct; thank you.
I think your earlier comment invoked an instinct of mine that when someone says “I was in such-and-such social situation, and the other person was doing it wrong!” they have often not examined the possibility of having made an error themselves. That doesn’t seem to be the case in this instance, but I don’t regret having checked. :)
I found the next sentence of Vladimir’s significant:
My model of human behavior includes examples of both the kind Vladimir described and your alternative explanation.
In particular, I expect experts to behave as per Vladimir’s explanation whenever they are weighing in on topics at the fringes of their field. We reliably find that experts overestimate the breadth and depth of their expertise. (Professional gamblers are a notable exception to this rule.)
In the case of economists it is not unusual to find an economist declaring that something will operate in accordance with one of those basic concepts, yet being unable to engage in exploring whether the assumptions of the model are satisfied in the instance at hand (at the risk of discovering that their model is irrelevant). That would amount to surrendering intellectual territory on behalf of the tribe—something that few with the cunning necessary to become considered an authority would do unless absolutely forced.
In the context of Vladimir’s response to James, this seems like a pretty reasonable thing to do. That is, if an economist condemns lay writing, the economist should be prepared to argue that economic theory applies.
To argue that it applies, certainly. I agree with you on that. But I also wouldn’t fault an economist for being unprepared to interrupt or uninterested in interrupting a high-level argument in order to lay the groundwork for the acceptance of that theory (which they presumably have spent some years learning and accepting themselves over the course of their education). One is a conversation about whatever particular situation was being discussed, and the other is a conversation about economics itself; it’s reasonable to me that the economist in question could just have been rejecting the change in topic.
But as I commented just now to Vladimir, I was just being wary of a common error which has more to do with social communication than logic; in practice, it does not now appear that the error was committed.
You’d think that with his economics knowledge he’d be able to do the same thing...
I once asked Robin Hanson a question that occurred to me when watching a video that was half accurate economics and half crankery. He said he didn’t know the answer because he didn’t really know much macroeconomics.
Incidentally, the question was this: If you add up everyone’s cash-equivalent assets (actual cash, checking accounts, certificates of deposit, etc.) and then subtract everyone’s debts (mortgage balances, corporate and government bonds, money a bank has to pay to depositors, etc.) is the total positive or negative? In other words, if every last dollar-denominated debt or other debt-like obligation was paid back, would there be any money left in the world?
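To pin down what is being added and subtracted, here is a minimal sketch of the netting the question asks about; the categories follow the question, but every figure is invented:

```python
# Hypothetical illustration of the question: sum cash-equivalent assets,
# subtract dollar-denominated debts, and look at the sign of the result.
# All numbers are made up for illustration only.

cash_equivalents = {                      # in billions of dollars (invented)
    "physical currency": 900,
    "checking accounts": 1_100,
    "certificates of deposit": 1_500,
}

debts = {                                 # in billions of dollars (invented)
    "mortgage balances": 10_000,
    "corporate and government bonds": 15_000,
    "bank obligations to depositors": 2_000,
}

net = sum(cash_equivalents.values()) - sum(debts.values())
print("Net position (billions):", net)    # negative with these invented figures
```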
Interesting, and indeed fundamental question—if your preferred economic system isn’t capable of handling everyone being debt free (or you don’t know how it would work that way), then there is a serious problem.
This is a different question from whether people should go into debt, or whether there can be Pareto-improvements through debt transactions; it’s asking whether you hit some kind of (small-s) singularity as a result of nobody being in debt, due to an extreme preference for being debt-free.
We know for a fact that there have been economies without this kind of debt—they’ve all been primitive, of course, but they sustained the existence of humans, without anything bizarre happening. So a Level 2 understanding of economics needs to be able to handle these kinds of economies without having to treat them as a super-special case.
My initial answer to the question is that, well, sure it has to be positive because the money has to be paid to someone, and some people are cash-positive after paying their debts. But then, it may not even be possible for everyone to pay off debts—it could be that attempting to engineer this will necessarily bankrupt numerous people.
And no doubt, if everyone were brainwashed by Dave Ramsey, the government would still probably try to protect banks’ inalienable right to a loan market, despite the fact that no one wants loans—and these policies would be promoted by the same brainwashed people!
I’m pretty sure it will be negative. Many long term debt instruments are ultimately backed by the promise of future productivity. Debt is essentially a mechanism for transferring wealth from the future to the present. This is why debt for investment in production is generally good for an economy and debt for consumption (most consumer debt) is less good.
Cash and cash-equivalent assets represent a small fraction of the actual wealth in the world. Wealth exists in the form of actual things of value (houses, factories, human capital, etc.). Cash is just a medium of exchange, and there is no particular reason for the ratio of cash to wealth to be constant. Long-term debt is backed by currently existing wealth (secured debt like mortgages) and by the promise of future wealth (corporate bonds, etc.). It would be a very screwed-up economy where cash on hand exceeded the value of debt; in fact, this would probably be a sign of the apocalypse.
[ETA] The US national debt already exceeds M2.
Good points, but I would hasten to add that there have been economies with money where that calculation would come out positive, but which weren’t headed for an apocalypse.
Also, the government doesn’t have to pay off its debt immediately, and money could change hands numerous times, each time yielding a taxable event.
True, but the government also has to pay interest on its debt. Anyway, this gets away from the original question, which is whether the total value of cash and cash-equivalent assets exceeds the total face value of all debt (public and private). If you make this question slightly more specific and talk about the value of US$-denominated cash and debt, then the answer is ‘no, not by a long shot’. Or, in the phrasing of the original question, you get a very negative number: total debt ~$52 trillion, M2 ~$8.7 trillion.
This is not really a problem though. The debt doesn’t all come due at the same time for a start. Some of it will default (well, probably a lot of it actually). Some of it will be refinanced. What is a problem (and what caused problems for banks during the financial crisis) is when a bank has a maturity mismatch between its assets and liabilities or when there is a sudden and unexpected change in the estimated probability of default of the debt it holds (and so a sudden change in the net present value of that debt).
Looking back, it seems that Crono was asking two questions that only appear to be the same:
One question is answered by comparing some accounting figures. The other is asking us to imagine a counterfactual in which everyone goes through and pays off debts. You’re answering the first one, while I was answering the second.
And in that case, there could be, e.g., only $X in cash instruments and $10X in government debt, and yet the debt could still be paid off: say one year people earn $3X in wages and pay $0.5X in taxes, and then the government pays off the $0.5X of debt that just matured. Interest could kick in, so long as it is less than this amount. The people who get the money from the maturing bonds can’t make loans with it and so spend it, generating wages, generating tax revenue, generating reduced government debt, and so on.
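A minimal sketch of that circulation story, using the $X and $10X figures from above and ignoring interest; the yearly flows are the ones stated in the comment, nothing more:

```python
# Sketch of how a money stock of X can retire government debt of 10X over
# many years: each year 0.5X of taxes is collected out of circulation and
# used to pay off maturing debt, and the repaid bondholders spend the money,
# putting it back into circulation. Interest is ignored for simplicity.

X = 1.0
government_debt = 10 * X
years = 0

while government_debt > 1e-9:
    taxes = 0.5 * X              # collected from wages of roughly 3X per year
    government_debt -= taxes     # pays off the debt maturing this year
    years += 1                   # repaid bondholders spend, cash recirculates

print("Debt retired after", years, "years")   # 20 years with these figures
```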
Oh, absolutely, I’m not disputing that the debt can theoretically be paid off in full—that’s exactly what I mean when I say it’s not really a problem that the accounting figure is negative. If that weren’t the case we’d be in real trouble. The idea that cash needs to match the face value of debt is a confusion in a debt-based economy like ours.
Understood. So do you agree Crono really is asking two very separate questions, even though it may seem like they’re the same?
To be honest it didn’t even occur to me that he’d confuse those two issues but I’d agree that it looks like he may have done. It did puzzle me when I first saw the question why Robin Hanson wouldn’t have just said ‘negative’.
I would like posts about some common ways people go wrong about economics, as well as the psychological/social reasons why (along the lines of the stuff in Caplan’s “Myth of the Rational Voter”).
I’ve been fuming for the past two years about general irrationality in discussions of economic policy that reaches up to the very highest levels. The general failure mode I think is going on is a case of Lost Purposes, in which (even the elite) economists cannot ground their policies in how they help “the economy” in any sense that we actually care about.
Here’s my blog post on the theme of lost purposes in economics. Here’s a previous LW discussion started by cousin_it’s question and one of my rants on the topic.
So some of you may be interested that occasional poster “jsalvati” (John Salvatier) and I have been trying to Aumann-reconcile our disagreement on this as it pertains to monetary policy. It’s coming up on two months now!
Edit: I’ve been hesitant to direct these investigations toward a LW article because it’s an inherently political topic.
The thing is, I don’t trust a “random” blogger more than I trust elite economists, especially on somewhat technical subjects like monetary policy.
I’d rather know about more basic stuff that all economists agree on, rather than people challenging the orthodoxy, or arguing about which school of economics is the bestest.
Your discussion with jsalvati does look interesting.
I find myself in disagreement on both points:
But isn’t the internet already filled with people pointing out simple economic fallacies, based on consensus economics, and failing to convince the people committing them? I don’t think that’s a shortage LW needs to correct.
Also, in terms of the importance of the issues, debate about how to fix our current economic problems ranks very highly. Everyone is arguing about “how we can get people to start spending again”, while I claim that this very debate is predicated on a lost purposes fallacy. There are many people who struggle with the issue of, “gee, I’m not sure I want this, but shouldn’t I be spending to help the economy?”, and giving a sensible, well-grounded explanation of this lost purpose is exactly what they need, so it’s not just about policy, but about individual optimizing.
How consistently do you apply this? This position seems to mean that no one here should be trying to research a difficult, irrationality-laden issue and present a coherent exposition of it from a rationalist perspective—because they’re just a random blogger.
If the skills taught on this site don’t (at least potentially) allow you to identify failure modes in discourse on important issue, what good are they?
That’s not what I’m saying. It’s just that if some experts hold a position A, and I randomly meet someone on the internet who holds position B, and I myself don’t know enough about the subject to follow all the arguments—then I won’t update my own position towards B very much.
I’m trying to guard against the danger of giving too much importance to arguments “from close to us”, from family, friends, people we meet on the internet—I think that’s one major reason for people believing a lot of wrong things.
Of course, this doesn’t beat actually learning about the subject. But I’ll avoid explanations that start with “A lot of experts are wrong about this …”.
Do you think LWers should adopt a general policy of avoiding criticisms of expert opinion, even if well-researched (e.g. most of what Robin Hanson does)?
I don’t think we could (or should); many of the common opinions here are in opposition to widespread expert opinion. (e.g. the Copenhagen Interpretation still seems to be dominant among physicists; most scientific research uses frequentist statistics; most cryogenicists/cryobiologists reject cryonics (in public at least)...)
Certainly not!
However, for topics on which I myself am not informed enough to judge experts, I prefer criticism of non-expert opinion (i.e. “common misconceptions”), or explanation of expert opinion.
For example, in economics, I’m interested in information on the ways uninformed people (voters, journalists, some stockholders) are commonly mistaken, or in explanations of the various schools of economics and their points of agreement and disagreement, which is what I meant in my answer to James_Miller.
Arguments about why school of economics X is wrong are also possibly interesting, but I won’t gain as much from them until I’m more knowledgeable about economics. So I’m not as interested in seeing a bunch of those on LessWrong.
Understood, but I’m trying to find the distinction (if it exists) between your personal preferences vs. what you think is appropriate for LW in general.
(Some folks here have a hard time with the difference, as we’ve seen in the past. You probably don’t, but I want to make sure we’re on the same page anyway.)
Me too.
No, I want one econ post per correct, substantially unique angle found, which probably adds up to about a dozen, some of which have already been done. All things, including topics, in moderation.
Proposal: Have this be another site that everyone can read, with hidden user names and no effect on karma, but that only people with a certain amount of karma on LW can post on.
This way we’d get the filtering provided by good LW users, while being unable to affect LW back in any way I can see. And my intuition says it had a few more advantages as well, but I forget what those were.
Please expand and improve this proposal with more suggestions, subsuggestions, and metasuggestions!
Comment that helped inspire this idea: http://lesswrong.com/lw/2qi/less_wrong_should_confront_wrongness_wherever_it/2nmw?c=1
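A minimal sketch of the access rule being proposed; the threshold, function names, and behavior details are my own assumptions for illustration, not anything specified in the proposal:

```python
# Hypothetical sketch of the proposed sister site: everyone can read, only
# accounts above some LW karma threshold can post, author names are hidden,
# and votes there never feed back into LW karma. All names and values invented.

KARMA_THRESHOLD = 100   # assumed cutoff; the proposal names no specific number

def can_post(lw_karma: int) -> bool:
    """Posting is gated on LW karma; reading is open to all."""
    return lw_karma >= KARMA_THRESHOLD

def display_author(lw_username: str) -> str:
    """User names are hidden on the sister site (lw_username deliberately unused)."""
    return "anonymous"

def lw_karma_change(votes_on_sister_site: int) -> int:
    """Activity on the sister site has no effect on LW karma."""
    return 0

print(can_post(150), display_author("some_user"), lw_karma_change(7))
```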
You would, however, lose a big part of the feedback on individual posts. When I get downvoted I mostly think hard about why I was downvoted and make an effort to improve, and perhaps even correct myself.
Upvotes seem less informative; they can mean “good argument”, “right argument”, or even just “I loled at that”.
This is a bit odd. The possibility of me improving seems to be higher when someone who knows what they are talking about says “do it like this” rather than “don’t do it like that” (without following it up with a “do it like this”).
This must be because posters infrequently downvote posts into the negative. They are far more liberal about upvoting a post to 1, 2, or perhaps even 3 points (depending on how many eyes skim it).
Well, this would only include posters who have been on LW long enough to know these things already, and they’d still be on normal LW and keep getting feedback there.
Risky. We could perhaps survive some discussion on public policy without any damage, but after a threshold was crossed we would just start fracturing into blue and green teams.
However, what we need to do is analyze how many recruits we would lose by taking a stand on a certain issue (and, even more damaging, how many non-rationalists on “our” team we would attract), and compare the utility lost from future less-rational behavior to what is gained.
You confuse having information with convincing other people to believe in your analysis.
No. Just no.
The only time you can ignore signaling is when your positions just happen to match good signals. You can afford an occasional action that sends bad signals, but things like policy or ideology or even positions need to be couched in nice signals.
Also, there is no consensus on what counts as a vital social norm. I think we can agree that avoiding common random killing sprees would perhaps fall into this category, but even that, if surrounded by the right memetic scaffolding, could be made to work (perhaps a “kill the guy you hate day” could work). Social stability and prosperity are mostly polygenic.
Any talk of social norms is always a compromise between people of different values. The exact point of compromise depends heavily on its costs and benefits; however, a debate about this can never be had without bringing values into the discussion. And people have an incentive to propose bad policy, to the point of outright deception, when it is in accordance with their values.
Kill likely-to-succeed AGI creators who haven’t created a sane goal system (when no other means will work to stop them). Although I know Tim doesn’t accept even that exception.
Me? Yes, those who go on a programmer-killing spree are unlikely to be viewed favourably by me. I don’t think all murder is impossible to justify—but prospective killers would need to be really, really convincing about their motivation in order to avoid being black-balled by the rest of society.
“I have this paranoid fantasy about their mechanical offspring taking over the world”—is the type of thing that would not normally be regarded as adequate justification for killing people.
How many, over the decades, have fallen under “likely to succeed” (e.g., according to scientists/“experts”, investors, project leaders, etc.)? Whose estimate gets used, anyway?
None.
Whoever is making the decision. That’s how decisions work. Said person would use whatever information is relevant to them. They will then decide whether they need to take action to prevent the destruction of all things good and light, or whether they will take action to stop someone who, they believe, intends to kill out of paranoia.
There’s too much error out there to confront it all.
Stick to cases where there’s something generally interesting to say.
I approve of this norm and voted the post up. That is a high form of praise since I evaluate all ‘should’ claims that apply to anything I care about to a standard that is an order of magnitude more rigorous.
Very well said.
This is generating a lot of discussion about what Less Wrong’s norms should be, along with proposed norms. I like this one and this one, for example.
Well, the norms of Less Wrong are currently scattered between a large number of previous posts, comments, and wiki pages. It is very difficult for a newcomer to find out what they are, and easy for long-time participants to forget about some of them.
Therefore, I propose gathering all of Less Wrong’s policies and norms into a top-level post. I also think it’s very important that the contents of that post be representative of the opinions of the Less Wrong community. With that in mind, I’m going to make a thread for gathering proposed Less Wrong norms and links to pre-existing norms. One week later, someone (not necessarily me) will take every comment which is above a score threshold, which contains a concise description of a norm, and which has not had any serious objections raised against it, add some non-substantive introductory and connecting text, and post it for posterity.
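As a sketch of the selection step just described (the threshold, the data fields, and how “concise” and “serious objection” get judged are all assumptions on my part, not anything LW provides):

```python
# Hypothetical sketch of the proposed filter: keep comments that are above a
# score threshold, concisely state a norm, and have had no serious objections
# raised against them. Structure and threshold are invented for illustration.

from dataclasses import dataclass
from typing import List

SCORE_THRESHOLD = 5   # assumed; the post does not name a specific threshold

@dataclass
class NormProposal:
    text: str
    score: int
    is_concise_norm: bool        # judged by whoever compiles the post
    has_serious_objection: bool  # judged from the replies

def select_norms(proposals: List[NormProposal]) -> List[str]:
    return [
        p.text
        for p in proposals
        if p.score >= SCORE_THRESHOLD
        and p.is_concise_norm
        and not p.has_serious_objection
    ]

examples = [
    NormProposal("Confine polarizing topics to a dedicated subreddit.", 12, True, False),
    NormProposal("Long meta-discussion without a concrete norm...", 9, False, False),
]
print(select_norms(examples))   # only the first survives the filter
```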
I am going to wait until Thursday evening and then post the norm-gathering thread and selection criteria. I’m waiting because the comments on that thread are likely to be quoted somewhere important, and so I think they should be the refined products of discussions that have already happened, rather than first drafts.
So, discuss away, here and in the meta thread. And know that these discussions are not hypothetical.
It is extremely unlikely that I will approve of such a post nearly as much as I approved of this one. In fact, it is not unlikely that I will wholeheartedly oppose it. Creating a list of rules like this is a step in totally the wrong direction. Further, discussions in some thread will operate by a different mechanism than looking at how all the norms fit together in a non-ideal context. Signalling concerns usually prevent such rules from being practical. That is, rules tend to be created that presume a naive understanding of the way systems work.
Whenever you create a system of rules (or “up-voted norm conversations posted for posterity”) they will be gamed.
I have come up with an experiment (well, an article topic) which I hope will shed some light on whether Less Wrong can handle talking about politics, and what sort of approach to use when doing so. I’m going to delay asking people what they think norms should be until after the results of that have been seen. I think the important thing is to only pull sideways, and not argue for something that people can map to a particular party or entrenched position.
I’ll test in #lesswrong to make sure I haven’t misjudged its potential for controversy, and hopefully have it ready to post on Wednesday or Thursday evening.
Or that perhaps those sides are extremists while some optimal balance between the positions is required. E.g. “vitamin supplements are useless and evil” and “you will probably die unless you have some” are both unlikely to be “correct”.
(OTOH, perhaps you meant “which is correct” in a sense that includes other possibilities besides the two opposing ones, in which case, never mind, carry on. ;-) )
I think that we definitely should find some problems to tackle that are not minefields of bias and confusion, but are still challenging.
I think these should be within a field that is mature enough that there will be hard open problems, but also problems which look hard to a novice, but have definite answers to an expert.
To keep us interested, it should also be something that is more practical, so that fixing the big things or learning how to do the everyday things will help ourselves or others.
Fine, I’ll be the first to give directions to the minefield.
Is that the kind of post you wanted?
I don’t understand what would make you think that this post was intended to repeal the no current politics rule. None of the examples cited were specifically political.
This:
Politics is important (with high utility costs), the public discourse is wrong, it hasn’t appeared here before. It’s not entirely obvious that discussion on LW would bring clarity, but probably it would. Why do you think politics does not qualify as a topic endorsed by the OP?
Let’s say we talk about politics for a while and come up with a concrete proposal. The utility of our work is the product of (utility of proposal if implemented) and (probability of successfully implementing proposal). Even if the first number is very high, the second probability is vanishingly small.
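Spelling out that product with purely illustrative numbers (neither figure is an actual estimate):

```python
# Toy numbers for the expected value of political work on LW: even a very
# large payoff is swamped by a tiny probability of implementation.
# Both figures are invented for illustration only.

utility_if_implemented = 1e9          # hypothetical payoff of the proposal
probability_of_implementation = 1e-8  # hypothetical chance LW's proposal is adopted

expected_utility = utility_if_implemented * probability_of_implementation
print("Expected utility:", expected_utility)   # 10.0: small despite the huge payoff
```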
This means that politics is not important as a Less Wrong topic.
Under your definition of importance, agreed. However, for reasons which are clear from the context, I have used the definition of the original post:
Just because there’s always been an explicit ban on it; the OP didn’t propose repealing that ban, and none of the examples he cited were explicitly political.
Personally, I’m undecided on whether political discussions would be a good idea (they should, at a minimum, be confined to a subreddit, imho); I’m just confused why the commenter seemed to think that’s what the OP meant when he didn’t say anything about it.
Personally, I don’t endorse political discussions either; however, I find CronoDAS’s interpretation reasonable, for reasons I have explained in the parent. Do you dispute that wrong beliefs about politics have high utility costs, or that a discussion on LW would probably bring clarity? Or do you think that anything short of an explicit and specific demand to lift the ban cannot be interpreted as a proposal going in that direction?
No
Maybe.
If LW could come up with some way to counter the mind-killing effects of politics, and develop a mechanism for reliably, repeatably holding discussions on politically charged issues without them devolving into irrationality, well then that would be awesome—LW would have done a great service to mankind in developing such a system. I’m just skeptical as to whether that’s possible, since it’s been tried so many times before, and so far, never succeeded. I’m definitely not saying we should give up—it’s certainly a noble goal—I just think we should be very careful about letting a failed experiment in this direction have a corrupting influence on the successful elements that LW has been able to build so far.
I agree.
I agree that this conversation should happen on Less Wrong, but I avoided mentioning it in the article because I think it should be kept separate. So bring it up in the open thread. Or bring it up in a top-level post, but only if you have an angle not present in the general public discourse, one which clarifies rather than confuses, and which says something important other than or in addition to just signaling affiliation with one party or the other.
Snap judgment: Rush Holt, no; effect of tax cuts on government budgets and national economy, yes.
But my instinct is to support rationality case studies.
… except when it comes to group-serving bias?
This post needs a counterpoint link to: someone is wrong on the internet {link updated}.
It is better to link to the original archive page than to the image, and better to link to the creator’s site than to an unaffiliated site.
The wrongularity—a point of infinite wrongness, from which right cannot emerge—must be near.
Yeah, it’s in or nearby Los Angeles, in fact.
Have you heard my new band, Wrongularity?
Different topics differ in importance by many orders of magnitude, and in how easy it is to say something new and persuasive about them. That suggests we should take great care to pick our battles, unless the point is to draw in new people by feeding off popular controversy—but that doesn’t seem to be your aim here.
From my observation and first-time experience of a thread in a specific category, the article was noise and provided little signal. Statements were voted down by those who may lack objectivity, or provoked downvoting because they challenged participants’ own belief systems.
I call into question the entire thread’s bias, which in and of itself challenges the premise of the Less Wrong objective.
Less Wrong is wrong in this category. I am not even an expert in this field and can see the fallacy, which compels me to draw experts here to deal with this accordingly. Nevertheless, it begs the question: how wrong is “Less Wrong” everywhere else?
Less Wrong pursues the objective of being clear, when confusion is what it facilitates in this instance. However, life is messy, and innovation and growth are built through creative destruction.
Given this type of groupthink and self-contained thinking, Less Wrong may consume itself into a black hole of never-ending knowledge pursuit and implode upon itself, hardly making the progress it hoped to achieve.
This comment is being made as an effort to provide feedback.
Problems create new problems, like from like. Human evolution, according to Popper, problem-solves, thereby generating temporary solutions, which reveal errors and omissions, and a new set of problems.
Because of your posting style, I’m having trouble understanding what you’re trying to say. And looking over your old posts, I don’t think what you have to say is any different from gibberish. Have a nice day.