Why don’t these writers post or at least cross-post on LW? I would really prefer that they did, for these reasons
It would give their posts more visibility and hence more comments and discussions. (I often learn more from the comments sections of a LW post than the post itself.)
I don’t have to learn a new commenting system (get a new login and learn the markup/formatting, threading, and voting schemes).
I think the LW commenting system is generally better than that of any other blog I’ve seen.
If I comment on their posts, my comments can be backed up and searched along with all of my other LW comments.
I’m more motivated to comment (and to spend more effort on my comments) since my comments will be seen by more people, and I’m less worried about my comments disappearing when their blogs stop getting maintained.
Does it also have something to do with identity and affiliation? If so, maybe that’s another reason to try to make people think of LW in less identity-related ways, or perhaps make the LW identity smaller / more inclusive somehow? (I don’t know and I’d very much like to hear from one or more of these writers.)
Less Wrong requires no politics / minimal humor / definitely unambiguously rationality-relevant / careful referencing / airtight reasoning (as opposed to a sketch of something which isn’t exactly true but points to the truth). This makes writing for Less Wrong a chore as opposed to an enjoyable pastime.
If LW feels like work then it’s creating an ugh field. I’ve noticed this when I saw someone treating reading HPMOR like homework instead of as reading for pleasure.
The main/discussion split was supposed to help alleviate this but I feel like too many are treating discussion as only appropriate for rough drafts of main material.
If LW feels like work then it’s creating an ugh field.
Not necessarily.
Posting to my own blog is kinda the same way for me sometimes. It hasn’t created an ugh field, and I have no problem thinking about it, but I end up with a huge backlog of posts with the main idea spelled out but not understandable to others.
A couple partial solutions come to mind:
1) When you see a post that looks like it belongs on LW, tell the author. This gets rid of a lot of the uncertainty about whether LW would appreciate it or whether it’s too off-topic/speculative/whatever.
2) If there are posts (or even unwritten posts) that aren’t yet LW ready but sound promising, offer to help edit them. This takes a lot of the burden off the author to do all the work, and often an extra perspective makes the process more efficient.
3) Make it somehow more rewarding. Make sure to upvote posts you like, make sure you don’t shame people whose posts you don’t like if they sincerely thought they were contributing something you’d like, make sure you treat them how you’d treat a friend that did you a favor rather than a paid worker, etc.
When you see a post that looks like it belongs on LW, tell the author. This gets rid of a lot of the uncertainty about whether LW would appreciate it or whether it’s too off-topic/speculative/whatever.
This. Being invited feels good.
Alternatively, if an author is not sure whether their article is LW material, they can ask in the Open Thread.
Less Wrong requires no politics / minimal humor / definitely unambiguously rationality-relevant / careful referencing / airtight reasoning
I agree with the “no politics” bit, but I don’t think the rest are correct. I’ve certainly had “sketch of something that isn’t quite true but points in the right direction” posts with no references and unclear connections to rationality promoted before (example), as well as ones plastered with unnecessary jokes (example).
I agree. The overwhelming LW moderation focus seems to be on stifling bad content. There’s very little in place to encourage good content. Even the thumbs-up works against good content. Before the thumbs-up, on OB, people would leave appreciative comments. It’s much more rewarding to read appreciative comments than it is to look at your post’s number (and probably compare it unfavorably to other recent posts’ numbers...)
On social media sites like Digg or reddit, it’s not a big deal to submit something that people don’t end up liking ’cause it’ll get voted down/buried and consequently become obscure. On LW, submitting something people don’t like amounts to publicly making a fool of yourself. Since it’s hard to predict what people will like, folks err on the side of not posting at all.
I think the ideal solution is probably something more like Huffington Post or Daily Kos. I’m not 100% sure how those systems work but they obviously work pretty well.
I agree that it is pretty rewarding to get appreciative comments, but it unfortunately also lowers the signal/noise ratio, since everyone ends up having to read said appreciative comments (rather than the target recipient).
I’d actually argue that in most cases, keeping the signal/noise ratio high is much more important than increasing the sheer number of good posts. Ideally of course we could do both...
I agree that it is pretty rewarding to get appreciative comments, but it unfortunately also lowers the signal/noise ratio, since everyone ends up having to read said appreciative comments (rather than the target recipient).
Yeah, this is a downside.
I’d actually argue that in most cases, keeping the signal/noise ratio high is much more important than increasing the sheer number of good posts.
Why’s that? Right now, for instance, it’s easy to create an arbitrarily high-signal version of LW by going to http://lesswrong.com/prefs/ and changing the thresholds for minimum post/comment scores. Wouldn’t it make more sense to let users use this mechanism to decide their own signal/noise threshold rather than enforcing one sitewide? (With a sensible default for non-logged-in users.)
If the absolute number of good posts goes up, that’s more knowledge for LW’s collective consciousness, that we can link to or mention to one another at times when it seems relevant. LW covers a wide variety of topics, so a greater number of absolute posts also means a greater number of posts that discuss topics relevant to any particular user.
As far as I can tell the sequences have a high number of good posts but a pretty low signal/noise ratio. My guess is that to get good posts, writing a lot and then seeing what sticks works well.
On the other hand, the current LW might work well for people who want to make important but difficult-to-understand points.
Right now, for instance, it’s easy to create an arbitrarily high-signal version of LW by going to http://lesswrong.com/prefs/ and changing the thresholds for minimum post/comment scores.
Uh, did not know that. Thank you!
I still have a caveat about posts which are extremely good at getting upvotes having a tendency to be shallow in content, but I think that overall, you are correct.
I think people like having a space where they can develop their own identity / brand. Maybe just encouraging the posting of links w/ block quotes to the discussion section of LW would be an adequate solution?
I think people like having a space where they can develop their own identity / brand.
Internalizing gains is desirable, so it’s no surprise that people (such as me) might principally locate their posts on their personal sites or blogs rather than posting them here.
But it doesn’t necessarily explain why they don’t cross-post. I haven’t found that to be much of an obstacle.
I wanted to make a few posts on my blog before linking other people to it (in case I bailed early, as part of the general pattern of doing things before talking about doing them, etc.) but I intend to cross-post most of my posts to LW discussion. (And I did link the blog itself to LW.)
I rather like the idea of people creating separate locations without cross-posting. I’d hate to see the rationality community become tied to a single possible point of reputational failure.
I find it interesting that several comments in the LW for Women threads, particularly those by Nancy Lebovitz, indicate support for a wider range of rationality venues.
It’s also better for community cohesion if we all post in the same place.
Since they could easily post here and receive all the benefits you listed and more, my perception of the problem is not that they fail to realize there are multiple benefits to posting on LessWrong rather than their own blogs, but that there’s something about LessWrong that makes it an undesirable place to post.
My top guess on that, based on the talk that I’ve seen going on around the place, is fear of criticism / rejection / negative karma. See also.
Is community cohesion something to aim for? Ideally, rationality should be the baseline, not the marker of our particular community. As recent threads on gender have suggested, there’s room for quite a few different communities which share core principles. Ideally, lesswrong would be one of many places where rationalists of different stripes feel comfortable. Of course, the question is whether it’s better off huddling together for the moment in hopes of reaching critical mass or if we’re already past that phase.
Community cohesion may be critical to the success of the rationalist movement.
That was a good question. The reason cohesion is important is that one wants to avoid a “divide and conquer” circumstance. If nobody feels comfortable posting on LessWrong and people go and post on 100 other blogs instead, it’s possible that the movement will simply die out.
Of course, that could have a benefit, too: if 100 blog authors successfully advertise their blogs, it’s possible that they’d get a lot more readers than if they were to post solely on LessWrong.
I also have a deeper point to make about community cohesion which I think will be hard to see due to two kinds of bias. To make the point easier to see, I will briefly mention those biases. The first is optimism bias. The second is mind projection fallacy.
Why optimism bias is relevant here:
Not only are people known to believe more optimistic things than they should, they also find it hard to adjust their perspective when their optimism is confronted, especially when it would be emotionally devastating to do so. This may be one of those ideas with the potential for devastation. Therefore, it might be especially hard to overcome optimism bias in this case and view the following point objectively.
Why mind projection fallacy is relevant here:
If you, yourself, have ability level X at something, you are likely to assume that most others have ability level X as well, and may even find it hard to imagine or believe that most others do not. Most people have not thoroughly researched the abilities of the average person, and have no idea what those are. I’ve met countless gifted people who, upon encountering someone with less ability than themselves in some area, mistake that lack of ability for laziness or malice, or simply assume that the specific person was unusually stupid. This happens frequently even when the person in question actually has normal abilities. It’s likely that the average LessWronger is gifted. It’s also likely that something like half of them do not even realize they are gifted.
For those two reasons, a lot of people in this movement are likely to be vastly overestimating the average person’s ability to become rational and their interest level in doing so. To some extent, interest in rationality and ability to wield it are things that can be increased. However, it would be illogical to assume that just because interest level and ability can be increased, they can be increased adequately in a large enough segment of the population for rational thought to become the norm. The stated goal of “raising the sanity waterline” is, of course, both feasible and worthwhile—but part of that is because the phrasing implies no specific amount. Making rationality the norm is a completely different goal. It is extremely worthwhile, but is many times more difficult.
I am not aware of any studies that have been done to specifically determine the average person’s capacity for developing rationality. Nor am I aware of any studies on methods to teach rationality that are designed to determine how much difference the best teaching strategies make to one’s capacity for and interest in rationality. I doubt anybody knows where the average person’s limits are. I think it would be hard for most members of LessWrong to imagine having a brain that hurts when you try to think rationally because it is too hard, having no interest in rationality whatsoever, or being too irrational to even understand why rationality is good, but (as a person who has read a lot about psychology) I’m sure that all of those things do happen and that they’re fairly common. It would be folly, in my opinion, to assume that the average human:
1.) Has a brain capable of thinking rationally.
2.) Has a brain that learns rationality easily enough that the benefits outweigh the costs.
3.) Has a brain that rewards them for rational thought. (Mine rewards me for rational thought, but it seems to punish me for doing math, and it took about a decade for me to get to the point where it quit hurting when I attempted to spell correctly.)
4.) Has adequate time, energy, resources, discipline, sanity, opportunity, etc. to put themselves through the rigorous mental training it requires to achieve adequate results.
5.) Has the mental stamina to reach a state of consistent performance. I bet most are able to do things like detect a bias a single time, when asked, just like most can lift a five pound dumbbell, a single time, when asked. But to be rational requires detecting most biases most of the time, and that is quite another matter. It may be that it takes the mental equivalent of an Olympic champion to be able to develop the stamina to lift those weights all the time.
The reason I bring this up is not because I’m a cantankerous misanthrope. It’s because of what I know about human abilities due to the developmental psychology research I’ve done. For a quick glimpse into my perspective:
Learning language is not a trivial task, but we have been doing it since before civilization, without schooling or a movement. Humans are prodigious among animal species when it comes to language development. Almost all of us get to enjoy a huge vocabulary with many thousands of words. It’s just natural for our brains to learn a language. Language is a great example of what happens when our brains are designed to do a particular task. We say things like “Humans are different because they’re capable of rational thought.” Then why don’t we learn to be rational as readily as we learn language? If it’s natural for all of us, why didn’t we learn to be rational most of the time thousands of years ago?
Some abilities are attributed to “humanity” because we don’t see any, or many, examples of them in other species… but that’s different from saying that they’re common to humans. Rocket science, for instance, is an ability that no animals have, but that most humans do not have, either. We say that humans are different because of things like rational thought, but rationality may still be similar to rocket science in that many humans are not able to do it.
[Edited to remove a certain paragraph.]
I don’t know whether this obstacle is best identified as “lack of interest” or “it’s unnatural to learn because a lot of people’s brains aren’t designed for it”, but there is obviously some obstacle that has been holding humanity back the entire time we’ve existed. There may be lots of different obstacles. It is quite possible, in my view, that the average human will find the obstacles insurmountable without something like brain augmentation or genetic engineering.
In the event that there are a limited number of people who will be both interested in and successful at attaining a rational state, the last thing we want to do is have them divided up all over the place. To have the greatest possible impact, we need to stick together.
That’s why community cohesion is likely to be critical to the rationalist movement.
For archival purposes, the paragraph was:
Even with ethics—which is something that most people can learn—it has taken thousands of years of civilized living just to get to the point where slavery is abolished, women can vote and gay people have begun making progress on getting rights. I am not even sure that thinking rationally most of the time is a state that is attainable to most people, but even if it is, it’s possible that establishing a norm of rational thought among humans would require a time period similar to that of mass ethical behavior...
Some people will dislike LW for various reasons. For example, they don’t like talking about superintelligent machines and the Singularity, because it feels cultish. Or they think that rationalist talk is cold, or that the LW environment is hostile to women. Or whatever else. These people will ignore LW and everything that is here, so it would be nice if the good ideas were also available somewhere else.
(For similar reasons I think separating SI/MIRI and CFAR was a good idea. If you want to convince people of the usefulness of rational thinking, starting with the singularity is often not a good strategy.)
I agree, but make a distinction between thinking it’s bad that too few people are posting on LessWrong while many are posting in a billion other places instead versus thinking that there should not be multiple groups of rationalists.
I am all for multiple groups of rationalists. What I am not for is this community scattering across 100 different blogs.
My interpretation of the article is not that he’s saying that gauging moral progress is impossible, but that you can’t gauge it by comparing the past with the present. It has to be gauged against an ideal, but the ideal has to be carefully chosen or else the ideal may be useless or self-defeating. I’m sure such an ideal can be constructed (maybe someone has already constructed one that’s widely considered to be acceptable—in that case I’d be interested in finding out about it) but since I’m not aware of a widely accepted ideal against which progress can be gauged at the moment, I’d like to focus on other parts of this. [Edited last statement to remove a source of potential conversational derailment.]
This does inspire me to bring up interesting questions, though, like:
Do I know enough about our past history to know whether it was previously better or worse? What if most native American tribes respected women and gays and abhorred slavery before they were killed off by the settlers? Not to mention the thousands of civilizations that existed prior to these in so many places all over the world.
Might we be causing harm in new ways as well as ceasing to cause harm in other ways, moving backward overall? Even though Americans can’t keep slaves, they do get a lot of their goods from sweatshops. The prejudice against gays may be lessening, but has the prejudice against Middle Easterners increased to the point where it cancels out that progress? Women got the right to vote, but shortly before that, children were forced into the school system. The reasons I view this school system as unethical are touched on (in order) here and here.
I wonder if anyone has done thorough research to determine whether we’re moving forward or backward. I would earnestly like to know. It’s a topic I am very interested in. If you have a detailed perspective on this, I’d be interested in reading it.
For archival purposes, the source of potential conversational derailment was:
One ideal against which we could gauge moral progress without it being useless or self-defeating if taken to the extreme would be “Causing less suffering and death is good.”
One ideal against which we could gauge moral progress without it being useless or self-defeating if taken to the extreme would be “Causing less suffering and death is good.”
Well, the most straightforward way to judge success along this metric is to compare the amount of suffering. The problem with this metric is that the contribution of technological progress will dominate any contribution from ethical progress.
Might we be causing harm in new ways as well as ceasing to cause harm in other ways, moving backward overall? Even though Americans can’t keep slaves, they do get a lot of their goods from sweatshops. The prejudice against gays may be lessening, but has the prejudice against Middle Easterners increased to the point where it cancels out that progress? Women got the right to vote, but shortly before that, children were forced into the school system.
Furthermore, it’s not a priori obvious that the contribution to less suffering is what you think it is in any of the examples you listed. It’s possible that the people working in “sweatshops” are better off there than wherever they were before; this in fact seems likely, since they chose to work there. It’s possible that our modern attitude towards gender roles and sexuality is causing more unhappy marriages and children growing up in bad homes and thus increases suffering; conversely, maybe our attitudes towards gender are correct and our prejudice towards (Muslim) Middle Easterners is encouraging them to adopt it, and thus our prejudice is reducing suffering on net. As for the right to vote, well, there’s a slight positive effect from making women feel empowered, but the main effect is who wins elections, and whether they make better or worse decisions, which seems hard to measure.
My point is that doing these types of calculations is much harder than you seem to realize.
My point is that doing these types of calculations is much harder than you seem to realize.
I do realize that making these calculations is difficult. To be fair, when I first brought this up, I was talking about a completely different subject, in a comment that was already long enough and absolutely did not need a long tangent about the complexities of this added in. Then, I began exploring some of the complexities, hoping that you’d expand on them and you instead chose to view my limited engagement in the topic as a sign that doing these kinds of calculations is harder than I realize. This is frustrating for two reasons. The first reason is that no matter what I said, it would not be possible for me to cover the topic in entirety, especially not in a single message board comment. The second reason is that instead of continuing my discussion and adding to it, you changed the direction of the conversation each of the last two times you replied to me.
It might be that you’d make an excellent conversation partner to explore this with, but I am not certain you are interested in that. Are you interested in exploring this topic or were you just hoping to convince me that I don’t realize how complicated this is?
Then, I began exploring some of the complexities, hoping that you’d expand on them and you instead chose to view my limited engagement in the topic as a sign that doing these kinds of calculations is harder than I realize.
Sorry about that; your examples pattern-matched to what someone who wanted to question contemporary practices without actually questioning contemporary ethics would write.
Thanks, Eugine. I can see in hindsight why I would look like that to you, but beforehand, I didn’t expect anyone to jump on examples that weren’t elaborated upon to the degree you appear to have been expecting. I’m interested in continuing this discussion for reasons unrelated to the comment that originally spurred this off, as I’ve been thinking a lot lately about how to measure the ethical behavior of humans. I’m still wondering if you’re interested in talking about this. Are you?
One ideal against which we could gauge moral progress without it being useless or self-defeating if taken to the extreme would be “Causing less suffering and death is good.”
I’m afraid once you take even that ideal to the extreme you will get something horrific. An effective way to minimize suffering and death is to minimize the number of things that can experience suffering and death, i.e. taking this ideal to the extreme kills everyone!
Watching what happens when a demigod of “Misguided Good” alignment actually implements this ideal forms the basis of the plot for Summer Knight, where Harry Dresden goes head to head against a powerful Fey who is just too damn sensitive and proactively altruistic for the world’s good.
An effective way to minimize suffering and death is to minimize the number of things that can experience suffering and death, i.e. taking this ideal to the extreme kills everyone!
Um, if you didn’t happen to notice, killing everyone qualifies as “death” and is therefore out of bounds for reaching that particular ideal.
Um, if you didn’t happen to notice, killing everyone qualifies as “death” and is therefore out of bounds for reaching that particular ideal.
Out of bounds? The ideal in question (“Causing less suffering and death is good”) doesn’t seem to have specified any bounds. That’s precisely the problem with this and indeed most forms of naive idealism. If you go and actually implement the ideal and throw away the far more complex and pragmatic restraints humans actually operate under, you end up with something horrible. While, all else being equal, causing less suffering and death is good, actually optimizing for less suffering and death is a lost purpose.
Almost any optimizer with the goal “cause less suffering and death” that is capable of killing everyone (comparatively) painlessly will in fact choose to do so. (Because preventing death forever is hard and not necessarily possible, depending on the details of physics.)
I was not talking about this in the context of building an optimizer. I was talking about this as a simple way for us, as humans, to gauge whether we had made ethical progress or not. I still think your specific concern about my specific ideal was not warranted:
Since killing everyone qualifies as “death” I don’t see how it could possibly qualify as in-bounds as a method for reaching this particular ideal. Phrased differently, for instance as “Suffering and death are bad, let’s eliminate them.” the ideal could certainly lead to that. But I phrased it as “Causing less suffering and death is good.”
I used the wording “cause less” which means the people enacting the ideal would not be able to kill people in order to prevent people from dying. You could argue that if they kill someone who might have had four children, four deaths were saved—however, I’d argue that those four future deaths were not originally caused by the particular idealist in question, so killing the potential parent of those potential four children would not be a way for that person to cause fewer deaths. They would instead be increasing the number of deaths they personally caused by one, while reducing the number of deaths they personally caused by absolutely nothing.
It does not use the word “eliminate” which is important because “eliminate” and “lessen” would result in two totally different strategies. Total elimination practically requires the death of all, as the only way for it to be perfect is for there to be nobody to experience suffering or death. “Lessen” gives you more leeway, by allowing the sort of “as good as possible” type implementation that leaves living things surviving in order to experience the lessened suffering and death.
Can you think of a way for the idealist to kill everyone in order to personally cause less death, without personally causing more death, or a reason why lessening suffering would force the idealist to go to the extreme of total elimination?
I used the wording “cause less” which means the people enacting the ideal would not be able to kill people in order to prevent people from dying.
The wording doesn’t prevent that, but your elaboration here does. You’ve (roughly speaking) replaced a simple consequentialist moral with a rather complex deontological one. The problems and potential failure modes change accordingly. Neither is an ideal against which I would gauge moral progress.
Would you agree or disagree that no matter what anybody had proposed as a potential way of gauging moral progress, you most likely would have disagreed with it, and there most likely would have been the potential for practically endless debate?
What would be most constructive is to be told “Here is this other ideal against which to gauge progress that would be a better choice.” What I feel like I’m being told, instead, is “This is not perfect.” That is a given, and it’s not useful.
I would earnestly like to know whether humanity has made progress. If you want to have that discussion with me, would you mind contributing to the continuation of the conversation instead of merely kicking the conversation down?
Would you agree or disagree that no matter what anybody had proposed as a potential way of gauging moral progress, you most likely would have disagreed with it
I disagree with that hypothesis. I further note that I evaluate claims about value metrics “if taken to the extreme” differently to proposals advocating a metric to be used for a given purpose. In the latter case I consider whether it will be useful; in the former case I actually consider the extremes. In a forum where issues like lost purpose and the complexity and fragility of value are taken seriously and the consequences of taking simple value systems to the extremes are often considered, this should come as little surprise.
Can you propose an ideal that would work for this purpose?
In a forum where...
Alright, next time I want to talk about something that might involve this kind of ethical statement, if I’m not interested in posting a 400 page essay to clarify my every phoneme and account for every possible contingency, I will say something like “[insert perfect ethical statement here]”.
Edits the comment that started this to prevent further conversation derailment.
I actually consider the extremes...
I haven’t exactly dedicated my existence to composing an ideal useful for gauging human progress against or anything, I just started thinking about this yesterday, but I did consider the extremes.
I still don’t see anything wrong with this one, and you didn’t give me a specific objection. I only got a vague “the same problems as...” From my point of view, it’s a bit prickly to imply that I haven’t considered the extremes. Have I considered them as much as you have? Probably not. But if you want me to see why this particular statement is wrong, I hope you realize that you’ll have to give me some specific refutation that reveals its uselessness or destructiveness.
I would earnestly like to know whether humanity has made progress. If you want to have that discussion with me, would you mind contributing to the continuation of the conversation instead of merely kicking the conversation down?
This was not responded to at all, and that’s frustrating.
It’s great that you care about this, and I know you have an interest in (and possibly a passion for?) this sort of reasoning, but I’ve been wondering since the comment where you disagreed with me about this in the first place what purpose you are hoping to serve. Lacking direct knowledge of that, all I have is this feeling of being sniped by a “Somebody on the internet is wrong!” reflex.
Regardless of motives, I feel negatively affected by this approach. I’m feeling all existentially angsty now, wondering whether there is any way at all to have any clue whether humanity is moving forward or backward and tracing the cause back to this conversation here where I am the only one trying to build ideas about this and my respondents seem intent on tearing them down.
What I really wanted to get out of this was to get some ideas about how to gather data on ethics progress. Maybe somebody has already constructed an analysis. In that case, a book recommendation or something would have been great. If not, I was looking for additional ideas for going through the available information to get a gist of this. You’ve obviously thought about this, so I figure you must have something worthwhile to contribute toward constructive action here.
I mention this because it does not appear to have occurred to you that maybe I was doing an initial scouting mission, not setting out to solve a philosophical problem once and for all: I don’t need a perfect way to gauge this right this instant—a gist of things and a casual exploration of the scope involved is all that I’m realistically willing to invest time into at the present moment, so that would be satisfactory. I may dive into it later, but, as they say: “baby steps”.
If you have some idea of what ideal could be used to gauge progress against, I would appreciate it if you’d tell me what it is. If not, then is there some way in which you’re interested in continuing the exploration in a constructive manner that does not consist of me building ideas and you tearing them down?
I used the wording “cause less” which means the people enacting the ideal would not be able to kill people in order to prevent people from dying.
The wording doesn’t prevent that, but your elaboration here does. You’ve (roughly speaking) replaced a simple consequentlialist moral with a rather complex deontological one. The problems and potential failure modes change accordingly. Neither are an ideal against which I would gauge moral progress.
Why don’t these writers post or at least cross-post on LW? I would really prefer that they did, for these reasons
It would give their posts more visibility and hence more comments and discussions. (I often learn more from the comments sections of a LW post than the post itself.)
I don’t have to learn a new commenting system (get a new login and learn the markup/formatting, threading, and voting schemes).
I think the LW commenting system is generally better than that of any other blog I’ve seen.
If I comment on their posts, my comments can be backed up and searched along with all of my other LW comments.
I’m more motivated to comment (and to spend more effort on my comments) since my comments will be seen by more people, and I’m less worried about my comments disappearing when their blogs stop getting maintained.
Does it also have something to do with identity and affiliation? If so, maybe that’s another reason to try to make people think of LW in less identity-related ways, or perhaps make the LW identity smaller / more inclusive somehow? (I don’t know and I’d very much like to hear from one or more of these writers.)
Less Wrong requires no politics / minimal humor / unambiguous relevance to rationality / careful referencing / airtight reasoning (as opposed to a sketch of something which isn’t exactly true but points to the truth). This makes writing for Less Wrong a chore rather than an enjoyable pastime.
If LW feels like work then it’s creating an ugh field. I noticed this when I saw someone treating reading HPMOR like homework instead of as reading for pleasure.
The main/discussion split was supposed to help alleviate this but I feel like too many are treating discussion as only appropriate for rough drafts of main material.
How can we fix this?
Not necessarily.
Posting to my own blog is kinda the same way for me sometimes. It hasn’t created an ugh field and I have no problem thinking about it but I end up with a huge backlog of posts with the main idea spelled out but not understandable to others.
A couple partial solutions come to mind:
1) When you see a post that looks like it belongs on LW, tell the author. This gets rid of a lot of the uncertainty about whether LW would appreciate it or whether it’s too off-topic/speculative/whatever.
2) If there are posts (or even unwritten posts) that aren’t yet LW-ready but sound promising, offer to help edit them. This takes a lot of the burden off the author, and an extra perspective often makes the process more efficient.
3) Make it somehow more rewarding. Make sure to upvote posts you like, make sure you don’t shame people whose posts you don’t like if they sincerely thought they were contributing something you’d like, make sure you treat them how you’d treat a friend that did you a favor rather than a paid worker, etc.
This. Being invited feels good.
Alternatively, if an author is not sure whether their article is LW material, they can ask in the Open Thread.
I agree with the “no politics” bit, but I don’t think the rest are correct. I’ve certainly had “sketch of something that isn’t quite true but points in the right direction” posts with no references and unclear connections to rationality promoted before (example), as well as ones plastered with unnecessary jokes (example).
I agree. The overwhelming LW moderation focus seems to be on stifling bad content. There’s very little in place to encourage good content. Even the thumbs-up works against good content. Before the thumbs-up, on OB, people would leave appreciative comments. It’s much more rewarding to read appreciative comments than it is to look at your post’s number (and probably compare it unfavorably to other recent posts’ numbers...)
On social media sites like Digg or reddit, it’s not a big deal to submit something that people don’t end up liking ’cause it’ll get voted down/buried and consequently become obscure. On LW, submitting something people don’t like amounts to publicly making a fool of yourself. Since it’s hard to predict what people will like, folks err on the side of not posting at all.
I think the ideal solution is probably something more like Huffington Post or Daily Kos. I’m not 100% sure how those systems work but they obviously work pretty well.
I agree that it is pretty rewarding to get appreciative comments, but it unfortunately also lowers the signal/noise ratio, since everyone ends up having to read said appreciative comments (rather than only the intended recipient).
I’d actually argue that in most cases, keeping the signal/noise ratio high is much more important than increasing the sheer number of good posts. Ideally, of course, we could do both...
Yeah, this is a downside.
Why’s that? Right now, for instance, it’s easy to create an arbitrarily high-signal version of LW by going to http://lesswrong.com/prefs/ and changing the thresholds for minimum post/comment scores. Wouldn’t it make more sense to let users use this mechanism to decide their own signal/noise threshold rather than enforcing one sitewide? (With a sensible default for non-logged-in users.)
If the absolute number of good posts goes up, that’s more knowledge for LW’s collective consciousness, which we can link to or mention to one another when it seems relevant. LW covers a wide variety of topics, so a greater absolute number of posts also means a greater number of posts that discuss topics relevant to any particular user.
As far as I can tell the sequences have a high number of good posts but a pretty low signal/noise ratio. My guess is that to get good posts, writing a lot and then seeing what sticks works well.
On the other hand, the current LW might work well for people who want to make important but difficult-to-understand points.
Uh, did not know that. Thank you!
I still have a caveat: posts that are extremely good at getting upvotes tend to be shallow in content. But I think that overall, you are correct.
I was going to argue with you about humor until I noticed who’d posted the comment.
One-liners and wisecracks go over well here, but perhaps subtle long form humor doesn’t?
Very subtle, I like understated humor. :)
I think people like having a space where they can develop their own identity / brand. Maybe just encouraging the posting of links w/ block quotes to the discussion section of LW would be an adequate solution?
Internalizing gains is desirable, so it’s no surprise that people (such as me) might principally locate their posts on their personal sites or blogs rather than posting them here.
But that doesn’t necessarily explain why they don’t cross-post. I haven’t found that to be much of an obstacle.
I wanted to make a few posts on my blog before linking other people to it (in case I bailed early, as part of the general pattern of doing things before talking about doing them, etc.) but I intend to cross-post most of my posts to LW discussion. (And I did link the blog itself to LW.)
I rather like the idea of people creating separate locations without cross-posting. I’d hate to see the rationality community become tied to a single possible point of reputational failure.
I find it interesting that several comments in the LW for Women threads, particularly those by Nancy Lebovitz, indicate support for a wider range of rationality venues.
It’s also better for community cohesion if we all post in the same place.
Since they could easily post here and receive all the benefits you listed and more, my perception of the problem is not that they fail to realize the benefits of posting on LessWrong rather than on their own blogs, but that something about LessWrong makes it an undesirable place to post.
My top guess on that, based on the talk that I’ve seen going on around the place, is fear of criticism / rejection / negative karma. See also.
Is community cohesion something to aim for? Ideally, rationality should be the baseline, not the marker of our particular community. As recent threads on gender have suggested, there’s room for quite a few different communities which share core principles. Ideally, lesswrong would be one of many places where rationalists of different stripes feel comfortable. Of course, the question is whether it’s better off huddling together for the moment in hopes of reaching critical mass or if we’re already past that phase.
Community cohesion may be critical to the success of the rationalist movement.
That was a good question. Cohesion is important because one wants to avoid a “divide and conquer” circumstance. If nobody feels comfortable posting on LessWrong and everyone goes and posts on 100 other blogs instead, it’s possible that the movement will simply die out.
Of course it is also possible for that to have a benefit, too: If 100 blog authors successfully advertise their blogs, it’s possible that they’d get a lot more readers than if they were to post solely on LessWrong.
I also have a deeper point to make about community cohesion which I think will be hard to see due to two kinds of bias. To make the point easier to see, I will briefly mention those biases. The first is optimism bias. The second is mind projection fallacy.
Why optimism bias is relevant here:
Not only are people known to believe more optimistic things than they should, they also find it hard to adjust their perspective when their optimism is confronted, especially when it would be emotionally devastating to do so. This may be one of those ideas with the potential for devastation. Therefore, it might be especially hard to overcome optimism bias in this case and view the following point objectively.
Why mind projection fallacy is relevant here:
If you, yourself, have X ability level at something, you are likely to assume that most others have X ability level as well, and may even find it hard to imagine or believe that they do not. Most people have not thoroughly researched the abilities of the average person and have no idea what those are. I’ve met countless gifted people who, upon encountering someone with less ability than themselves in some area, mistake that lack of ability for laziness or malice, or simply assume that the specific person was unusually stupid. This happens frequently with gifted people, even when the person in question has perfectly normal abilities. It’s likely that the average LessWronger is gifted. It’s also likely that something like half of them do not even realize it.
For those two reasons, a lot of people in this movement are likely to be vastly overestimating the average person’s ability to become rational and their interest in doing so. To some extent, interest in rationality and the ability to wield it can be increased. However, it would be illogical to assume that just because interest and ability can be increased, they can be increased adequately in a large enough section of the population for rational thought to become the norm. The stated goal of “raising the sanity waterline” is, of course, both feasible and worthwhile, but that is partly because the phrasing implies no specific amount. Making rationality the norm is a completely different goal. It is extremely worthwhile, but many times more difficult.
I am not aware of any studies that have been done to specifically determine the average person’s capacity for developing rationality. Nor am I aware of any studies on methods to teach rationality that are designed to determine how much difference the best teaching strategies make to one’s capacity for and interest in rationality. I doubt anybody knows where the average person’s limits are. I think it would be hard for most members of LessWrong to imagine having a brain that hurts when you try to think rationally because it is too hard, having no interest in rationality whatsoever, or being too irrational to even understand why rationality is good, but (as a person who has read a lot about psychology) I’m sure that all of those things do happen and that they’re fairly common. It would be folly, in my opinion, to assume that the average human:
1.) Has a brain capable of thinking rationally.
2.) Has a brain that learns rationality easily enough that the benefits outweigh the costs.
3.) Has a brain that rewards them for rational thought. (Mine rewards me for rational thought, but it seems to punish me for doing math, and it took about a decade for me to get to the point where it quit hurting when I attempted to spell correctly.)
4.) Has adequate time, energy, resources, discipline, sanity, opportunity, etc. to put themselves through the rigorous mental training it requires to achieve adequate results.
5.) Has the mental stamina to reach a state of constant performance. I bet most people are able to do things like detect a bias a single time, when asked, just as most can lift a five-pound dumbbell a single time, when asked. But being rational requires detecting most biases most of the time, and that is quite another matter. It may take the mental equivalent of an Olympic champion to develop the stamina to lift those weights all the time.
The reason I bring this up is not because I’m a cantankerous misanthrope. It’s because of what I know about human abilities due to the developmental psychology research I’ve done. For a quick glimpse into my perspective:
Learning language is not a trivial task, but we have been doing it since before civilization, without schooling or a movement. Humans are prodigious among animal species when it comes to language development. Almost all of us get to enjoy a huge vocabulary of many thousands of words. It’s simply natural for our brains to learn a language; language is a great example of what happens when our brains are designed for a particular task. We say things like “Humans are different because they’re capable of rational thought.” Then why don’t we learn to be rational as readily as we learn language? If it’s natural for all of us, why didn’t we learn to be rational most of the time thousands of years ago?
Some abilities are attributed to “humanity” because we don’t see any, or many, examples of them in other species… but that’s different from saying that they’re common to humans. Rocket science, for instance, is an ability that no animals have, but that most humans do not have, either. We say that humans are different because of things like rational thought, but rationality may still be similar to rocket science in that many humans are not able to do it.
[Edited to remove a certain paragraph.]
I don’t know whether this obstacle is best identified as “lack of interest” or “it’s unnatural to learn because a lot of people’s brains aren’t designed for it”, but there is obviously some obstacle that has been holding humanity back the entire time we’ve existed. There may be lots of different obstacles. It is quite possible, in my view, that the average human will find the obstacles insurmountable without something like brain augmentation or genetic engineering.
In the event that there are a limited number of people who will be both interested in and successful at attaining a rational state, the last thing we want to do is have them divided up all over the place. To have the greatest possible impact, we need to stick together.
That’s why community cohesion is likely to be critical to the rationalist movement.
For archival purposes, the paragraph was:
Even with ethics—which is something that most people can learn—it has taken thousands of years of civilized living just to get to the point where slavery is abolished, women can vote and gay people have begun making progress on getting rights. I am not even sure that thinking rationally most of the time is a state that is attainable to most people, but even if it is, it’s possible that establishing a norm of rational thought among humans would require a time period similar to that of mass ethical behavior...
Don’t put all your eggs in one basket.
Some people will dislike LW for various reasons. For example, they don’t like talking about superintelligent machines and the Singularity, because it feels cultish. Or they think that rational talk is cold, or that the LW environment is hostile to women. Or whatever else. These people will ignore LW and everything that is here, so it would be nice if the good ideas were also available somewhere else.
(For similar reasons I think separating SI/MIRI and CFAR was a good idea. If you want to convince people of the usefulness of rational thinking, starting with the Singularity is often not a good strategy.)
I agree, but make a distinction between thinking it’s bad that too few people are posting on LessWrong while many are posting in a billion other places instead versus thinking that there should not be multiple groups of rationalists.
I am all for multiple groups of rationalists. What I am not for is this community scattering across 100 different blogs.
Re: ethics.
The problem with judging ethical progress is that we have no independent way of verifying that we’re progressing rather than regressing.
My interpretation of the article is not that he’s saying that gauging moral progress is impossible, but that you can’t gauge it by comparing the past with the present. It has to be gauged against an ideal, but the ideal has to be carefully chosen or else the ideal may be useless or self-defeating. I’m sure such an ideal can be constructed (maybe someone has already constructed one that’s widely considered to be acceptable—in that case I’d be interested in finding out about it) but since I’m not aware of a widely accepted ideal against which progress can be gauged at the moment, I’d like to focus on other parts of this. [Edited last statement to remove a source of potential conversational derailment.]
This does inspire me to bring up interesting questions, though, like:
Do I know enough about our past history to know whether it was previously better or worse? What if most native American tribes respected women and gays and abhorred slavery before they were killed off by the settlers? Not to mention the thousands of civilizations that existed prior to these in so many places all over the world.
Might we be causing harm in new ways as well as ceasing to cause harm in other ways, moving backward overall? Even though Americans can’t keep slaves, they do get a lot of their goods from sweatshops. The prejudice against gays may be lessening, but has the prejudice against Middle Easterners increased to the point where it cancels out that progress? Women got the right to vote, but shortly before that, children were forced into the school system. The reasons I view this school system as unethical are touched on (in order) here and here.
I wonder if anyone has done thorough research to determine whether we’re moving forward or backward. I would earnestly like to know. It’s a topic I am very interested in. If you have a detailed perspective on this, I’d be interested in reading it.
For archival purposes, the source of potential conversational derailment was:
One ideal against which we could gauge moral progress without it being useless or self-defeating if taken to the extreme would be “Causing less suffering and death is good.”
Well, the most straightforward way to judge success along this metric is to compare the amount of suffering. The problem with this metric is that the contribution of technological progress will dominate any contribution from ethical progress.
Furthermore, it’s not a priori obvious that the contribution to less suffering is what you think it is in any of the examples you listed. It’s possible that the people working in “sweatshops” are better off there than wherever they were before; this in fact seems likely, since they chose to work there. It’s possible that our modern attitude towards gender roles and sexuality is causing more unhappy marriages and more children growing up in bad homes, and thus increases suffering; conversely, maybe our attitudes towards gender are correct and our prejudice towards (Muslim) Middle Easterners is encouraging them to adopt those attitudes, and thus our prejudice is reducing suffering on net. As for the right to vote, there’s a slight positive effect from making women feel empowered, but the main effect is on who wins elections, and on whether they make better or worse decisions, which seems hard to measure.
My point is that doing these types of calculations is much harder than you seem to realize.
Edit: Also, what wedrifid said.
I do realize that making these calculations is difficult. To be fair, when I first brought this up, I was talking about a completely different subject, in a comment that was already long enough and absolutely did not need a long tangent about the complexities of this added in. Then, I began exploring some of the complexities, hoping that you’d expand on them and you instead chose to view my limited engagement in the topic as a sign that doing these kinds of calculations is harder than I realize. This is frustrating for two reasons. The first reason is that no matter what I said, it would not be possible for me to cover the topic in entirety, especially not in a single message board comment. The second reason is that instead of continuing my discussion and adding to it, you changed the direction of the conversation each of the last two times you replied to me.
It might be that you’d make an excellent conversation partner to explore this with, but I am not certain you are interested in that. Are you interested in exploring this topic or were you just hoping to convince me that I don’t realize how complicated this is?
Sorry about that; your examples pattern-matched to what someone who wanted to question contemporary practices without actually questioning contemporary ethics would write.
Thanks, Eugine. I can see in hindsight why I would look like that to you, but beforehand, I didn’t expect anyone to jump on examples that weren’t elaborated upon to the degree you appear to have been expecting. I’m interested in continuing this discussion for reasons unrelated to the comment that originally spurred this off, as I’ve been thinking a lot lately about how to measure the ethical behavior of humans. I’m still wondering if you’re interested in talking about this. Are you?
I’d be interested, although this should possibly be done in a different thread.
Alright. Choose a location.
I’m afraid once you take even that ideal to the extreme you will get something horrific. An effective way to minimize suffering and death is to minimize things that can experience suffering and death. ie. Taking this ideal to the extreme kills everyone!
Watching what happens when a demigod of “Misguided Good” alignment actually implements this ideal forms the basis of the plot for Summer Knight, where Harry Dresden goes head to head against a powerful Fey who is just too damn sensitive and proactively altruistic for the world’s good.
Um, if you didn’t happen to notice, killing everyone qualifies as “death” and is therefore out of bounds for reaching that particular ideal.
Out of bounds? The ideal in question (“Causing less suffering and death is good”) doesn’t seem to have specified any bounds. That’s precisely the problem with this and indeed most forms of naive idealism. If you go and actually implement the ideal and throw away the far more complex and pragmatic restraints humans actually operate under you end up with something horrible. While all else being equal causing less suffering and death is good, actually optimizing for less suffering and death is a lost purpose.
Almost any optimizer with the goal “cause less suffering and death” that is capable of killing everyone (comparatively) painlessly will in fact choose to do so. (Because preventing death forever is hard and not necessarily possible, depending on the details of physics.)
I was not talking about this in the context of building an optimizer. I was talking about this as a simple way for us, as humans, to gauge whether we had made ethical progress or not. I still think your specific concern about my specific ideal was not warranted:
Since killing everyone qualifies as “death” I don’t see how it could possibly qualify as in-bounds as a method for reaching this particular ideal. Phrased differently, for instance as “Suffering and death are bad, let’s eliminate them.” the ideal could certainly lead to that. But I phrased it as “Causing less suffering and death is good.”
I used the wording “cause less” which means the people enacting the ideal would not be able to kill people in order to prevent people from dying. You could argue that if they kill someone that might have had four children, that four deaths were saved—however, I’d argue that the four future deaths were not originally caused by the particular idealist in question, so killing the potential parent of those potential four children would not be a way for that particular person to cause less deaths. They would instead be increasing the number of deaths that they personally caused by one, while reducing the number of deaths that they personally caused by absolutely nothing.
It does not use the word “eliminate” which is important because “eliminate” and “lessen” would result in two totally different strategies. Total elimination practically requires the death of all, as the only way for it to be perfect is for there to be nobody to experience suffering or death. “Lessen” gives you more leeway, by allowing the sort of “as good as possible” type implementation that leaves living things surviving in order to experience the lessened suffering and death.
Can you think of a way for the idealist to kill everyone in order to personally cause less death, without personally causing more death, or a reason why lessening suffering would force the idealist to go to the extreme of total elimination?
The wording doesn’t prevent that, but your elaboration here does. You’ve (roughly speaking) replaced a simple consequentialist moral with a rather complex deontological one. The problems and potential failure modes change accordingly. Neither is an ideal against which I would gauge moral progress.
I’m glad we’re now in the same context.
Would you agree or disagree that no matter what anybody had proposed as a potential way of gauging moral progress, you most likely would have disagreed with it, and there most likely would have been the potential for practically endless debate?
What would be most constructive is to be told “Here is this other ideal against which to gauge progress that would be a better choice.” What I feel like I’m being told, instead, is “This is not perfect.” That is a given, and it’s not useful.
I would earnestly like to know whether humanity has made progress. If you want to have that discussion with me, would you mind contributing to the continuation of the conversation instead of merely kicking the conversation down?
I disagree with that hypothesis. I further note that I evaluate claims about value metrics “if taken to the extreme” differently to proposals advocating a metric to be used for a given purpose. In the latter case I consider whether it will be useful, in the former case I actually consider the extremes. In a forum where issues like lost purpose and the complexity and fragility of value are taken seriously and the consequences of taking simple value systems to the extremes are often considered this should come as little surprise.
Can you propose an ideal that would work for this purpose?
Alright, next time I want to talk about something that might involve this kind of ethical statement, if I’m not interested in posting a 400 page essay to clarify my every phoneme and account for every possible contingency, I will say something like “[insert perfect ethical statement here]”.
Edits the comment that started this to prevent further conversation derailment.
I haven’t exactly dedicated my existence to composing an ideal useful for gauging human progress against or anything, I just started thinking about this yesterday, but I did consider the extremes.
I still don’t see anything wrong with this one, and you didn’t give me a specific objection; I only got a vague “the same problems as...”. From my point of view, it’s a bit prickly to imply that I haven’t considered the extremes. Have I considered them as much as you have? Probably not. But if you want me to see why this particular statement is wrong, I hope you realize that you’ll have to give me some specific refutation that reveals its uselessness or destructiveness.
This was not responded to at all, and that’s frustrating.
It’s great that you care about this, and I know you have an interest in (and possibly a passion for?) this sort of reasoning, but I’ve been wondering since the comment where you disagreed with me about this in the first place what purpose you are hoping to serve. Lacking direct knowledge of that, all I have is this feeling of being sniped by a “Somebody on the internet is wrong!” reflex.
Regardless of motives, I feel negatively affected by this approach. I’m feeling all existentially angsty now, wondering whether there is any way at all to have any clue whether humanity is moving forward or backward and tracing the cause back to this conversation here where I am the only one trying to build ideas about this and my respondents seem intent on tearing them down.
What I really wanted to get out of this was to get some ideas about how to gather data on ethics progress. Maybe somebody has already constructed an analysis. In that case, a book recommendation or something would have been great. If not, I was looking for additional ideas for going through the available information to get a gist of this. You’ve obviously thought about this, so I figure you must have something worthwhile to contribute toward constructive action here.
I mention this because it does not appear to have occurred to you that maybe I was doing an initial scouting mission, not setting out to solve a philosophical problem once and for all: I don’t need a perfect way to gauge this right this instant—a gist of things and a casual exploration of the scope involved is all that I’m realistically willing to invest time into at the present moment, so that would be satisfactory. I may dive into it later, but, as they say: “baby steps”.
If you have some idea of what ideal could be used to gauge progress against, I would appreciate it if you’d tell me what it is. If not, then is there some way in which you’re interested in continuing the exploration in a constructive manner that does not consist of me building ideas and you tearing them down?