Ideas on growth of the community
TLDR: I had an idea to apply some tools I learned on Coursera to our community in order to grow it better. I wanted to start some organized thinking about the goals our community has, and to offer some materials for people who are eager to work on it but are maybe lost or need ideas.
Yesterday I did a course on coursera.org called "Grow to Greatness: Smart Growth for Private Businesses, Part I". (I often play lectures at x2.5, so I can do a five-week course in one day.)
Though this course seems obvious, I'd say it is worth the 3 hours, so look it up. (It's hard to say how much is hindsight and how much is actually too easy and basic.) I got some ideas sorted, and I saw the tools. I'm not an expert now, obviously, but at least I can see when things are done in an unprofessional manner, and it can help you understand what follows.
When growing anything (a company, a community, …) you have different options. You should not opt for everything, because you will be spread thin. You should grow with measure, so that people can follow and so that you can do it right. This is the essence of the course. The rest is focused on ways of growing.
That was the informative part of this article. The rest is some thoughts that came to my mind that I would like to share. Hopefully I inspire some of you and start some organized thinking about this community.
This community is some kind of organization, and it has a goal. To be precise, it probably has two goals, as I see it:
to make existing members more rational
to get more members.
Note that the second goal is about growth.
I will just plainly write down some claims this course made:
In order to grow:
your people need to grow (as persons, to get more skills, to learn).
you need to create more processes regarding customers, in order to preserve good service
you often need better organization (to regulate processes inside the company)
you need to focus
you need a plan
if you need to put out fires, put out the fire with the greatest impact, and make a process out of it, so that people can handle it on their own afterwards
1. I guess no one is against this. After all, we are all here to grow.
2. My guess is that our customers could be defined as new members. So the first steps someone makes here are the responsibility of this organization. Later, when they get into rationality more and start working on themselves, they become employees. That's at least how it works in my head. The book of the Sequences is a good step here, since it helps to have it all organized in one PDF.
3. This is actually where it all started. We are just a bunch of people with a common drive to be more rational. There are meetups, but that's it. I guess some people see EY as some kind of leader, but even if he were one, that's not an organization. My first idea is to create some kind of separation of topics, reddit-like. (With or without moderators; we can change that at any point if one option does not work.)
For example, I'm fed up with AI topics. When I see AI, I literally stop reading. I don't even think it's rational to push that idea so much. I understand the core of this community is in that business, but:
One of the first lessons in finance is "don't put all the eggs in one basket". If there is something more important than AI, we are fucked if no one sees it. I guess "non-rational" people will see it (since they were not active on this forum and therefore not focused on AI), but then the people of this forum lose the attribute "rational", since the "non-rationals" outperformed them simply by doing random stuff.
It may also stop people from visiting the forum. They may disagree, they may feel "it's not right", but be unable to formulate it as "don't put all the eggs in one basket" (my example, kind of). The remaining choice is to stop visiting the site.
So, I would STRONGLY encourage new topics, and I would like to see some kind of classification. If I want to find out about AI, I want to know where to look, and if I don't want to read about it, I want to know how to avoid it. If I want to read about self-improvement, I want to know where to find it. Who knows, after some rough classification people might start making finer ones, and discuss how to increase memory without being spammed with procrastination. I think this could help the first goal (to make existing members more rational), since it would give them some overview.
I also think this would reduce cult-ism, since it would add diversity and loosen the "meta".
4. An understatement. Anyone who has worked, or read anything about work, knows how important a plan is. It is OBLIGATORY. Essential. (See the course https://www.coursera.org/learn/work-smarter-not-harder/outline )
5. I think this is not very important for us. There are lots of people here, many enthusiasts. However, this should serve as a guideline for making a good plan, and for telling us how many resources to devote to each problem.
In conclusion, I understand these things are big. But growth means change. (There is some EY quote on this, I think: "not every change is an improvement, but every improvement is a change"; correct me if I'm wrong.) Humans did not evolve this far by being better, but by socializing and cooperating. So I think we should move from a herd to an organization.
1) A crazy idea—how about creating a "welcoming committee": a group of people who would offer to spend some of their time welcoming new LW members personally (on Skype). They would be volunteers who see the community aspect of LW as important, but they would also have to be accepted by an LW vote as representatives of the community (as opposed to e.g. people who have incompatible ideas and try to abuse LW for spreading their personal goals).
Now every new user would have an opportunity (and be encouraged) to request e.g. two 10-minute talks with two members of the "welcoming committee". The new user would provide a short introduction about themselves (hobbies, what they expect from LW), and the committee would contact them and have a talk. There would be an (unenforceable) expectation that in return, the new user would write an article on LW and generally start being active in the community, if they are compatible with it.
2) Part of the impression of LW's decay could be an artifact of how article publishing works here. An author's popularity grows gradually, but when a well-known author leaves, it is immediately visible. If we imagine a graph of article quality, it could be a curve with a lot of small growth (which we don't notice) and occasional sharp drops (noticed by many).
For example, someone new would come, post an article in Discussion with 5 karma, a month later another with 15 karma, yet a month later an article with 30 karma would get to Main, then five more articles in Main… and then the person would decide to start their own blog. What would be our impression of this whole process? Probably that LW is getting worse than before, because yet another important author has left. (We wouldn’t contrast the end situation with what was before the author came to LW.)
3) Related to a recent article by Robin Hanson and the discussion below it: people often don't read sources for information per se, but for socially useful information. For example, if you could read two equally useful newspaper articles, one published today and another published a year ago, you would prefer to read the one written today, because then you can go out and have a debate with people about it.
Analogously, LessWrong became "old news". The great old articles (from the Sequences, but also by Yvain, lukeprog, etc.) are old. Reading them now for the first time is lower-status than having read them when they were published. MIRI and CFAR themselves are "old news"; they exist, just as they have existed for years. There is no new exciting topic for the new readers. It is like joining an already huge pyramid scheme at the bottom.
This could potentially be helped by creating sub-communities on LW. The new members were not here when LW started, but they can still participate in starting some subgroup and gain status there. (Similar to how people who start a local LW meetup can gain high status for that.)
I like the idea of a welcome committee and am willing to spearhead it.
One more rarely mentioned thing.
There is an expression (mostly in business): “take ownership of a problem”. If you take ownership of an issue, it is now yours—you’re responsible for it, it’s up to you to find ways to deal with it, fix it, keep working whatever needs to be working, etc. You cannot say “not my problem, somebody else will fix it” any more.
No one “owns” LW.
Actually, that's mentioned every time this comes up. Which is great, except when the problem is so large that the only person who could take responsibility for it is some kind of superman/woman.
That “one” need not be a single person. It could be a group, an organization.
One of the problems is that people who control the LW website are running it in pure maintenance mode. LW was put out to pasture—there have been no changes to functionality in ages.
Maybe it would be good if control of LW were handed over to someone who cares more about running it.
I've never met EY in person, so I don't know to what extent he might be willing to hand LW over. It's likely a conversation to be had privately, in person, with him.
LW isn’t controlled by Eliezer, it’s controlled by MIRI.
From my perspective it's hard to tell what their internal arrangement is. Have you spoken to Eliezer or MIRI personally about who controls LW?
No, but I know who owns the domain name and I rather doubt EY personally pays for hosting.
I think the hosting is still done by Trike Apps.
I could be wrong but I would estimate that changing control of LW is basically about convincing EY.
The last time I discussed this with Trike Apps, their position was that LW is owned by MIRI, and thus Nate Soares is the final authority. I nevertheless expect that convincing EY is a key component of any competent plan.
I predict there will be an in-person conversation about this in roughly a week.
Since it’s now been two weeks, I’m late for an update: many in-person conversations happened. There are a few follow-up ones to have online (at least one of which will have to wait until Burning Man is over), and then I’ll post about it here on LW.
And one person needs to take responsibility for the meta-task of creating and leading the group.
Actually, the usual start for such things is not a single person, but a conversation. Sometimes you need one leader, sometimes a few “founders” work well, and occasionally even a committee (gasp!) suffices.
Tyranny generally works faster than democracy… or oligarchy, I guess, in this case.
I don’t think “faster” is the overwhelming criterion in this case.
Agree. However, I did not post this to take ownership of a problem, but to facilitate someone who will.
The biggest problem as I see it is loss of members and lack of talented new ones. I’m willing to bet if you plotted the histogram of user activity, you’d see virtually all of the posts and comments coming from a very small number of members. The ‘Top Contributors’ section in the side panel probably contains most of them, and has been relatively stable in composition over the past two years. If the first step in instrumental rationality is to identify reality in an objective way, then we have to realize this site has become an echo chamber for a small few, with their own vocabulary and system of thought which is incompatible with the outside world. The barriers to entry (reading the sequences, reading the ‘seminal’ comment threads, etc.) are too high for most people. HPMOR offers a pathway for new members to start reading the rationality materials, but it doesn’t equip them to meaningfully contribute.
Another thing is that there is only so much ‘low-hanging fruit’ lying around. In terms of general rationality, we’ve covered most everything. There are only so many threads you can have about biases and logical inconsistencies. The topic has become quite stale.
I like AI because at least it offers the possibility for new material to arise every once in a while, leading to useful discussions. Other people might like other topics. I have a huge list of topics whose implications are quite relevant to the art of rationality and so would be quite compatible with the goals of this site:
Thermodynamics
Neuroscience
Neural Networks
Social organization & Forms of government
Human sexual dynamics
The problem is, the types of people likely to be knowledgeable about these things probably have no idea this site exists. And if they do it is unreasonable to expect them to learn the required information to be ‘on the same page’ as this site’s core users. And this is very bad, because it means that this site’s users will attempt to foray into these topics themselves without any help from actual experts.
I don’t know what the solution is. Maybe it’s already too late to do anything.
Regarding the article, the idea of subsections or separations of topics has already been raised in this post. I personally think it would make more sense to be able to create personal sections based on query results. A section would just be a saved search query: a section that you can name, that is shown only to you, and that runs its search when you enter it. The search itself would just be a normal search of the kind you can run now.
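To make that concrete, here is a minimal sketch, assuming a hypothetical `SavedSection` structure and a generic `search` function (neither exists on LW; all names here are purely illustrative):

```python
# Hypothetical sketch of "personal sections" as saved searches.
from dataclasses import dataclass

@dataclass
class SavedSection:
    name: str   # label shown only to this user, e.g. "Memory"
    query: str  # an ordinary site search query, e.g. "memory -procrastination"

def open_section(section, search):
    """Entering a section simply re-runs its stored search."""
    return search(section.query)

# Example: the same searches you could type by hand today,
# saved under names and re-run on entry.
my_sections = [
    SavedSection("AI-free", "-AI"),
    SavedSection("Memory", "memory -procrastination"),
]
```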
Some ideas I have had on improving the site are:
An announcements sub-section. One reason why fewer people might be coming to the site is that Main is clogged up with meetup announcements.
An optional comment when liking or disliking. The like/dislike status is useful for sorting posts by quality, but it doesn't really help improve existing posts. If your post gets disliked, most of the time you don't know why. If it was disliked with a comment, you could spend effort fixing the post, reply to the comment, and the disliker would hopefully retract the dislike. This would encourage improving posts after they have been created. The comment would be sent only to the article's creator. (A rough sketch of this idea follows the list.)
A section where post karma does not affect your personal karma. This would help with new members who start out posting lower-quality posts and then give up because their account has poor karma. I think your ability to post in Discussion or Main should be based on post and comment karma, so posts in the section without personal karma would, if liked enough, still allow you to post in Main.
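A minimal sketch of the second idea's data model, assuming hypothetical names (`Vote`, `deliver_feedback`, `send_private_message`); this is illustrative, not how LW's codebase actually works:

```python
# A vote that optionally carries an explanation, visible only to the post's author.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Vote:
    voter_id: int
    post_id: int
    value: int                     # +1 for a like, -1 for a dislike
    comment: Optional[str] = None  # optional explanation of the vote

def deliver_feedback(vote, send_private_message):
    """Forward the explanation, if any, privately to the post's author."""
    if vote.comment is not None:
        send_private_message(post_id=vote.post_id, text=vote.comment)
```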
Regarding the above comment, this is what I think of your points:
The barriers to entry (reading the sequences, reading the ‘seminal’ comment threads, etc.) are too high for most people—I agree. This is why I have started writing a rationality primer. I will post an article with more details on this maybe next week.
There is only so much 'low-hanging fruit' lying around in terms of general rationality—while this is true, I think there is still a lot of ground that could be covered in terms of applied general rationality. By this I mean practical advice. This also doesn't require expert domain knowledge; you just need to go out and actually try something. I think Less Wrong can improve in this area.
I like AI because at least it offers the possibility for new material to arise every once in a while—I think there is a lot of content that is tangentially related to rationality that could be extremely helpful. Once again, I think a more practical focus would get people thinking in more divergent directions. Personally, I plan to write some sequences around the idea of strategy; they would touch on mental models, complexity theory, systems dynamics, Boydian thinking, and maybe some other things.
I’d go even further and make the comment mandatory. Or remove likes/dislikes altogether and use ratings like “irrational” or “off-topic” or “I personally disagree” instead, along with the ability to add some more explanation.
+5 insightful, +5 funny :-/
It’s not a new idea and it has been tried. Slashdot spent years experimenting with different karma models—that is valuable empirical data.
Irrational; off-topic; trite; redundant; attitude; poorly written/expressed; I personally disagree.
Rational; interesting/clever; positive vibe man!; I don’t really care, but feel that you have been unfairly downvoted.
Problem: I suspect that most people who downvote because "I personally disagree" might actually click "irrational".
I think on LW a lot of people can distinguish “I personally disagree” from “you don’t provide a proper argument”.
My thought was that most people who do make the distinction between “I personally disagree” and “you don’t provide a proper argument” are also the people who are unlikely to vote down simply because they disagree. I could be wrong.
I think it depends on the context. There are cases on LW where voting down because you disagree makes sense and others where it doesn't.
This is likely, and requiring a comment or some additional feedback might be a good idea for calling something irrational.
You can imagine that I clicked “I personally disagree” on your comment.
The problem is, unless I also explain why, I will seem like a jerk. But if I explain, it will cost me time, so naturally after writing a few careful explanations I will simply stop downvoting.
The suggestion for different types of rating is a particularly good idea. Why you were downvoted isn’t always obvious, and this makes it hard to get feedback from a downvote. I think too many people downvote based upon their agreement with a commenter’s position rather than how well it is justified, which isn’t how I think the system should work. I’ve upvoted a number of comments I disagree with but thought were argued well.
I find it amusing that the comment above this one was downvoted without explanation.
We should somehow survey the users to see if their alone-in-a-crowd attitude prevents them from actively posting, and then encourage them to post, because compartmentalization is not a good habit. And I think people who agree that you should help the epileptic person no matter how many others are present, but not that you should speak out "what is obvious" if there are other users with a history of commenting, are compartmentalizing.
(OTOH, maybe it’s me rationalizing asking much and answering little.)
One problem as I see it is that Self-help as a community driven thing never really took off here. I think a big reason why is the perceived high bar to post quality. Self-help involves engaging with lots of idle weird ideas that don’t work out before you hit on something useful. I’ll also note that most of the online communities that really seem to be “communities” have an off-topic area that attracts a lot of posts where people feel free to be more casual. When brought up in the context of LW people point to the IRC chatroom, but that is a very different type of interaction.
I would be in favour of an off-topic tab separate from Main and Discussion.
And then why not do the whole job and create a tab for meetups (to avoid spam in Main), a tab for AI, a tab for decision making, a tab for overcoming biases, a tab for… Well, we've arrived at my solution. =)
Doesn’t the weekly open thread serve some of this purpose?
Before wanting to grow “the community” it makes sense to ask what “the community” happens to be. You can count Scott’s blog into the community or you can decide that the community is only what’s branded as LW.
I wouldn't consider those to be the main goals.
For me, one of the most important goals is developing the "art of rationality". A lot of discussion on LW is not about simply applying existing techniques of how to be rational but about developing new concepts. A while ago someone complained that he read an LW post about how to estimate, as a probability, whether a woman will say yes when asked out on a date.
If you think the goal is effective action, there are a lot of reasons why that's not a good way to approach the subject of asking out a woman. If, on the other hand, you care about how probability estimates are made in emotionally charged real-life situations, the inquiry is a lot more interesting.
When it comes to gathering new members, quality is more important than quantity. At our Berlin LW meetup we could trivially increase attendance by putting it on meetup.com. We don't, and as a result we have a meetup with the kind of people who find the event without having to check meetup.com.
So, what’s stopping you from posting new topics yourself?
Exactly the reason why I posted. Nobody wants to grow a big community by destroying its quality. That's the main topic of the course I recommended.
I don't think that having more meta-threads on how the community can be improved provides the kind of content that brings LW forward.
I don't think that having an extra section for meta conversation about how LW could be improved would be a move in the right direction. Especially for people with low karma.
Funny you say that, because you want LW to go forward, no? I got that from your wording. However, you want to avoid talking about it, and you want to proceed doing what you feel you need to do, which is making posts on whatever you want to post about. You don't want to do it deliberately; you want to let it happen on its own. I think that if you thought for 5 minutes on this topic (the improvement of LW), you would not hold this opinion.
I will continue from the position that you want LW to improve. Do you claim that staying the same will improve LW or its members? Do we agree that LW has to change, as well as its members, in order to improve? Do we agree that change has to be deliberately chosen to be an improvement? Do you claim you can do it by gut feeling? Do you claim that the course I offered as a resource is false in any way? If so, please refer me to the counterargument. Do you claim organization of the site would not be good for its members? Do you claim segregation of topics would not organize this site? Do you claim this solution is not feasible?
I tried to be rigorous towards the arguments you offered, and not harsh towards you. I love you, and I hope this conversation will do both of us good.
Please stop this. Unusual familiarity in the context of disagreeing with someone, is condescending and an insult. And no, it doesn’t fool everyone, either.
If you don't mind, explain; I honestly don't get it. I don't see how that can be an insult. If I wanted to insult him, I would do it much more simply. My reasoning was: there is no way he can see my face or my expressions, so he could perceive me as aggressive although I am not. I wanted to put an end to any idea of aggressiveness. I want to reach an agreement, not to have an endless conversation just because I am perceived as aggressive and because I don't look like someone you want to agree with. I wanted to show open palms or something, but I cannot do that on a keyboard. So in order to express my attitude, I said that, to point out my friendly attitude towards him in contrast to my rigorous attitude towards his arguments. Besides, it's not even a lie. My vocabulary is maybe too small, and "love" has a rather wide meaning, but I do feel some kind of brotherly or sisterly love towards everyone.
Funny thing, the previous person with minimal earlier site presence who came in with “here’s how to start properly growing Less Wrong” was also kind of tone-deaf and annoying.
You are just being rude now. You straight up tried to insult and discredit me; you did not even try to hide it.
I never said "this is how to …". I offered a course specialized in that topic. I offered material. I don't own that course; I just thought it would be useful to people who try to get more members here (I've met a few of them, so I expect there are more). I properly separated what is my idea from what is the course's.
LW doesn’t have a culture where trying to hide what you want to communicate is valued.
Insults don’t work that way. And I don’t think you seriously believe that just because someone’s insult could be simpler, it isn’t an insult.
Besides, just because the explanation for an insult isn’t simple doesn’t mean the insult isn’t simple.
The only way you can end aggressiveness is by not being aggressive. You can’t be aggressive and then add something at the end to make up for it.
It has a meaning based on context, and the meaning based on context is not “I love all human beings”. This ought to go in “geek social fallacies” if it isn’t already; “it’s literally true” is not an excuse. Words have connotations and implications, and it is your responsibility to understand them.
Edit: I believe that if it is plain there is no intention of insult, insult does not exist.
I plainly said that I wasn't aggressive towards him, that I was afraid my words could be interpreted like that, and that I wanted to cut that possibility off with the words "I love you".
I have a positive record with that tactic; people have understood my attitude that way in the past, and I expected it to work again. It didn't, and that is fine. Maybe I need a better technique to express myself, but that is a different topic now, and not important, since I expressed everything explicitly afterwards (and now again). Why are you still behaving as if I wanted to insult him with those words? You clearly see I don't have that intention.
Yeah, it is, but today I don't understand. The only thing I see is related to "mother love" and to sounding like I am above him, but I can't make a clear case out of it. And understanding is not like dirty dishes; I cannot just decide to understand and do it in a few minutes. Maybe I'll read something on that topic this year. I think it's your responsibility, if you see both what I expressed and what I wanted to express, to take what I wanted to express (you can even warn me that I expressed it wrongly, if you feel so).
You can’t add a disclaimer “Don’t interpret my words as aggressive” and then be able to say anything at all without it being interpreted as aggressive. After all, anyone who does have aggressive intentions and just wants to lie about it can use any disclaimer that you can use.
Even as a disclaimer, “I love you” is not very good.
The reason why it is insulting is that using excessive familiarity with an opponent is a type of insult. I’m sure I could find some explanation for it (sounding like you’re above him is pretty close), but really, you just need to know that certain things are insults. Understanding why they are insults is less important than understanding that they are insults.
Well, you can often interpret someone's words many ways, simply because you cannot see the person and cannot get information about their emotional state. So I think you can write something that can be understood many ways and add a disclaimer: "it's not the other thing".
Regarding that: I never saw this before. Maybe we are from different areas, and this could be explained as a cultural difference. First, I was taught never to approach an opponent as an enemy, and to always keep in mind they are like me (meaning they are human, with feelings, ideas, goals, hobbies, experiences) and not empty, emotionless, evil, etc. Furthermore, I approach discussion as a cooperative activity, since its purpose is to improve both me and the person I'm discussing with, and to give us both insight into something new. That's why I never saw "familiarity" being frowned upon, since that behavior highlights those two mindsets.
However, I acknowledge there are people with different backgrounds, who have a different approach (different, not opposite). And now I acknowledge some people could perceive familiarity as an insult. Would you mind explaining to me how that insult works? I don't even have a feeling for it. The closest thing I ever encountered was a middle-to-upper-class old lady meeting a homeless person who says something along the lines of "we are the same", and then she stops him to say they are not the same, etc., but that is pretty far from this case.
Edit: formatting and spelling.
You can try, but it’s unlikely to work, because anyone who wants to be aggressive and lie about it can say exactly the same thing as you.
Yes, I’d mind. Because you can reply “I don’t understand your explanation (or I think your explanation is wrong), and since I don’t understand it, I can keep using the insult”.
That’s wrong. You need to stop using insults whether you understand why they are insults or not. You can’t use “I don’t understand the insult” or “nobody has explained it to me properly” as an excuse to keep saying it.
You avoid some words only if you think the other person would classify them as an insult (unless you want to insult them). If you don't know someone classifies something as an insult, you might use it by accident.
There is a set of rules that I use to recognize an insult (which I have gotten from my culture). You have one; probably everybody has some general set of rules. If my set of rules does not classify something as an insult, I will think it is safe to say it.
If it happens that I say something which you consider an insult and I don't, then unless I understand what it is about, I will need to remember "never tell Jiro that you love him" (I simplify for the sake of brevity, but there are other parameters inside that statement). I assume there is an underlying explanation behind your rule. This is probably not the only thing you would consider an insult and I wouldn't. Maybe you would consider "I hate you", "I like your dog", "You love me", or whatever an insult, but I cannot deduce that from "never tell Jiro that you love him".
Help me. I literally see chaos in your statements. I cannot deduce anything better than "Jiro (and maybe the culture he comes from) is quite different from the people (cultures) I have faced already". I don't know if you can imagine that state of knowledge about something. It's mostly empty, with only one example.
Well, now you know.
What I am trying to avoid is:
I explain why the statement is an insult
You think “that’s not a very good reason” or “that’s not my motivation”.
You decide that because you don’t think the reason is very good, you can keep using it. Or you decide that because your motivation doesn’t match the reason, you can keep using it.
If you think I’m the only person who sees such things as insulting, and that the cultures you have already faced do not, you haven’t been paying attention.
You suggested that I likely haven't spent 5 minutes thinking about the subject, which shows complete ignorance. That might be an honest belief, but it is certainly not rigorous.
Every regular of LW has thought about how to improve LW for more than 5 minutes. In my case, I have even written tons of posts about the subject.
There are two ways to read this.
(1) You felt personally attacked and wanted to retaliate.
(2) You lack basic understanding.
It’s very hard to read what you wrote and not come to the conclusion that (1) played at least a part.
And you failed. You communicated in a way that doesn’t signal what you want to signal.
It's interesting that you ask that question while ignoring my request to define what you mean by LW.
If you searched for the posts that I have written on the topic, you would see that I have spent many hours thinking about it.
Basically the only way for you to hold this misconception is by being ignorant of prior discussions on the subject on LW. The fact that you haven’t read them suggests to me that you have thought relatively little compared to the amount of thought I put into the subject.
I think having too much segregation of topics is one of the reasons why the QS forum doesn't work (where I'm a moderator but lost this argument).
I say that as someone who did moderate a big personal development forum for 4 years and who has been asked for advice by other people starting personal development forums.
How much experience do you have in shaping online communities?
No, I don’t believe in intelligent design.
On LW everybody is free to start a new thread without having a debate about starting it. Karma votes then show whether the community likes the new thread or not. It's sometimes worthwhile to express arguments for why you vote the way you do, but no group design process is needed at the start.
Do-ocracy is a decent organisational concept.
Karma shows to what extent people contribute to this community. People who don't contribute to this community don't have the same standing to tell other people on LW to change LW as people who do contribute.
Goals of a community aren’t supposed to be set by outsiders.
It says at the top of the page that this community exists for refining the art of rationality. You don't care for that goal. You also don't care for the AI discussions, which are a main reason LW was founded.
If you want to have something different than what this community is about then why don’t you start something different?
Everyone who’s like me already knows that LW exists.
Do you want to bet?
It's hard to define the terms of that bet. Here is what I am pointing towards:
I did hear of LW in multiple different contexts online.
I heard it recommended at a CCC event.
I know two people who attended local LW meetups whom I met at QS events.
The week before the first LW Community Camp, a 99% match turned up on OkCupid. It was a woman who was in Berlin for the LW Community Camp. If I hadn't known about LW, that event alone would have made me check it out.
Not having heard of LW would mean that I would have quite different ways to consume information and hang out with people.
You still need only one person outside LW to be like you for you to be wrong. Although I don't know who you are (except that you are a member of LW), there are a lot of people in this world (and LW is not the only source of rationality).
That’s arguing with semantics instead of arguing with substance.
A person who lives in a village in Africa might have similar genes to mine, but they don't live in the same culture as I do. The fact that I discovered LW is a function of the culture I'm exposed to.
Rationality isn't the only thing that makes a person like me be like me.
That is arguing with substance. Say there is a probability x that someone is like you (talking about your personality, not genes). Let N be the number of people outside LW. In the first approximation, where "person is a member of LW" is independent of "person is like you", the probability that you are right is (1-x)^N, and the probability that I am right is 1-(1-x)^N. If N is big, my probability goes to 1 and yours goes to 0.
Now, you can say my approximation is false, which it is. LW influenced you, etc., so there is a correlation. However, unless the correlation is 1, there is still some probability for someone to be outside of LW and like you, and for a large N my probability still goes to 1 and yours still approaches 0, exponentially. Now, you could narrow the category by demanding more similarities, so that this growth would not be strong enough to make up for the smallness of x. But since we are talking about someone who could contribute to LW as much as you (edit: and who would like to develop the art of rationality), you can't shrink x too much.
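A minimal sketch of the arithmetic (the values of x and N below are illustrative, not estimates of anything):

```python
def p_at_least_one(x, n):
    """Probability that at least one of n independent people is 'like you'."""
    return 1 - (1 - x) ** n

# Even for a tiny per-person probability x, a large enough N
# drives the overall probability toward 1:
for x in (1e-6, 1e-7):
    for n in (10**6, 10**7, 10**8):
        print(f"x={x:g}, N={n}: {p_at_least_one(x, n):.5f}")
```

For x = 10^-6 the probability is already about 0.63 at N = 10^6 and about 0.99995 at N = 10^7, which is the "exponential" point being made above.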
It is pretty shitty that someone is downvoting you; you are just making the very common mistake of underestimating exponential growth. They could at least tell you what mistake you made.
The problem is that you don’t focus on the intent of the statement. You try to find a meaning in the statement that’s wrong and then focus on that. That goes against the idea of “refuting the central point”. Instead of trying to understand where I’m coming from you assume that I haven’t thought about what I’m saying.
“Like you” is a very vague category.
There's a good chance that you are engaging in the typical mind fallacy. Your personality is more or less normal, and therefore there are a lot of people like you outside.
My own personality is not normal but shaped by contexts. It's shaped by things like doing QS community building, where I explained to journalists why QS is the new thing. It's also shaped by Danis Bois's perceptive pedagogy.
That’s not what “like me” means. A professor of psychology is in many ways not like me but he might still contribute to developing the art of rationality.
My argument doesn't rest on the fact that LW influenced me. The QS community is not the LW community, even if it's no accident that I met LW people there.
That’s still the kind of passive aggressive communication that Jiro complained about.
I’d also like to see targeted interaction and outreach to the academic research community.
GiveWell has a good model of validating and checking intuitions against prominent people in development, but seems to opt for public intellectuals over less famous experts in the field whose thinking those public intellectuals may defer to. In the EA community, I feel this has led to such confidence in deworming, when deworming is actually one of, if not the most, controversial topics in academic impact evaluation (nicknamed the "worm wars"). And DALYs are the pariah outside of specific subcommunities of impact analysis looking to the future, not immediate use.
There may be many similar misunderstandings in the rationality community which are taken for granted. But unlike the EA community, the rationalist community seems to be less transparent. MIRI's technical research agenda is still secret, among other things.
By contrast, I can go on GiveWell, which in some ways isn't part of the EA community so much as the inspiration for it, and see how they think, their motivating influences, and their reasons to be skeptical about charity cleanly laid out, without even going into their methodology. Be warned, ordinary readers: I'm playing the critic here. MIRI is much more technically complicated than GiveWell; I'm just trying to give criticism to be constructive. Path dependence and the novelty of MIRI's agenda, among other things, are obvious barriers to doing things the EA way in the rationalist community.
Btw, I think you've misspelled "community". Some members of the community seem really neurotic about that sort of thing, and it would be a shame if you were downvoted or missed upvotes for something as trivial as that.
Thanks for the warning. I forgot to check the title. Grammar-Nazis always lurk for that fresh non-native-speaker flesh.
Grammar is the mind-killer. Once you know which spelling is correct, you must attack all words that appear incorrect; otherwise it’s like stabbing your teachers in the back—providing aid and comfort to the uneducated.
People go funny in the head when talking about grammar. The evolutionary reasons for this are so obvious as to be worth belaboring: In the ancestral environment, spelling was a matter of life and death. And sex, and wealth, and allies, and reputation… Writing “execute not, release!” as “execute, not release!” could let you kill your hated rival!
If you want to make a point about science, or rationality, then my advice is to not choose a contemporary language, if you can possibly avoid it. If your point is inherently verbal, then use the French of Louis XVI's era. Language is an important domain to which we should individually apply our rationality—but it's a terrible tool to learn rationality, or discuss rationality, unless all the discussants are already rational.
/s
If you use the French of the Louis XVI era, nobody will understand you. It isn't clear that avoiding politics will lead to a similar lack of understanding.
I think it is important to ask why people go to LessWrong: whether it is perceived primarily as a place where you go to improve your rationality that happens to be an internet forum, or as an internet forum where you can read interesting things, such as things about rationality (I think that experiencing an intellectual journey is somewhere in between, but probably closer to the latter). Because there are a lot of large forums where you can read a lot of interesting things—for example, r/askscience and r/askhistorians have hundreds of thousands of subscribers and a lot of contributors who produce huge quantities of interesting content.
A place where people go to improve their rationality can take many forms. It doesn't even have to be a blog, a forum, or a wiki. If I allowed myself to be a bit starry-eyed, I would say that it would be really interesting if, for example, LessWrong had its own integrated Good Judgement Project, or its own karma- (or cryptocurrency-) denominated prediction markets. Of course, ideas like these would require a lot of effort to implement.
The signal-to-noise ratio there tends to be poor.
I recently made a post, Less Wrong Lacks Direction, that seems very similar to point 3. Less Wrong has moderation, but it doesn't have leadership. There is almost no concerted group action. Everyone has their own ideas of what Less Wrong needs to do next, and they are all different.
I think Why our kind can’t co-operate is an excellent article. Whenever you post an idea, you might get a few upvotes, but you’ll also get a lot of comments saying that something else is a better idea instead.
The reason why Less Wrong is getting fewer readers is that Less Wrong has much less content. Either 1) we need someone crazy like Scott Alexander who will solo-produce huge amounts of content, or 2) we need some way to encourage the posting of new content. The ability to create separate sections of Less Wrong would be a great way to increase the amount of content posted. The programming wouldn't even be that hard; the issue is that the moderators still haven't commented on whether they'd turn it on if someone went out and built it.
To be honest, that’s basically why I post in open threads. I want other people to tell me something else is better, or that I’m missing something, etc. I don’t recall ever being offended by that, but I have been disappointed by how few replies I’ve received.
I was not talking about "this elevation is higher than yours". I mean, if you've got a better idea that solves my problem more efficiently, thank you very much. I was talking about ideas at the same level. You need hot chocolate, which is hot and sweet, but all you have is coffee and lemonade. You can drink coffee, which is warm but bitter, or you can drink lemonade, which is sweet but cold. Someone says take the lemonade, someone says take the coffee. In the end, it doesn't matter which one you take; both solve the problem partially, but the community will take neither, because neither is perfect. Translated to the case I wanted to cover: let's say part of the community wants to improve the world we live in. Some think building better government will help, some think doing research and improving technology will help, some think we should start at the bottom and help those in the greatest need. They will fight each other, although they could give each team its own task and do it that way. This is my long-term intention: to make teams. Each member could help solve the problems he feels he could solve, without spam from the other teams, etc.
In the end, I just wanted to help solve our inability to organize.
I was confused by casebash’s reply, but your explanation of the same suggestion makes sense.
Pointing out when we are engaging in analysis paralysis and thus becoming less effective would be a good habit, I think. I’m not good at it, but I’ll see what I can do.
Constructive feedback is great—except when you're trying to actually get something done. Often it is better to go with a less-than-perfect plan than to do nothing at all.
I agree with you completely. I just want to point out that LW lacks directions. It's complete bullshit that we should all focus on one thing, and having all the directions interfere with each other just makes it harder to do anything sensible.
Why do you believe that separate sections will increase the amount of content that’s posted?
He just gave you a reason.
If you organize content, you get rid of that sort of thing. Imagine going to the math subreddit on reddit and commenting on some theorem, "yeah, but it's better to develop a new political system than to solve these equations". It's just bizarre, and for a reason: not everyone in this world should be solving the same problem.
I actually appreciated ChristianKl's question; I didn't answer it as well as I could have.
Because different sections could have different rules or norms on what kind of content is acceptable. Sections wouldn't necessarily increase the amount of content by themselves, but they would if they were well selected. Take, for example, an off-topic questions section. Some conversations already occur—via special threads—but if there were a separate section, many more would happen.
What kind of off-topic discussion do you think would be good to have that doesn't already happen in the open thread or the stupid questions threads?
The stupid questions threads only happen once every few weeks.
But I’d love to see separate areas for politics or social skills.
As far as a separate area for politics goes, we had a separate recurring thread for it for a while. Now we have Omnilibrium. What's wrong with those solutions, and why do you want a new section instead?
As far as social skills go, how about opening up a new recurring thread for them, or specific threads on subaspects? Threads already seem quite successful at establishing different rules and norms.
I’m not a fan of Omnilibrium’s UI, but I guess that’s the lesser issue. The bigger issue is how often do people actually post there? How active is the community? I suspect it’s not going to be very large because it’s a separate site that people have to visit.
If that’s the key problem we might add the Omnilibrium threads to “Recent on Rationality Blogs”
Not just that, but you also get a lot of comments nitpicking a minor detail that hardly affects the main points. For me, at least, that sort of response discourages posting anything that isn't perfect (which nothing ever is).
You don't necessarily need one person. The Sequences started due to a conversation between Yudkowsky and Hanson.
I am a lazy and selfish person. I want to get more rational myself, but I don’t want to put any effort into helping others become more rational.
That's cool. Do you want everything in this forum sorted, so that you can choose which topics you want to read? If yes, contribute to that idea; it will help you.
I hope you get rational, cure death alone, and spare me the effort. I'm lazy and selfish as well, and I'm better at it than you. /s