LessWrong has a team of six (which includes site development and support, the Alignment Forum in collaboration with MIRI and the EA Forum in collaboration with CEA, plus some assorted smaller projects). We get some funding from small donors, but the majority of funding is from a few large donations. We chose which donations to pursue in part based on which donors would best preserve our independence, and don’t talk to them about site directions and decisions very much. We are currently adequately funded and not actively seeking more donations.
Yep, for more context, most of our funding comes from the Survival and Flourishing Fund and the Open Philanthropy Project.
Again, thanks for your replies, though I’m still not sure what to make of them.
On the one hand, I agree that independence is a good thing (even though I may sometimes disagree with some people’s independent decisions). On the other hand, I have deep reservations about charities that in a sense allow governments to evade their appropriate responsibilities to the citizens of their nations. Especially in the case of serious problems, it shouldn’t be a matter of luck (whether the victim stumbles across a helpful charity) or of the willingness and ability to actively beg for help. (Food is an obvious example. Some people would rather starve to death than beg.) On the third hand, I think there are multiple constituencies here (within LW), and each person and each group of people has different priorities, objectives, etc.
Several more hands, but let me try a few exploratory questions instead. Which “constituency” do I belong to (from your LW team perspective)? How should I properly express support for or concern about “developments” (on LW)?
BTW, I think I like the leisurely atmosphere of LW. However, I may be projecting due to my recent externally forced shifts of priorities (which are also obliging me to give LW a relatively low priority). But on the fourth hand, I am also having trouble figuring out what material on LW is still relevant even though it is old. LW kind of feels like a virtual book in the process of formation, with various chapters in various states of completion… (The longest chunk of my career was technical editing for a TLC, but the research lab didn’t publish many books. Some chapters and dissertations came across my desk from time to time, but mostly just conference papers and HR stuff.)
Charity vs. government: both have big disadvantages. Charity depends on luck, and on the victim being “popular” in some sense. Government depends on politics, and dealing with the bureaucracy is sometimes almost as humiliating as begging.
This said, projects like the “new Less Wrong website” should not, in my opinion, be paid for by the government. It is something that serves a specific group, which can pay its own expenses.
I am also having trouble figuring out what material on LW is still relevant even though it is old. LW kind of feels like a virtual book in the process of formation, with various chapters in various states of completion...
Just to avoid possible confusion, the team is paid for developing and maintaining the technical infrastructure, not writing the articles. The articles are all written by volunteers. So if you were worried about independence of content from sponsors, I hope this helps.
Making the corpus of old articles easier to navigate is a known problem, and there are several attempts to solve it: wiki, tags, books.
The wiki could in theory be as organized as you want it to be. In practice, it seems to be ignored, as the main attention is on the articles, and the wiki is almost a separate project. (But recently it was integrated with tags.)
Tags provide an overview of topics, catalogue articles by topic, and let you find articles similar to the one you are reading.
The best articles from 2018 were published as a book, and the same is planned for the following years. So if you joined recently and want to quickly get an overview of the “best of Less Wrong”, I would recommend reading the Sequences (web; PDF/epub/mobi) and the 2018 book.
Again, thank you for your thoughtful reply. I feel like I’m trying to use a depth-first response strategy and it’s making it harder for me to see what is really going on.
I think the most interesting problem raised in your response is the integration problem. If people are just contributing their thoughts because they want to, then they don’t really have much incentive to do the hard work of integrating their thoughts into the thoughts of other people. If Wikipedia is able to accomplish that kind of integration to a fairly high degree, I think it is due to their guiding principles, and right now I don’t understand the principles of LW. I can definitely say (based on many years of professional work) that it’s hard work and I was well paid for my efforts in making technical papers (up to dissertations) more cohesive and integrated with previous research results.
My newer view is that LW is almost like a form of performance art, with the contributors in the role of artists.
What could LW do to encourage more integration of the content? I see it largely as a search and editing problem. Projecting again? At least I think it would be nice if LW was looking at what I am writing and searching for related content, perhaps showing candidates over in that empty space on the right side (of my biggest display). Then the editing problem would be supporting me in integrating my new content into the older content so that I could help extend or clarify that material.
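To make that concrete, here is a minimal sketch of the kind of lookup I am imagining (purely hypothetical, nothing LW actually runs; the related_posts helper and the toy corpus are invented names, and TF-IDF is just one obvious way to rank candidates):

```python
# A rough sketch (not LW's actual code): rank existing posts by textual
# similarity to a draft, so candidates could be shown beside the editor.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def related_posts(draft, posts, top_n=5):
    """posts maps title -> body; returns the top_n most similar titles."""
    titles = list(posts)
    vectorizer = TfidfVectorizer(stop_words="english")
    post_vectors = vectorizer.fit_transform([posts[t] for t in titles])
    draft_vector = vectorizer.transform([draft])
    scores = cosine_similarity(draft_vector, post_vectors)[0]
    return sorted(zip(titles, scores), key=lambda ts: ts[1], reverse=True)[:top_n]

# Toy example: two made-up posts, one draft sentence.
corpus = {
    "Karma and voting": "How karma, votes, and strong upvotes work on the site.",
    "Tagging old posts": "Tags group old articles by topic for easier navigation.",
}
print(related_posts("I am writing about karma and reputation systems", corpus))
```

Candidates ranked this way could then be refreshed as the draft grows and shown in that empty right-hand space.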
But why would I make the effort? Obviously when I was paid to do that kind of thing, the answer was easy. Because I was doing it for money. Trickier to justify now. I think I’m mostly writing because it helps me clarify my own thinking about things. I also feel a sort of “teacher’s satisfaction” when I feel like I’ve “enlightened” someone. It would be nice if the system (LW in this example) offered me some way to track my contributions. I might even feel like I’d accomplished something if I found I had written 30,000 words last year. (Actually, I am tracking my writing, but without caring enough to run the totals. But I’d estimate at least 200,000 words/year. Probably less than 400,000?)
But there are many reasons for writing. I think some people write in hopes of getting famous. That may be linked to hopes of earning money, or even striking it rich with a bestselling book. Some people seem to write in search of attention or recognition. Then there are the trolls, some of whom seem to write to annoy people and get negative attention. (Why? Such motives are evidently beyond my ken.)
Anyway, I’ve wandered off again. My original intention in posing this question was rather different. I’m trying to figure out what sort of things I can ‘properly’ write about here on LW. My interests are pretty broad. I do feel like AI is a heavy concern, even a favored topic, here on LW, and that is probably related to the preferences of the donors or maybe the personal concerns of the “team of six” (artists?) who do the “site development and support”.
Maybe it would be good to try a list of topics and ask which are and are not appropriate for LW? For the appropriate topics, someone could help me figure out where they belong? Or even better if they have already been discussed exhaustively and I can just learn how to find those discussions? So here’s a short list of a few of the things on my mind these days:
Literacy development software (mostly for kids but with adult options (including multilingual))
Personal reputation systems (mostly to become a better person by understanding how other people evaluate my reputation (and I already wrote a bit about this on LW) but also to recognize (and filter) ‘untrustworthy’ sources of dubious information) [Should I link to that comment? But evidently anchored links are not supported here in replies?]
Time-based economics (which I tag “ekronomics” (and which is broadly related to this selfsame discussion))
New products (like smartphone hats and modular smart chairs and multi-mode super-bikes and timing-based continuous BP monitors (without pressure cuffs) and a Pokemon Chair app and...)
AI (but mostly I feel like it’s a pointless topic, since the answers are intuitively obvious to the most casual observer (such as the late and great Iain M Banks)) ;-)
Political reform (with some radical thoughts like no-loser guaranteed-representation elections with logarithmic weighting, and additional dimensions for new political checks and balances)
Now I feel like I’m wandering way too much, but I hope some parts of it were of some interest to someone. Right now I’m mostly just trying to figure out where my ideas fit on LW. If they fit anywhere? I just started with the ‘influence of money’ aspect, probably because I feel like I should pay for any value received and I hope to receive value from LW. (And of course payments don’t need to be monetary or even evaluated (with shoehorns) based on monetary equivalents.)
Wikipedia generally works fine, but occasionally problems happen. Sometimes obsessive editors are rewarded with power, which they sometimes abuse to win debates on their pet topics. As long as other similarly powerful editors don’t care, they are allowed to rule their little fiefdoms.
As an example, David Gerard, the admin of RationalWiki, is currently camping at the Wikipedia article on Less Wrong; most of his effort goes towards reducing the section on effective altruism and expanding the section on “Roko’s basilisk”… which itself is known mostly because he previously popularized it on RationalWiki. (Also notice other subtle manipulation, like the fact that the page mentions the political opinion of 0.92% of 2016 survey participants, but the remaining 99.08% is not worth mentioning.) I mean, just form your own opinion of how much the content of Less Wrong as you see it here actually resembles the thing described at Wikipedia. -- One guy, with a strong grudge, willing to spend more time fighting wiki wars than all his opponents together. ¯\_(ツ)_/¯
The principles of LW… well, originally it was a shared blog by Robin Hanson and Eliezer Yudkowsky; later (circa 10 years ago) Yudkowsky moved his part (which is commonly referred to as “the Sequences”) to a separate website, which enabled voting on articles and allowed other people to register accounts and post their own articles. So the principles are, de facto, “what the community, which has grown around Yudkowsky’s blog, approves of”. (Note that Yudkowsky himself, other than being respected as a founder, doesn’t currently have any special rights within the website, and he is gradually less and less involved; e.g. he posted only one article in 2020.) To see what the community approves of, stay here for some time and watch what gets upvoted and what does not; on a lucky day a comment may explain why.
My newer view is that LW is almost like a form of performance art, with the contributors in the role of artists.
I like this metaphor. But aren’t most web debates like this? By which I mean that the metaphor alone doesn’t explain how LW is different from the rest of the internet. Perhaps we should add that this performance is played for the same kind of audience that enjoyed Yudkowsky’s original blog.
I agree that posting links to related articles is useful, and also a lot of work, especially as the site has grown so much that no one can remember everything anymore. There are the tags below articles, which (I only noticed this now) display articles on the same topic when you move your mouse over the tag. You can look at the list of all tags, or you can use Google. I agree that this could be made more convenient, but I don’t see it as a high priority.
It would be nice if the system (LW in this example) offered me some way to track my contributions.
Like this?
I’m trying to figure out what sort of things I can ‘properly’ write about here on LW.
I think how is much more important than what. Of course it’s a bonus if the topic is related to artificial intelligence, machine learning, math, effective altruism, self-help, etc. But more important is not to write bullshit. Write about something you understand, or something you experienced; but generally something you care about. Don’t bluff; you never know what kind of expert will read and comment on your article; there are some pretty smart people here.
(As a rule of thumb, don’t write anything about politics unless you can reliably write articles on other topics that get something like 10 karma each; otherwise you risk making exactly the same mistake many people made before you, and invoking the anger of the community. Which, again, relates more to how people write than to what. Somehow, when people start thinking about politics, their IQ automatically drops by 50 points, and they produce exactly the type of content we are trying to avoid here. Yeah, there are exceptions, but notice how e.g. this differs from 99.99% of the political discourse you can find online.)
I do feel like AI is a heavy concern, even a favored topic, here on LW, and that is probably related to the preferences of the donors or maybe the personal concerns of the “team of six” (artists?) who do the “site development and support”.
Just ignore this completely. Other people already write about AI. If you have something insightful to add, go ahead, but if you are more interested in other things, don’t force yourself; it would help no one. I’d prefer to read a good article about how to train a pet lizard, rather than a mediocre essay on why artificial intelligence is this or that. (The worst case for a good off-topic article is that it will be ignored. A bad essay will be downvoted. And precisely because there are many AI experts here, a mediocre essay on AI would be perceived as bad.)
If you have an idea for a very important article, where a negative reaction would hurt you, maybe test the waters with something smaller first. My opinion (which may not be representative of LW as a whole) about your suggestions is:
1 - great;
2 - potentially interesting; I would recommend writing it from the perspective of “what I use” or “these are some existing systems”, rather than “this is the One True Way to do it”;
3 - potentially interesting, potentially bullshit;
4 - if there is a specific product you like and find useful, go ahead;
5 - don’t force yourself;
6 - depends… maybe leave this for later, and test the waters with other topics first.
By the way, links are supported in replies. Select the words, a context menu will appear, click on the “Link” button, and paste the URL. To get the URL for an article or a comment, right-click on the publishing date and copy the link.
I mean, just make your own opinion on how much the content of Less Wrong as you see it here actually resembles the thing that is described at Wikipedia.
The problem here is that the goal of Wikipedia isn’t to describe LessWrong as it’s seen by someone who goes to LessWrong, but to describe how LessWrong is seen by reliable secondary sources.
Thank you for another deep and thoughtful response. But what response should I make? [Note that the second-person “you” here refers to Viliam, but there is a risk of confusion if I say something to the broader (but unknown) audience. I’ll try to be careful… But in this discussion I am sure that I have already used “you” with reference to someone else. [I find myself wishing that English had a mechanism to avoid confusing “you” references without ponderous third-person descriptions such as “Viliam in his comment of <timestamp> said...”]]
The easy part is to pick a couple of nits, but I’m trying to get deeper than that… But when I back up (and look at the context) then the volume becomes overwhelming and I’m having trouble unraveling the topics. I do feel that part of the problem is my poor and unclear writing, but it is also true that I don’t understand how to use the system well.
So I’m going to focus on two nits here, one that reflects my lack of understanding of the system and one that reflects the lack of clarity in my writing. Then I’ll try to get back up to a higher perspective, which seems to be the karma thing… (But that topic is more related to my earlier reply on the karma “research” from the end of 2019.)
At the end of your comment, what you described is an interesting example of my lack of understanding of the LW system. Or maybe an example of my failing eyesight? I definitely knew that it worked exactly the way you described it for “top-level” content, but for several days I was apparently unable to see the fifth icon on the context menu when I was working on a reply (such as this one). But this is just part of a more general lack of familiarity with the system. Another example: A few minutes ago I spent several minutes figuring out that a “5m” notation meant 5 minutes ago, not 5 months ago, even though the article had an “11y” notation for the 11 years from 2010. The section heading of “Recent Discussion” should have made it more obvious to me, but now I wonder what the notation for 5 months ago would have been… (Relative times are good, but sometimes confusing.)
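For what it is worth, the ambiguity seems avoidable in principle; here is a tiny hypothetical formatter (my own invention, not how LW actually renders times) that reserves “m” for minutes and uses “mo” for months, so “5m” and “5mo” cannot collide:

```python
# Hypothetical relative-time formatter: 'm' only ever means minutes,
# months get the longer 'mo' suffix, so '5m' and '5mo' cannot collide.
def relative_time(seconds_ago: int) -> str:
    units = [("y", 365 * 24 * 3600), ("mo", 30 * 24 * 3600),
             ("d", 24 * 3600), ("h", 3600), ("m", 60), ("s", 1)]
    for suffix, size in units:
        if seconds_ago >= size:
            return f"{seconds_ago // size}{suffix}"
    return "now"

print(relative_time(5 * 60))                 # 5m   (five minutes)
print(relative_time(5 * 30 * 24 * 3600))     # 5mo  (five months)
print(relative_time(11 * 365 * 24 * 3600))   # 11y
```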
The other nit involves my poor clarity. I was already quite aware of the “this” link you posted to my user page and it does list my contributions, but not in the sense of “track” that I was trying to describe. There are also the pull-down notices invoked by the bell icon at the upper right. What I am currently unable to do is combine these views to get a mental image of what is happening. Where do my own comments fit into the discussion? What is the structure of the replies?
Is there a tree graphic representation of the discussions hidden somewhere around here? I’m imagining a node diagram with one color for my own contributions, separate colors for each of the primary contributors, and then a fallback color for grouping all of the minor contributors. Now I’m imagining solid lines for direct replies and dotted lines for links that go elsewhere. (If the 80-20 rule applies to discussions here, then at least the part with colors for contributors might work well enough with a reasonably small number of colors.)
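To make the imagined tree view a little more concrete, here is a small hypothetical sketch (invented names, not an existing LW feature) of the data it would need: a reply tree built from parent links, plus a color assignment that gives the handful of most active participants their own colors and lumps everyone else into a fallback:

```python
# Hypothetical sketch of the data behind such a tree view: a reply tree
# built from parent links, plus per-author colors (the most active authors
# get their own color, everyone else shares a fallback color).
from collections import Counter, defaultdict

def build_tree(comments):
    """comments: list of dicts with 'id', 'parent' (None for top level), 'author'."""
    children = defaultdict(list)
    for c in comments:
        children[c["parent"]].append(c["id"])
    return children  # parent id -> list of child ids

def color_map(comments, me, palette=("red", "green", "blue"), fallback="gray"):
    counts = Counter(c["author"] for c in comments if c["author"] != me)
    majors = [author for author, _ in counts.most_common(len(palette))]
    colors = {me: "black", **dict(zip(majors, palette))}
    return lambda author: colors.get(author, fallback)

comments = [
    {"id": 1, "parent": None, "author": "me"},
    {"id": 2, "parent": 1, "author": "frequent_replier"},
    {"id": 3, "parent": 2, "author": "me"},
]
tree = build_tree(comments)
color_of = color_map(comments, me="me")
print(tree[1], color_of("frequent_replier"), color_of("someone_else"))  # [2] red gray
```

Solid vs. dotted edges (direct replies vs. cross-links) would then just be an attribute on each edge when the tree is drawn.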
For whatever it is worth, I feel like this discussion itself is already beyond my ken. I feel like the lesson I am learning is that I need to learn to limit my questions MUCH more narrowly. (I have only looked at a few replies, and my available time is already becoming exhausted by this one reply. But was this the best place to begin today? (And now I lack the time (and musal energy) to return to the karma topic.))
Also I greatly appreciate the politeness of the replies and I feel like I am being indulged in my ignorance. In solution terms, how could I learn about the system without bothering other people? (Or is that intrinsically impossible in the context of a discussion system such as this?)
Hey, if you’re new here, it’s perfectly natural that there are some website functions you are not familiar with. I have been here for years, and there are still things I don’t know. Keep reading, and you will gradually get more familiar with how this all works.
A few minutes ago I spent several minutes figuring out that a “5m” notation meant 5 minutes ago, not 5 months ago, even though the article had an “11y” notation for the 11 years from 2010.
Good catch! I never noticed this one. (If you move the mouse above the abbreviation, the full date and time will be displayed.)
The UI you imagine probably does not exist. What you can get is (a) the list of all articles you posted, in chronological order; and (b) the list of all comments you made, in chronological order, with links to context. Both of them are on the same page, when you click on your name.
For me, this is quite enough, because the number of my posts will most likely never exceed three digits, probably not even two (though I wish the meetup announcements were displayed separately from the actual articles), and given the huge number of comments I have written over the years, I don’t believe I would ever want to see them all.
In solution terms, how could I learn about the system without bothering other people?
Maybe read articles with the Site Meta tag? Not all of them are related to what you want, but probably most of what you want is covered somewhere there.
Thanks for the lead to the “Site Meta” tag. I have that one open in another tab and will explore it next. However my general response to your reply is that part of the problem is that I would like to see different kinds of “tracking summaries” depending on what kinds of things I am trying to understand at a particular time.
You introduced a new example with your mention of “meetup announcements”. If you are trying to track your activity on LW in terms of such meetings, then you want to see things from that perspective.
What I have done in today’s experiment is to open all the “recent” notifications in tabs because it is not clear which ones are actually new… It would be helpful if the notifications pulldown list also showed the notification times (though the mouseover trick for date expansion also works for the relative dates on the floating summary that appears to the left of the notification when you hover over it). Overall I’m still having a difficult time grasping the status of this question.
Just rereading the entire “question” to try to assess it, and almost overlooked your [Viliam’s] helpful numbered list. I think I have replied as appropriate (if replying was appropriate?) and hope that the notification system will let me know if I should come back.
On the basis of your encouragement, I’m going to try to write something for the literacy software topic. I’m not sure on what basis you think it might be “great”, but I could not find much that seemed related in my search efforts on LW. The obvious searches did produce some results, but how they are ranked is still unclear. For example, I remember a “literacy” search with four primary results, but two of them were for narrow senses of literacy such as “financial literacy”. Before starting to write, I’m going to try searching from the list of tags. (It would be helpful if there were an option to sort by the numbers there… That way I could spot the more relevant tags more easily. (I’m guessing that the numbers are the authors’ usage counts for the tags, but there should also be a way to see the readers’ counts, to capture the other side of interest? (What people want to read about, in contrast to what people want to write about. (Yet another symmetry thing?))))
Backing up to the top level, I haven’t obtained much insight into the original question. I guess my summary of my understanding now would be “We’re sort of above worrying about money, so go have fun with the LW tools we are creating.” I think that summary reflects input from at least two of the creators of the tools. The users’ side seems to be “We’re having fun and that’s why we do it.”
Your summary seems correct.
Here is a part of LW history that may be relevant to the question of money and sponsors: the Less Wrong website you see is, from a technical perspective, already a third version.
The first version was Overcoming Bias, a shared blog of Robin Hanson and Eliezer Yudkowsky, which started in 2006. Being just two guys’ personal WordPress blog, I assume the costs were negligible.
The second version was Less Wrong, implemented on a clone of the Reddit code in 2009, which started by importing Eliezer’s existing articles. The initial software was free, but required some maintenance and extra functionality, which was provided by TrikeApps. TrikeApps is a company owned by Less Wrong user matt.
The third version that you see now, a complete rewrite of the code, was made only a few years ago. I couldn’t quickly find the exact year, but no earlier than 2017. This was the first version that was actually quite expensive to develop.
In other words, before Less Wrong started needing serious money to exist as a website, it already had more than 10 years of history. So there is a strong momentum. The people who donated money are presumably the people who liked the existing LW, and therefore their wish is probably to keep it roughly like it was, only more awesome. (The people who didn’t like the historical Less Wrong would probably not donate money to keep it alive.) The fans of Less Wrong, as a whole, are sufficiently rich to keep the website alive.
PS: You are taking this too seriously; probably more seriously than most users here. There is no need to overthink it. If you have an idea for a nice article, write it. If you don’t, just reading and commenting is perfectly okay.
The people who donated money are presumably the people who liked the existing LW, and therefore their wish is probably to keep it roughly like it was, only more awesome.
I think a better historical perspective would be that they liked what LessWrong was in its first years of existence, felt that LessWrong had declined, and felt that there was a potential to bring it back to its old glory and make it even better.
I feel like this branch of the discussion might be related to Dunbar’s Number? Either for total members or for active participants. Is there any data for number of participants over time and system versions?
However, I also feel like Dunbar’s Number is probably different for different people. Social hubs have large numbers of personal friends, whereas I feel overwhelmed by any group of 150. My personal Dunbar’s Number might be around 15?
I don’t think the history here is about Dunbar’s number.
My interests are pretty broad. I do feel like AI is a heavy concern, even a favored topic, here on LW, and that is probably related to the preferences of the donors or maybe the personal concerns of the “team of six” (artists?) who do the “site development and support”.
AI (but mostly I feel like it’s a pointless topic, since the answers are intuitively obvious to the most casual observer (such as the late and great Iain M Banks)) ;-)
LessWrong doesn’t focus on AI in general but on AI safety and AGI. Saying that the answers are intuitively obvious sounds to me like not understanding the questions, and why people consider the open questions to be open and interesting. Without understanding the questions well enough, I doubt that writing about the topic would lead to articles that are useful to anyone or well received.
It would be nice if the system (LW in this example) offered me some way to track my contributions. I might even feel like I’d accomplished something if I found I had written 30,000 words last year.
LessWrong shows you how much karma you get on your profile. That seems to me like a better metric than how many words are written.
Thank you for the reply, and I am also somewhat aware of karma. It does seem useful, but not in a searchable way. Per my suggestion for extended karma (one of my first efforts on LW), I wish that karma (in a multidimensional form) were usable for self-improvement, for filtering and prioritizing, and even for searching for people who are likely to write things worth reading.
I guess one helpful step would be if karma were included in the flyover display. Right now the “ChristianKI” flyover only reveals 4 dimensions of your identity: your identity’s age (joined date), # of sequences, # of posts, and # of comments. That gives me some idea of your activities, but isn’t as helpful (in my imagination) as a radar icon showing that you are above average on consistency and accuracy and perhaps below average in some other dimensions.
Consistency and accuracy are both dimensions that are hard to measure. I don’t see where you would get numbers for that.
Accuracy is relatively easy to assess. If you think someone is saying something that is false and you are reacting to the comment on that basis, then you should be able to cite appropriate evidence to that effect. (But the other person should be able to object to your evidence as part of a ‘proper’ MEPR system.)
I actually think most dimensions of the reputation system should be normalized around zero, so that if people tend to give more negative reactions, then the system should be adjusted to make it more difficult to give a negative reaction, such as saying a comment is inaccurate. (However, I also think the ratings should be weighted by the MEPR of the person making the rating. If someone has established a long track record of catching inaccuracies, then the likelihood that they are right is higher.)
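To sketch the weighting I have in mind (MEPR is my own coinage, and none of this exists on LW), each accuracy rating could be weighted by the rater’s own track record, with the aggregate re-centered around the community average:

```python
# Hypothetical MEPR-style aggregation: ratings are -1/+1, each weighted by
# the rater's track record, and the result is re-centered around zero by
# subtracting a community-wide mean.
def weighted_score(ratings, rater_weight, community_mean=0.0):
    """ratings: list of (rater, value) pairs with value in {-1, +1}."""
    total = sum(rater_weight.get(rater, 1.0) * value for rater, value in ratings)
    weight = sum(rater_weight.get(rater, 1.0) for rater, _ in ratings)
    return total / weight - community_mean if weight else 0.0

rater_weight = {"proven_fact_checker": 3.0, "new_account": 0.5}
ratings = [("proven_fact_checker", +1), ("new_account", -1)]
print(weighted_score(ratings, rater_weight))  # ~0.71: the proven rater dominates
```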
I agree that consistency is much trickier. Even in the case where I know the person has changed his mind on a topic, I would not regard it as inconsistent if there was good reason for that change. I think I might like computer support for something like that. How about a triggered search? “Show me this person’s comments about <target keyword>” and I could then look over the results to see if they are unchanged, evolving over time, or jumping back and forth.
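A minimal sketch of that triggered search (again hypothetical, with invented names) could be as simple as filtering one person’s comments by keyword and listing them oldest-first so any drift stands out:

```python
# Hypothetical consistency check: list one author's comments that mention
# a keyword, oldest first, so changes of mind are easy to eyeball.
from datetime import date

def comments_about(comments, keyword):
    """comments: list of (date, text) pairs for a single author."""
    return sorted((d, text) for d, text in comments if keyword.lower() in text.lower())

my_comments = [
    (date(2021, 1, 5), "I still think karma should be multidimensional."),
    (date(2020, 6, 2), "Karma could use a few more dimensions."),
]
for d, text in comments_about(my_comments, "karma"):
    print(d, text)  # chronological: the 2020 comment prints before the 2021 one
```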
But actually that is something I would like to apply to my own comments over time. I think I am fairly consistent, but perhaps I am deluding myself?