Lesswrong Potential Changes
I have compiled many suggestions about the future of lesswrong into a document here:
It’s long and best formatted there.
In case you hate leaving this website here’s the summary:
Summary
There are 3 main areas that are going to change.
- Technical/Direct Site Changes
  - New home page
  - New forum style with subdivisions
    - New sub for “friends of lesswrong” (rationality in the diaspora)
  - New tagging system
  - New karma system
  - Better RSS
- Social and cultural changes
  - Positive culture; a good place to be.
  - Welcoming process
  - Pillars of good behaviours (the ones we want to encourage)
  - Demonstrate by example
  - 3 levels of social strategies (new, advanced and longtimers)
- Content (emphasis on producing more rationality material)
  - For up-and-coming people to write more
    - For the community to improve their contributions to create a stronger collection of rationality material.
  - For known existing writers
    - To encourage them to keep contributing
    - To encourage them to work together with each other to contribute
- How will we know we have done well (the feel of things)
- How will we know we have done well (KPI: technical)
Technical/Direct Site Changes
Initiatives for long-time users:
- Target: a good 3 times a week for a year.
- Approach formerly prominent writers
- Place to talk with other rationalists
- Pillars of purpose (with certain sub-reddits for different ideas)
- Encourage a declaration of intent to post
Why change LW?
Lesswrong has gone through great times of growth and seen a lot of people share a lot of positive and brilliant ideas. It was hailed as a launchpad for MIRI, and in that purpose it was a success; at this point it’s not needed as a launchpad any longer. In the process of becoming a launchpad it also became a nice garden to hang out in on the internet: a place for reasonably intelligent people to discuss reasonable ideas and challenge each other to update their beliefs in light of new evidence. In retiring from its “launchpad” purpose, various people have felt the garden has wilted and decayed and weeds have grown over it. In light of this, and having enough personal motivation, I have decided that I really like the garden, and I can bring it back. I just need a little help, a little magic, and some little changes. I hope to make it the garden that we all want it to be: a great place for amazing ideas and life-changing discussions to happen.
How will we know we have done well (the feel of things)
Success is going to have to be estimated by changes to the feel of the site. Unfortunately that is hard to do. As we know, outrage generates more volume than positive growth, which is going to work against us when we try to quantify by measurable metrics. Assuming the technical changes are made, there is still going to be progress needed on the task of socially improving things. There are many “seasoned active users”, as well as “seasoned lurkers”, who have strong opinions on the state of lesswrong and the discussion. Some would say that we risk dying of niceness; others would say that the weeds that need pulling are the rudeness.
Honestly, we risk over-policing and under-policing at the same time. There will be some not-niceness that goes unchecked and discourages the growth of future posters (potentially our future bloggers), and at the same time some niceness that rewards trolling behaviour or fails to weed out bad content, which would leave us as fluffy as the next forum. There is no easy solution to tempering both sides of this challenge. I welcome all suggestions (it looks like a karma system is our best bet).
In the meantime I believe the general niceness, steelman side should be the direction of movement. I hope to enlist some members as coaches in healthy forum-growth behaviour: good steelmanning, positive encouragement, critical feedback alongside encouragement, a welcoming committee, and an environment of content improvement and growth.
At the same time I want everyone to keep up the heavy debate; I also want to see the best versions of ourselves coming out onto the publishing pages (and sometimes that can be the second-draft versions).
So how will we know? By reducing the ugh fields around participating in LW, by seeing more content that enough people care about, and by making lesswrong awesome.
The full document is just over 11 pages long. Please go read it, this is a chance to comment on potential changes before they happen.
Meta: This post took a very long time to pull together. I read over 1000 comments and considered the ideas contained there. I don’t have an accurate account of how long this took to write, but I would estimate over 65 hours of work has gone into putting it together. It’s been literally weeks in the making; I really can’t stress how long I have been trying to put this together.
If you want to help, please speak up so we can help you help us. If you want to complain, keep it to yourself.
Thanks to the slack for keeping up with my progress, and Vaniver, Mack, Leif, matt and others for reviewing this document.
As usual: my table of contents.
This feels wrong to me. I mean, I would like to have a website with a lot of high-quality materials. But given a choice between higher quality and more content, I would prefer higher quality. I am afraid that measuring these KPIs will push us in the opposite direction.
Reading takes time. Optimizing for more content to read means optimizing for spending more time here, and maybe even optimizing for attracting the kind of people who prefer to spend a lot of time debating online. Time spent reading is a cost, not a value; the value is what we get from reading the text. The real thing we should optimize for is “benefits from reading the text, minus time spent reading the text”.
I think the subreddits should only be created after enough articles for given category were posted (and upvoted). Obviously that requires having one “everything else” subreddit. And the subreddits should reflect the “structure of the thingspace” of the articles.
Otherwise we risk having subreddits that remain empty. Or subreddits with too-abstract names, or such that authors are confused about where exactly each article belongs. (There will always be some difficult cases, but if the subreddit structure matches the typically written articles, the confusion is minimized.) For example, I wouldn’t know whether talking about algorithms playing Prisoners’ Dilemma belongs to “AI” or “math”, or whether debates on procrastination among rationalists and how to overcome it are “instrumental” or “meta”. By having articles first and subreddits later we automatically get an extensional definition of “things like this”.
Perhaps we could look at some existing highly upvoted articles (except for the original Sequences) and try to classify those. If they can fit into the proposed categories, okay. But maybe we should have a guideline that a new subreddit cannot be created unless at least five already existing articles can be moved there.
Upvoting and downvoting should be limited to users already having some karma; not sure about exact numbers, but I would start with e.g. 100 for upvoting, and 200 or 300 for downvoting. This would prevent the simplest ways to game the system, which in its current form is insanely fragile: a single dedicated person could destroy the whole website literally in an afternoon, even without scripting. This is especially dangerous considering how much time it takes to fix even the smallest problems here.
EDIT:
It would be nice to have scripts for creating things like Open Thread automatically.
Definitely add PJ Eby to the list. I am strongly convinced that ignoring him was one of the largest mistakes of the LW community. I mean, procrastination is maybe the most frequently mentioned problem on this website, and coincidentally we have an expert on this who also happens to speak our language and share our views in general; but instead of thinking about how to cooperate with him to create maximum value, CFAR spent years creating their own curriculum from scratch, which only a few selected people have seen. (I guess a wheel not invented in the Bay Area is not worth trying, despite all the far-mode talk about the virtue of scholarship.)
Agreed. This is why I shut off main and forced everything into discussion—I don’t think we know enough about how LW will be used to partition things ahead of time. (I’m also pretty skeptical of doing a subreddit split on topics instead of on rules.)
Currently the limits are 10 for both upvoting and downvoting. We’ve already seen some innocent bystanders hit.
I think you’re underestimating the difficulty in getting up to 100 karma. (One comment made a while ago is that the fragility of the voting system—especially when it comes to serial downvoters—happens in part because of how infrequently good users vote. It is problematic when we exclude people with good taste who don’t contribute much, because that means the base of good votes to overcome is even shallower.)
The disabled buttons should have tooltips saying “you need X karma to vote”.
Maybe 10 is okay for upvoting, but there needs to be a sufficiently high limit for downvoting, to stop Eugine’s usual strategy of “post three quotes in the rationality thread, get a few upvotes, and immediately use the karma to harass others”. Higher costs of doing things increase the cost of avoiding bans by repeatedly making new accounts.
Our design docs for LW v2.0 are not written by Eugine, are they?
Well, if they cannot even solve the existing problems, then I have to predict that the existing problems will continue to exist, duh.
A different solution would be okay too. But some solution is needed. And I violate the virtue of silence again by saying that as a determined user, I could ruin the website in a weekend, even without scripting. (With scripting, I could make a program that ruins the website at any moment at a click of a button.) Eugine is really picking just the absolutely lowest hanging fruit; and fixing that already creates about 50% of moderators’ work. Multiply this by ten, and LW will not have enough manpower to deal with the problems. A script could multiply it by a million.
When a user has admin on, they can see the list of users who upvoted or downvoted a comment in the tooltip (which currently shows % positive).
All karma calculations use a ‘weight’ variable that’s stored per user, and can be adjusted at the userpage by a user with admin on.
That means shutting down sockpuppets is two clicks, and discovering them is a mouseover. The main uncertainty is how the weight variable will impact the performance of the site.
The “weight” only needs the values 0 and 1. And the value 0 can be achieved by disabling the buttons (which has negligible impact on performance) and removing the existing votes (which only happens once per user).
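A minimal sketch of how such a per-user weight could work, with hypothetical names; setting a sockpuppet’s weight to 0 nullifies its votes without touching anything else. This is an illustration, not LW’s actual code.

```python
def tally_karma(votes, user_weights):
    """votes: list of (voter_id, direction) where direction is +1 or -1."""
    total = 0
    for voter_id, direction in votes:
        weight = user_weights.get(voter_id, 1)  # default weight is 1
        total += direction * weight             # weight 0 nullifies the vote
    return total

votes = [("alice", +1), ("sock1", -1), ("sock2", -1)]
weights = {"sock1": 0, "sock2": 0}  # admin zeroed the sockpuppets
print(tally_karma(votes, weights))  # 1: only alice's upvote counts
```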
You need to define and specify the problem first. For example, you were saying that the 10 karma to gain the right to vote is too low—but e.g. in the weaponised scenarios that you mention the number of karma points is pretty much irrelevant. Setting this variable to 100 (or 1000) will not provide much defense.
As usual, step one should be to specify the threat model.
I think describing it publicly in detail is not a good idea. I have a model of attack which I believe is strong enough that within a day I could reduce your (or anyone else’s) karma to zero, without using my existing account (i.e. pretending that I am a completely new person, or that my old account was banned), and without scripting, assuming I spend the whole day doing this. With scripting, it is merely a question of clicking a button when the script is ready, and the script could be ready in a day or two. After having the script ready, the slowest part would be getting the first 10 karma for the new account, which is quite easy. Which is why I recommend making exactly this part more difficult.
After running the script, to undo the damage it would be necessary to find the account that did it, and make a script that removes all its votes. (Assuming the attack was done with one account. That’s not a reasonable assumption with scripting.) Judging by how “quickly” the support has reacted in the past, that would take about a month. Running the script again would take just one more click. Fixing the problem again would probably take a few days. There is a huge asymmetry of effort. And with a small modification, version 2.0 of the script could create hundreds of new accounts (now the slowest part would be the attacker typing the captcha for registering the new accounts), which would make defense impossible for a few months, until the necessary changes in code were implemented.
All I am asking for is to make this vector of attack more costly, by increasing a fucking constant. What else am I supposed to do to convince anyone? Do I have to produce a working prototype of the script? There is already enough information here for anyone to connect the dots.
My point is that increasing that constant is not a viable defence against the attacks. You are not putting up a roadblock, merely a microscopic speed bump that a capable attacker will not even notice.
Your suggestion is like prohibiting passwords consisting of a single character. Will it help in the case of really stupid people? A bit. Will it help in the case of people actually likely to mount an attack? Not at all. Does it create the impression that you’ve “improved the security”? Yes, and that’s the worst part.
In theory, the only values defensible from the first principles are 0, 1, and infinity. In practice, the difference between an hour and a week can be significant.
Actually, it can make the attack without scripting quite expensive, and having to write and debug a script can be an obstacle for many people. For example, I am quite tempted to make a proof-of-concept script and fire it at you just to prove my point; and I have already written scripts interacting with websites in the past; but I am still quite likely not to do it, because it would take me a few hours of work. Procrastination, trivial inconveniences, etc.
I believe that CAPTCHA would be a better analogy, because it is an amount of work that has to be done by the user manually, before they are given access to the full functionality. More specifically, it is like changing a one-character CAPTCHA into multiple characters.
Sure, but why do you want to take a roundabout-karma way about it? If you care about slowing attacks down, make it so that no account younger than X days can vote. If you care about a sockpuppet explosion, implement some checks on the front end so that no IP address can create more than Y accounts in Z days (yes, proxies, but that’s another speed bump).
However, I feel that all this distracts from a bigger point. LW is in crisis and some people even say it’s dying. This is not because LW is under siege from multiple accounts or sockpuppets. If Eugine goes away, LW will still be in crisis. While I’m not in general a big fan of YAGNI, I feel that it’s appropriate here. Focus on the important parts first.
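A rough sketch of the two checks suggested above (a minimum account age before voting, and a per-IP cap on account creation); all constants and field names are hypothetical placeholders.

```python
from datetime import datetime, timedelta

MIN_ACCOUNT_AGE = timedelta(days=7)   # X days before an account can vote
MAX_ACCOUNTS_PER_IP = 3               # Y accounts ...
SIGNUP_WINDOW = timedelta(days=30)    # ... per Z days

def can_vote(account_created, now=None):
    now = now or datetime.utcnow()
    return now - account_created >= MIN_ACCOUNT_AGE

def can_register(signup_log, ip, now=None):
    """signup_log: list of (ip, timestamp) for recent registrations."""
    now = now or datetime.utcnow()
    recent = [t for addr, t in signup_log
              if addr == ip and now - t <= SIGNUP_WINDOW]
    return len(recent) < MAX_ACCOUNTS_PER_IP
```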
There is more than one problem with LW. But for me this is just more reason to make one go away quickly by increasing a constant, and then focus on the remaining ones.
We’re disagreeing about whether increasing that constant will make the problem go away.
I don’t think straight number limits like this are going to work well. Let’s take two new users Alice and Bob, and stipulate that, using gaming terminology, Alice is a casual and Bob is an elitist jerk. Alice might well take a month or two or three to accumulate 100 karma in the course of her ordinary use of LW. Bob, being who he is, will minmax the process and get his 100 karma in a couple of days.
Managing the power gap between casuals and elite minmaxers is a big problem in multiplayer games and it doesn’t look like an easily solved one.
I think straight number limits give us the most usefulness for the difficulty to implement. If you have other suggestions, I’m interested.
If we are talking about the criteria for the promotion to the full vote-wielding membership of LW, you are not limited to looking just at karma.
For example: Promote to full membership when (net karma > X) AND (number of positive-karma comments > Y) AND (days when posted a positive-karma comment > Z).
Implementation shouldn’t be difficult, given how all these conditions are straightforward SQL queries.
A more general question is the trade-off between false positives and false negatives. Do you want to give the vote to the newbies faster at the cost of some troll vandalism, or do you want to curtail the potential for disruption at the cost of newbies feeling themselves second-class citizens longer?
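As a sketch, the three conditions above could indeed be a single query, assuming a hypothetical flat schema (a comments table with author_id, karma and created columns). LW’s actual store is a key-value layer over Postgres, so this is illustrative only; X, Y and Z are the placeholder thresholds from the comment above.

```python
PROMOTION_QUERY = """
SELECT author_id
FROM comments
GROUP BY author_id
HAVING SUM(karma) > %(x)s                                   -- net karma > X
   AND SUM(CASE WHEN karma > 0 THEN 1 ELSE 0 END) > %(y)s   -- positive-karma comments > Y
   AND COUNT(DISTINCT CASE WHEN karma > 0
                           THEN DATE(created) END) > %(z)s  -- days with a positive-karma comment > Z
"""
```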
Very funny.
If what should be straightforward SQL queries are too difficult to implement, LW code base is FUBARed anyway.
Anyone want to write another middle layer which will implement normal SQL on top of that key-value store implemented on top of normal SQL? X-D
A bit more seriously, LW code clearly uses some ORM which, hopefully, makes some sense in some (likely, non-SQL) way. Also reading is not writing and for certain tasks it might make sense to read the underlying Postgres directly without worrying about the cache.
I just posted an article to Main. Would you check & see if it appears there for you, too?
Hmm. It is visible to me; we put in a safety valve so that people could edit posts that were already in Main, but I’d have to look into more detail to see what happened.
(You shouldn’t have had the option in the dropdown, I think.)
But more content equals a higher chance that some of the content is worth reading. You can’t get to gold without churning through lots of sand.
Instead I think there should be decent filtering. It shouldn’t be sorted by “new” by default, but instead by “hot” or “top month” etc.
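LW runs on a fork of reddit’s open-source code, so a “hot” default is already available in principle. Reddit’s published hot ranking looks roughly like this (constants from reddit’s old sorting code; the LW fork may differ in detail):

```python
from datetime import datetime
from math import log10

EPOCH = datetime(1970, 1, 1)

def hot(ups, downs, date):
    # score dominates via log10, so early votes matter most;
    # the time term makes newer posts beat older ones at equal score
    score = ups - downs
    order = log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = (date - EPOCH).total_seconds() - 1134028003
    return round(sign * order + seconds / 45000, 7)
```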
I second this. In fact I would go further and say there should only be 1 or 2 distinct subreddits. Ideally just 1.
The model for this is Hacker News. They have only one main section, and no definition of what belongs there except maybe “things of interest to hackers”; it’s filled with links to all kinds of content, from politics to new web frameworks.
I think lesswrong could do something like that successfully. The only reason it isn’t is because (see above) new content like that is discouraged.
Lesswrong currently uses (a highly outdated version of) reddit’s API, so writing bots to do various tasks shouldn’t be too difficult, and doesn’t require access to Lesswrong’s code.
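For illustration, a bot posting an automated Open Thread might look roughly like this. The endpoint and parameter names follow the later public reddit API (/api/login, /api/submit, the “uh” modhash); the outdated fork LW runs may well differ, so treat every name here as an assumption to verify against the actual site.

```python
import requests

BASE = "http://lesswrong.com"
session = requests.Session()

# log in; api_type=json asks for a JSON response (reddit convention)
resp = session.post(BASE + "/api/login", data={
    "user": "openthread_bot",  # hypothetical bot account
    "passwd": "...",           # placeholder credential
    "api_type": "json",
}).json()
modhash = resp["json"]["data"]["modhash"]  # anti-CSRF token, reddit-style

# submit a self-post to the discussion section
session.post(BASE + "/api/submit", data={
    "uh": modhash,
    "api_type": "json",
    "kind": "self",
    "sr": "discussion",
    "title": "Open Thread",
    "text": "This is the place for short-form content and miscellany.",
})
```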
I don’t know about CFAR, but my sense is that if the LW community as a whole ignored PJ Eby it wasn’t because of Bay Area prejudice (what fraction of LW people have, or had, any idea where he lives?) but because the style of his writing was offputting to many here.
I mean, for instance, his habit of putting everything important in boldface, which feels kinda patronizing (and I think LW people tend to be extra-sensitive to that). And IIRC he used too many exclamation marks! The whole schtick pattern-matches to “empty-headed wannabe lifestyle guru”!
Having said that, I just had a quick historical look and it seems like from ~2013 (which is as far back as I looked) he hasn’t been doing that much, and hasn’t been ignored any more than other LW contributors. But perhaps he also hasn’t been posting much about his lifestyle-guru/therapist/coach stuff either. (I can easily believe that the unusual writing style goes with the self-help territory rather than being something he just does all the time.)
I think this is the main factor. I didn’t find his style offputting, at least to the degree others did, but I notice that I never went on an archive-binge of what he’d written.
The evidence against “empty-headed” is that his articles and comments often got highly upvoted on LW.
I am arguing not that PJE is in fact empty-headed but that his writing style may have felt like that of someone empty-headed and that, if in fact he was ignored and neglected, this may be why.
But I’m a bit confused now, because if his articles and comments were highly upvoted on LW I don’t think I understand in what sense you can say that “ignoring him was one of the largest mistakes of the LW community”. (Of course it could still be a mistake made by, say, CFAR.)
After noticing that procrastination is a serious problem for many aspiring rationalists, and that we have a domain expert on LW, the reasonable approach would be to invite him to give a lecture at CFAR seminars. (And then of course use the standard CFAR methods to measure the impact of the lecture.) Motivation is a multiplier; if the lessons actually work, CFAR would get a huge bonus not only by having these lessons for their students, but also by using them for themselves; and maybe even for the folks at MIRI to build the mechanical god faster.
If the negotiation fails, there are still backup options, such as having someone infiltrate his lessons, steal the material, and modify it to avoid copyright issues. (Shouldn’t be difficult. LW is accused all the time of inventing new names for the existing concepts, which is exactly what needs to be done here, because only names can be copyrighted and trademarked, not the concepts themselves.) But I would expect the negotiation to be successful, because PJE is already okay with publishing articles on LW, so whatever he is trying to achieve by that, he would achieve even more of it by cooperating with CFAR.
Maybe some kind of cooperation actually happened, I just haven’t heard about it, in which case I apologize to everyone concerned.
I sincerely believe that in his area of work, PJE is doing the same kind of high-quality work as Eliezer did in writing the Sequences. Joining high motivation with avoiding biases seems like a super powerful combo, like the royal road to winning at life. I am quite sensitive about reading bullshit, and the field of motivation is 99% bullshit. Yet PJE somehow manages to read all those books, extract the 1% that makes sense, and explain it separately from the rest. I have listened to a few of his lectures, and read his unfinished book, and I don’t remember finding anything that I would be ashamed to tell at a LW meetup. There are people who swallow the bullshit completely; there are also people who believe there is a dilemma between lying to yourself and being more productive, or refusing to lie to yourself at a cost of losing productivity (and then explain why they choose one side over the other). But PJE always separates the stuff that experimentally works from the incorrect explanation that surrounds it, in a way that makes sense to me.
Most wannabe rationalists avoid the emotional topics and pretend they don’t exist. The Vulcan stereotype exists for a reason, and many explanations why this is not how we do rationality feel like “the lady doth protest too much”. Our culture rewards trying to explain away emotions by using pseudomathematical bullshit such as “hyperbolic discounting” (oh, you used two scientifically sounding words, that’s neat; but you also completely failed to explain why some people procrastinate while others don’t, or why a short exercise can switch a person from avoiding work to doing the work). This is our collective blind spot; our motivated stopping before stepping on an unfamiliar territory. Back to the safety of abstractions and equations; even if we are forced to use equations as metaphors, so the actual benefits of doing maths are not there, it still feels safer at home.
Unfortunately, this is one of the situations where I don’t believe I could actually convince anyone. I mean, not just admit verbally that I may have a point, but to actually change their “aliefs” (which is by the way yet another safe word for emotions).
I have a lot of KPIs because I realise some will not be effective. As we know, what gets measured gets optimised for, which is why I think having so many different measures will help make it hard to select for the wrong goals. By at least watching all of them, I expect we are likely to be able to make progress.
Agree. But how? If you have a better metric for measuring that, I would gladly try to implement it; until then, I have come up with the best solutions I could.
I am thinking tags might be an easier-to-implement and stronger solution: maybe two layers of tags, one for “content tags” and one for “sorting tags”. The content tags can be anything (as per the current system). The sorting tags will be a set number of possible tags, clearly visible everywhere, for sorting posts by and posting into.
Some sorting tags could also be auto-assigned (e.g. at a +10 karma score), and then automatically aggregated into an RSS feed.
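A sketch of that two-layer idea, with hypothetical names: free-form content tags stay as they are, sorting tags come from a fixed set, some are auto-assigned by rule, and each sorting tag gets its own feed URL.

```python
SORTING_TAGS = {"main", "discussion", "meta", "link-post", "top-rated"}

def sorting_tags_for(post):
    tags = set(post["sorting_tags"]) & SORTING_TAGS  # author-chosen, validated
    if post["karma"] >= 10:
        tags.add("top-rated")   # auto-assigned by karma threshold
    if post.get("url"):
        tags.add("link-post")   # auto-assigned for link posts
    return tags

def feed_url(tag):
    # every sorting tag could get a stable feed (hypothetical URL scheme)
    return "http://lesswrong.com/tag/%s/.rss" % tag
```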
I like these ideas.
Yes and no; particularly no on meetups. I don’t want dead meetups to appear on the meetup schedule. I was thinking of an opt-in email: “The last time you planned this meetup was 2 weeks ago; would you like to set one for two weeks from now? Reply ‘yes’ to this email to confirm a meetup with the same location at this date and time.”
A weekly thread can be automated; a monthly thread would benefit less from automation. But it’s certainly an option.
PJ Eby added.
Edit: On second thought, if you want me to just remove those particular KPIs, or “discount their validity a lot”, I can also do that.
I’m wondering, but not sure, whether watching average karma would be a good idea.
Or maybe some curve that would transform article karma: something like articles with positive karma get “karma − 10” points, articles with zero or negative karma get a constant “−10” points, and we measure the sum of that. (The rationale is that we subtract a few points as a cost of time spent reading; but we don’t penalize the negative-karma articles too much, because skipping an article with −100 karma is just as easy as skipping an article with −5 karma.)
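The proposed transform, written out directly (a sketch; the −10 constant is just the example value above):

```python
def reading_value(karma):
    # positive-karma articles pay a flat reading cost of 10;
    # everything at zero or below bottoms out at -10
    return karma - 10 if karma > 0 else -10

# e.g. articles with karma [25, 8, 0, -40] score [15, -2, -10, -10];
# the -40 article costs no more than the 0 one, as intended
assert sum(map(reading_value, [25, 8, 0, -40])) == -7
```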
When was the last time you skipped an article, comment or post because its karma was negative? (Not that you are a typical user.) (And maybe this is worthy of a poll in the OT.)
This seems reasonable in general, aside from that minor quibble.
As one anecdata point, I do generally skip articles with much negative karma. I read via RSS, so I just hit ‘mark read’ on them. LW users are not big downvoters, most of the time, so if something has more than a few downvotes, I have found that I probably don’t want to read it.
And of course, comments with a score of −3 are hidden by default, so many people probably don’t read them.
Thank you very much for putting this together.
Great work!
A clarifying question—is this more of a “here are the changes that we’re going to make unless people find serious problems with them” kind of document (implying that ~everything in it will be implemented), or more of a “here are changes that we think seem the most promising, later on we’ll decide which ones we’ll actually implement” type of document (implying that only some limited subset will be implemented)?
I would like to see as many of these changes implemented as possible, in the order that breaks the fewest things.
I already culled a lot of the unpromising ones. (Yes, some of these are less promising; e.g. maybe having subs will create a ghost town.)
Re. “Reducing total negative karma” as a goal:
Negative karma is already less common than positive karma. This is good, since it would be bad if the “average user” couldn’t post. But without a justified target for what the proper amount of negative karma is, setting “reduce negative karma” as a goal isn’t reasonable. How do we know we don’t already have the right amount? Or too little?
There is an underlying assumption of what “negative karma” means. It hopefully means things like:
- this post is wrong
- this is a badly expressed opinion
and various other reasons for a downvote. If we assume that enough downvotes means we are not effectively communicating useful thoughts, then we want to minimise that. Of course this may not be representative if, say, our number of active accounts doubles in size; we should expect more negative karma as a factor of the number of people in the conversation.
As we know, what gets measured gets optimised for. I want to keep the measuring options various and several, so that we optimise in the general direction of less bad stuff and more good stuff.
But if we assume that enough downvotes means we are effectively filtering out the stupid stuff, then we want to maximize that.
I agree with PhilGoetz that “less negative karma” is a bad goal. It’s trivially reachable by eliminating downvotes, for example.
It’s a terrible goal on its own; it’s definitely not to be taken on its own. If not taken to the extreme of aiming for zero downvotes or something stupid like that, I think it represents a (maybe bad) measure of how disagreeable we are.
If you think it’s completely unrepresentative I will take it out; I was of the opinion that it can show something, and is worth checking up on, though probably not optimising for.
Yes, I think that’s quite right: the amount of negative karma might be a useful indicator (together with other indicators), but it’s not a good target for optimization.
This is not an unusual phenomenon.
No, it’s even simpler than that. Think about using salt in cooking—if you produce an oversalted dish that’s a problem that you should notice and fix, but talking about minimizing the amount of salt is silly (I’m talking gastronomically, not nutritionally).
I think there are two separate things going on here.
- It might (at present) be beneficial to reduce X, but the optimal level might not be zero.
- Treating X as a target for optimization might be harmful.
(Here X is “amount of salt” for your oversalted dish, and “amount of downvoting” for present-day LW.)
Addressing the alleged “too much negative karma” problem by prohibiting downvotes would be bad in both respects. But whatever target we might pick, aiming for exactly that level of downvoting and optimizing would likely give bad results, whereas picking a target level of saltiness in your dish and optimizing might work just fine.
The point is that you optimize for taste and let saltiness fall where it may. Similarly, LW should optimize for some metric of “goodness” and let negative karma be whatever it has to be to produce that deliciousness.
Of course. But that may be ill-specified and hard to measure, and something else may be a usable proxy.
Your (perfectly correct) point is that optimizing a poorly chosen proxy (e.g., minimizing the amount of salt) can produce very poor results. My point is that even if you have what looks like an excellently chosen proxy, as soon as you start optimizing it your (or others’) ingenuity is liable to turn up ways to improve it while making what you care about worse.
(None the less, proxy measurements are really useful. I believe we are agreed that at the very least they’re worth keeping an eye on as a rough guide, provided you also keep an eye on whether they’re ceasing to be useful proxies.)
That, however, is not the case here.
I agree. (Did you expect me not to? If so, I apologize for anything misleading in what I wrote.)
PhilGoetz said, “votes on posts in Discussion should count for more than votes on comments do. Maybe 5 points per vote?”
I agree. Posting this here so that it doesn’t get lost.
This looks reasonable to me.
Who is involved in this effort that has power to make any of it happen?
That raises the issue that “have more transparency about who runs the site and makes decisions” would be nice. The “About Less Wrong” page doesn’t say anything about who runs the site, who the mods are, anything of that nature. I have no way of knowing whether Elo is a site webmaster, or some guy tossing out ideas.
The current process involves pushing changes to the Git repository and being in contact with other devs via the slack. Trikeapps then double-check that the changes do not break anything (because they are responsible if the site goes down) and push them live.
Added to the doc.
This list exists but is not published in many places: http://lesswrong.com/r/lesswrong/about/moderators
I will make a top-level post asking for contributors.
There’s also the editor list here, which doesn’t seem to be correct (or at least, the various titles aren’t kept clear in an obvious way). I’m an editor (in that I can turn on admin and it says “Editor” when you look at my userpage), Eliezer looks like an editor when you look at his page, but he’s on the moderator list and I’m on neither? And NancyLebovitz doesn’t have any flair to distinguish her as a moderator on her userpage.
Updated the doc to include editors.
Very nice job! I added some edits, and encourage everyone else who cares about the future of LW to add their own thoughts and edit the document. I really appreciate you all leading this effort to change things for the better!
Clarity’s seal of approval
I didn’t downvote you and I intend no offense.
Assuming that you genuinely approve of this article, a seal of approval from a user with 50% positive karma is more like an anti-seal of approval, regardless of the validity of that user’s statements. Most humans will be inclined to smack you down for grabbing status that they don’t think you deserve, and they may even improperly use the quality and content of your comment as a substitute criterion for evaluating the quality and content of the parent article. It’s not to say that you should refrain from commenting at all, but that you should refrain from commenting if your comments lack cues that will lead readers to evaluate your comments and their parents slowly and deliberatively. The easiest way to do this is to write comments that can only be written slowly and deliberatively and to refrain from commenting otherwise.
Assuming that you don’t genuinely approve of this article, obscurantism and social engineering are unlikely to be productive and are discouraged.
Thanks for the comment; we have experienced this before (Clarity’s support and the following discussion). It might not be in everyone’s memory, but it is in mine.
http://lesswrong.com/r/discussion/lw/mmu/how_to_learn_a_new_area_x_that_you_have_no_idea/coby
At this point I am glad he posted. Clarity is still a member of the culture around here; even if he is not a high-karma member, he still participates regularly and has valid opinions on things.
Relax, it’s just a good article. No need to go too meta and derail this important article.
HN has a mechanism for giving an article your seal of approval: it’s called upvoting. More than that is only necessary if you expect your approval specifically to weigh more highly than that of other users.
Seeing comments from (say) three people who explicitly say that they agree or think I’ve done good work feels much better than just seeing three upvotes on my comment / post. I know that there are other people who feel the same. Our minds aren’t good at visualizing numbers.
I think that “if you are particularly happy about something, you can indicate this with an explicit comment in addition to the upvote” is a good norm to have. Giving people extra reward for doing particularly good work is good.
That’s a reasonable point. (And I have no inkling why anyone thought you should be downvoted for making it.)
None the less, it seems to me that if you really find an article particularly impressive then you should almost always be able to find something more specific to say than “I like this”, and that a better norm than “if you really like something, post a generic positive comment” would be “if you really like something, post a positive comment saying something about why you like it”. More useful feedback, more discussion fodder, less clutter, and (because of the small extra effort required) I think a better indication of actually having liked something substantially more than usual.
Doesn’t it feel nice to validate someone and express yourself?
There are many things that feel nice but that I prefer not to do in public without good reason :-).
Maybe I’m a starry-eyed idealist, or maybe I’m deceiving myself, but I don’t think my comments on LW are mostly made with the goal of feeling nice. I’m trying to do something like overall utility maximization too.
Imagine for a moment that every article gets festooned with comments from all the LW regulars saying “I approve of this” or “I disapprove of this”. Don’t you think that would get in the way? Do you think it would add much value beyond just upvoting and downvoting?
If you have something interesting to say about why you approve or disapprove, or if your approval or disapproval comes with useful suggestions for improvement, or if you have reason to think that LW’s readership will have a particular interest in your attitude (e.g., because you’re a domain expert in whatever the post is about), then I can see the point. If you just want to say “yay!” or “boo!”, though, that’s exactly what the voting mechanism is for.
I think you’re strawmanning. This isn’t all regulars, just me, and it isn’t all articles, just one. This is making a mountain out of a molehill.
Sure. But “what if everyone did it all the time?” is a useful heuristic sometimes. Things that can’t stand being universalized may be things better not done in the first place. (Not always, of course.)
Continuing to argue about it further certainly would be :-). So I’ll leave it here. (For the avoidance of doubt: I agree it’s a molehill.)
I hope it isn’t too late to make another feature suggestion:
What about enabling users to suggest edits to other users’ posts and comments? It is my favorite feature on Quora, and I wish the same thing was possible everywhere on the internet.
It is awesome for all those times when people make minor spelling or formatting errors and I don’t want to make a big deal about it by writing a comment. Whether the suggestion is accepted or rejected is of course always the original author’s decision.
Re. having the proposed tag “Rational fiction”, but no tag for “fiction”—this seems strange to me. Compare Politics and Art—would it make sense to change those to “rational politics” and “rational art”? No; politics and art are social phenomena we may wish to discuss. So is fiction. Irrational fiction may be as useful to discuss as rational fiction.
( Now I’m imagining a tag system with predicate logic, so you could tag a post with “not(rational(fiction))”. )
Honestly, I didn’t think some of those tags through very well.
www.omnilibrium.com exists for politics, so I have removed the politics tag.
There already exist many places to talk about non-rational fiction, but few places to talk about rationality-targeted fiction.
As for Art: the slack filled a hole in LWers’ desire to talk about anything with their peers. We needed a place to cover art so that it didn’t clog up other places where it was less relevant. Art isn’t very rational, but videos and music preferences get shared there. I don’t know if lesswrong.com needs a place to talk about art; I don’t think having another master tag is a problem. If it is of no use we can remove it again.
There seems to be an implicit assumption there that we shouldn’t discuss anything on LW for which some other venue for reasoned discussion exists. Or perhaps that this is true only for politics, perhaps because politics tends to make people stupid?
I’m not sure I agree either in general or when restricted to politics. I think it would be bad to have much political discussion on LW, but it seems inevitable that there’ll be some, in which case it should probably be tagged appropriately.
I named omnilibrium because it was deliberately created from lesswrong, for lesswrong, for politics.
More specifically: in choosing to change lesswrong, if we make it a venue for everything, we basically declare noise and not signal. Yes, expanding LW is good; but making lesswrong the place for everything is bad.
The question that I am getting at is: what is lesswrong for?
I would have answered something like:
For sharing and cultivating epistemic and instrumental rationality techniques. For cultivating a community on the topic of rationality for the purposes of growing personally as well as producing more rationality materials together for the good of all people.
Any community spends some of its time and energy on “non-central” discussions. I think trying to prevent that would be a bad idea. (I agree that encouraging political discussions on LW would be a bad idea too.)
When I made my comment above I hadn’t actually looked at the relevant bit of your document. Having done so, I think I’m confused. LW already has lots of tags. In particular, it already has a politics tag that’s been used quite a bit. Your document lists a bunch of tags; what significance exactly is this list intended to have?
- “We should make sure LW has these tags.” Doesn’t it already?
- “We should make sure LW has these tags, and no others.” That seems like an obviously bad idea.
- “We should burn the existing LW to the ground and rebuild a new system; that system should have these tags.” Is anyone proposing anything so radical?
- “No, no, this is just a list of tags that should have an associated RSS feed.” Isn’t it much easier just to make that happen for all tags automagically?
- Something else. But what?
Surely, whatever we do to LW’s tagging system, it needs to be possible for posters to choose (and if necessary create) their own tags. Unless we are going to ban all discussions that have any connection with politics, sometimes they will quite rightly want a “politics” tag. Despite the existence of Omnilibrium, and despite the fact that politics is far from being the core business of LW.
(Some of those articles tagged “politics” are in fact really good.)
Lesswrong has plenty of tags; they are not organised well or easy to browse.
By creating a list of common tags or master tags (class/type tags), and showing them on the page where article writing happens, we can help people classify their efforts. It’s great to share a link to something found elsewhere on LW; but someone not interested in seeing link-posts cannot currently filter them out of their view. They can choose not to click on them, but I would much prefer a LW where I can opt out of seeing posts on certain topics (easy example: politics).
Also, if we show tags on the sidebar it will be easier to get a feel for what lesswrong is about (or filter what is available on LW) than, say, reading the first 3 posts in discussion.
In terms of encouraging crossposting, what do you think would be the benefits and drawbacks of something like a RSS aggregator of rationality blogs on the Less Wrong site, especially if the posts on other blogs have a Creative Commons license, or something similar? That could make it relatively easy for someone with a blog elsewhere to share with Less Wrong, instead of manually crossposting all of the time.
planetrationalist.com exists already; the benefit of crossposting is the opportunity to see comments (and add comments of your own) on existing posts. RSS can’t do that.
A mixture of both could be possible. A raw RSS and a sub for talking about crosspost posts.
When it comes to KPIs, one of the things I’d like to track is something like:
- How long has it been since users on the list posted something?
- How long has it been since users on the list visited LW?
- How long has it been since users on the list voted on something?
The list is something like the top 50 users by karma, or a curated list of specific people. (Gary Drescher has an account, but isn’t one of the top 50 karma users.) To reduce the number of numbers, instead of a distribution this probably looks like “number that have within the last week (month?)”.
All three seem easy to do technically. I think there are possibly privacy concerns to be addressed with the latter two. “Time last seen” is a fairly common thing to have public on someone’s profile, and StackExchange makes public vote totals (including number of votes this month).
The thought here is that partial engagement is worth tracking, and it’ll give some insight into what specifically is going on. If people are visiting but not posting, this may actually be the best scenario (if we trust them to know when they should post) and means we don’t need to throw a bunch of links at them, for example.
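A sketch of those three numbers as one query, against a hypothetical flat schema (a users table with karma, last_post, last_visit and last_vote columns); LW’s real key-value store would need something more roundabout.

```python
KPI_QUERY = """
SELECT
  SUM(CASE WHEN last_post  > now() - interval '30 days' THEN 1 ELSE 0 END) AS posted,
  SUM(CASE WHEN last_visit > now() - interval '30 days' THEN 1 ELSE 0 END) AS visited,
  SUM(CASE WHEN last_vote  > now() - interval '30 days' THEN 1 ELSE 0 END) AS voted
FROM (
  SELECT last_post, last_visit, last_vote
  FROM users ORDER BY karma DESC LIMIT 50
) AS top_users
"""
```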
I am hesitant to track some of these publicly because we probably have some retired active users that enjoy being lurkers and not being noticed any more (who like making the occasional vote).
I am all for an internal anonymized tracker, and have added these to the list.
Additional Suggestion 1: Regular reminders of places to send suggestions could be helpful. I occasionally come up with additional ones and usually just post them on whatever recent suggestion-related thread is newest.
Additional Suggestion 2: The search function would be massively improved if it ignored and didn’t search the text in the sidebar. gjm referenced this in his comment in the latest Open Thread, which reminded me of it.
Re: 2. There is now a help ticket on the github.
Re: 1. In the document now. So long as your suggestions get noticed, they can be posted anywhere. But a place that makes suggestions easier to notice would help greatly, both for people suggesting and for people trying to make progress on changes needed to LW.
But there is no link from “About LessWrong” to github.
I am editing the FAQ on the wiki to include a direction to report problems. (It’s faster than fixing the about page)
https://github.com/tricycle/lesswrong/issues is the place; I will get the about page updated ASAP.
Why specifically 1/day? It seems a bit too much. Why not e.g. ~3/week?
I suppose that could work too. I would like to aim higher (towards 1/day) but maybe 3/week is more realistic.
I really wish the LW editor would strip font specifications out of the HTML when I paste into its editor while creating a new post. It’s a pain in the ass to have to go through the raw HTML by hand and strip out all the font specifications. I have never yet wanted to copy the fonts from quotes and links that I’ve copy-pasted.
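For what it’s worth, the stripping itself is simple. A minimal sketch with BeautifulSoup; this is not what the LW editor does, just an illustration of the desired behaviour.

```python
from bs4 import BeautifulSoup

def strip_fonts(html):
    soup = BeautifulSoup(html, "html.parser")
    for font in soup.find_all("font"):
        font.unwrap()             # keep contents, drop the <font> tag
    for tag in soup.find_all(style=True):
        del tag["style"]          # drop inline font/size/color styling
    return str(soup)

print(strip_fonts('<p style="font-family:Arial"><font size="2">hi</font></p>'))
# -> <p>hi</p>
```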
Didn’t there use to be a formatting-stripping button?
Doh! I forgot about that. ’Coz it is indicated by an eraser icon.
When I use it, it looks like it isn’t working—it resizes everything to a tiny font, and converts all quotes to indents. Inspecting the HTML, it looks like it’s done the right thing. I’ll have to test it some to decide whether I trust it.
Fixing that tool to not change what’s displayed so much would be perfect.
I usually “paste as plain text” (depending on the browser; I use Chrome).
HOW DID I NOT KNOW ABOUT THIS
You could copy and paste into a (plain ordinary) text editor first, and then from there into the LW editor. Or would that remove more formatting than you want removed?
I do this.