Edit: Significantly rewritten. Original question was more specifically oriented around money-as-a-motivator.
One of the questions (ha) that we are asking ourselves on the LW team is “can the questions feature be bootstrapped into a scalable way of making intellectual progress on things that matter?”
Motivations
Intrinsic vs Extrinsic
I’d cluster most knobs-to-turn here into “intrinsic motivation” and “extrinsic motivation.”
Intrinsic motivation covers things like “the question is interesting, and specified in a way that is achievable, and fun to answer.”
Extrinsic motivation can include things like “karma rewards, financial rewards, and other things that explicitly yield higher status for the answerer.”
(Things like “I feel a vague warm glow because I answered the question of someone I respect and they liked the answer” can blur the line between intrinsic and extrinsic motivation)
Improving Intrinsic Motivation
Right now I think there’s room to improve the flow of answering questions:
New features, such as the ability to spawn related questions that break a confusing question down into easier ones.
Better practices/culture, such as a clearer idea of how to specify questions so that they communicate what one needs to do to solve them (or a set of common practices among answerers that makes this easier to figure out).
A combination (wherein best practices are communicated via tooltips or some such).
Bounties and Reliability
A lot of questions are just hard to answer – realistically, you need a lot of time, at least some of that time won’t be intrinsically fun, and the warm glow of success won’t add up to justify “a few days to a few months’ worth of work.”
So we’re thinking of adding some more official support for bounties. There have been some pretty successful bounty-driven posts on LW (such as the AI Alignment Prize, the Weird Aliens Question, and Understanding Information Cascades), which have motivated more attention on questions.
Costly signaling of value
They showcase that the author of the question cares about the answer. Even if the money is relatively minor, it reaffirms that if you work on the question, someone will actually derive value from it, which can be an important part of intrinsic motivation (as well as a somewhat legible-but-artificial status game you can more easily play, which I’d classify as extrinsic).
Serious time requires livable money
In some cases you just need to put serious time into a question to succeed, which means you either need to have already arranged your life such that you can spend serious time answering questions on LW, or you need “answering hard questions on LW” to actually provide you with enough financial support to do so.
This requires not just “enough” money, but enough reliability of money that “quit your day job” (or get a day job that pays less but gives more flexibility) is an actual option.
What would it take?
So, with all that in mind...
What would it take for you (you, personally), to start treating “answer serious LW questions” as a thing you do semi-regularly, and/or put serious time into?
My assumptions (possibly incorrect) here are that you need a few things (in some combination):
A clear enough framework for answering questions, that relies on skills you already have (and/or a clear path towards gaining them)
A sense that the questions matter
Enough money (and reasonable expectation of earning it) for a given question that working on it is worth the hours spent directly on it, if you’re doing work that’s demanding enough that it doesn’t funge against other hobby activities.
Enough reliability of such questions showing up in your browser that you can build a habit of answering them, such that you reallocate some chunk of your schedule (that formerly went either to another paying job, or perhaps to some intellectual hobby that trades off easily against question-answering).
Some types of intellectual labor I’m imagining here (which may or may not all fit neatly into the “questions” framework):
Take a scientific paper that’s written in confusing academic-speak PDF format, and translate it into a “plain English blogpost.”
Bonus points/money if you can do extra interpretive work to highlight important facts in a way that lets me use my own judgment to interpret them.
Do a literature review on a topic
If you already know a given field, provide a handy link to the paper that actually answers a given question.
Figure out the answer to something that involves research
(can include contributing to small steps like ‘identify a list of articles to read’ or ‘summarize one of those articles’ or ‘help figure out what related sub-questions are involved’)
Conduct a survey or psych experiment (possibly on mechanical turk)
“Serious” questions could range from “take an afternoon of your time” to “take weeks or months of research”, and I’m curious what the actual going rates for those two ends of the spectrum are, for LW readers who are a plausible fit for this type of distributed work.
I feel like I should provide some data as someone who participated in a number of past bounties.
For one small bounty <$100, it was a chance to show off my research (i.e., Googling and paper skimming) skills, plus it was a chance to learn something that I was somewhat interested in but didn’t know a lot about.
For one of the AI alignment related bounties (Paul’s “Prize for probable problems” for IDA), it was a combination of the bounty giver signaling interest, plus it serving as coordination for a number of people to all talk about IDA at around the same time and me wanting to join that discussion while it was a hot topic.
For another of the AI alignment related bounties (Paul’s “AI Alignment Prize”), it was a chance to draw attention to some ideas that I already had or was going to write about anyway.
For both of the AI alignment related bounties, when a friend or acquaintance asks me about my “work”, I can now talk about these prizes that I recently won, which sounds a lot cooler than “oh, I participate on this online discussion forum”. :)
Thanks, it was useful to hear about that variety of cases.
I don’t think it’s possible on LW. It’s not a matter of money (ok, it is, in that I don’t think anyone’s likely to offer a compelling bounty that I expect to be able to win). It’s not a matter of reliability of available offers (except that I don’t expect ANY).
It _is_ a question of reliability and trust, though. There are no organizations or people I trust enough to define a task well and make sure multiple people aren’t competing in some non-transparent way, such that I actually expect to get paid for the work posted on a discussion site. And I don’t expect that I have enough track record for any bidder to prefer me for the kind of tasks you’re talking about at the rates I expect. [edit to add] Nor do I have any tasks where I’d prefer a bounty or open-bid rather than finding a partner/employee and agreeing on specific terms.
It’s also a question of what LW is for—posting and discussion of thought-provoking, well-researched, interestingly-modeled, and/or fun ideas is something that’s very hard to measure in order to reward monetarily. Also, I’ll be massively demotivated by thinking of this as a commercial site, even if I’m only in the free area.
My recommendation would be to use a different place to manage the tasks and the bid/ask process, and the acceptance of work and payment. Some tasks and their outputs might be appropriate to link here, but not the job management.
tl;dr: don’t mix money into LW. Social and intellectual rewards are working pretty well, and putting commerce into it could well kill it.
There’s some important points here that I’m going to address by rewriting the OP significantly.
(I’ve rewritten the post a bunch, which doesn’t directly answer your question but at least frames the question a bit better, which seemed higher priority)
Thanks. Still triggers my “money would be a de-motivator for what I like about LW” instinct, but I’m glad you’re acknowledging that it’s only one aspect of the question you’re asking.
The relevant questions are “how do you know what things need additional motivation” and “why do you think LW is best suited for it”? For the kind of things you’re talking about (summarizing research, things that take “a few days to a few weeks” of “not intrinsically-fun”), I think that matching is more important than motivation. Finding someone with the right skillset and mindset to be ABLE to do the work at an acceptable cost is a bigger filter than motivating someone who just doesn’t know it’s needed. And I don’t think LW is the only place you’d want to advertise such work anyway.
Fortunately, it’s easy to test. Don’t add any site features, just post a job that you think is typical of what you’re thinking. See how people (both applicants and observers) react.
Note that I really _DO_ like your thinking about breaking down into manageable sub-questions and managing inquiries that are bigger than a single post. I’d love to explore that completely separately from motivation and taskrabbit-like knowledge work.
Part of the impetus for our current thought process is that there does seem to be a limit on the complexity of stuff that typically gets answered without bounties attached (but we now have a track record of occasional bounty posts successfully motivating such work).
I could imagine it turning out that the correct balance involves “not building any additional site features, just allow it to be something that happens organically sometimes”, so that it can happen but there’s friction that prevents runaway Moloch processes.
I currently think there’s room to slightly increase the option of monetary incentives without destroying everything but it’s definitely something I’d want to think carefully about.
My answer (not necessarily endorsed by rest of team) to your question is something like “right now, it seems like for the most part, LessWrong motivates stuff that is either Insight Porn, or ‘Insight That Is At Least Reasonably Competitive with Insight Porn.’”
And we actually have the collective orientation and many of the skills needed to work on real, important problems collaboratively. Many of those problems won’t have the intrinsic feedback loops that make it natural to solve them – they’re neither as fun to work on nor to read.
We see hints of people doing this anyway, but despite the fact that I think, say, a Scott Alexander More Than You Wanted To Know post is 10x as valuable as the average high-karma LW post, it isn’t 10x as rewarded. (And I’d be much less willing to read it if Scott weren’t as funny, and it’s sad if people have to gain the skill ‘be funny’ in order to work on stuff like that.)
Meanwhile, there’s a bunch of ways Academia seems to systematically suck. I asked a friend who’s a bio grad student if Academia could use better communication infrastructure. And they said (paraphrased) “hah. Academia isn’t about communication and working together to solve problems. Academics wouldn’t want to share their early work, they’d be afraid of getting scooped.”
I’m not sure if their experience is representative, but it seemed at least pretty common.
Meanwhile, LessWrong has an actual existing culture that is pretty well suited to this. I think a project that attempted to move this elsewhere would not be nearly as successful. Even a “serious intellectual progress” solution network is still a social network, and still requires the chicken/egg problem of getting people to collectively believe in it.
I’m much more excited about such a project bootstrapping off LW than trying to start from scratch.
Can you elaborate on this? I haven’t seen any bounty-driven work adjacent to LW, and I’d like to look at a few successes to help me understand whether adding some of those mechanisms to LW is useful, compared to adding some LW interactions (ads or links) to those places where bounties are already successful.
I totally get that, but those aren’t the only two options, and that excitement doesn’t make it the right choice.
Examples of bounties were included in the rewrite (they have already become moderately common on LW, and most of the time seem to produce more/better discussion). See the middle section for a few links.
I meant ‘excited’ in the sense that I expect it to work and generate a lot of value.
I suspect there’s a higher level difference in our thinking, something like:
It seems like your position is “LessWrong seems basically fine, don’t fix what’s not broke.”
Whereas my position is something like “LessWrong seems… well, fine, but there’s a huge gap between where we are now and where I think we could be if we put in a lot more experimentation and optimization effort, and I think it would be fairly sad if LessWrong stayed where it is now.”
I also think there’s a huge gap between where we are now, and where we were when Eliezer was writing. And where we are now feels very precarious. It depends on a number of people all having interesting things to write about that they are writing about publicly.
In “The Dark Times” (2015-2017), the interesting people gradually migrated elsewhere, and then LessWrong withered. We’ve built some new mechanical things that have improved the baseline of how LessWrong works, but we haven’t changed the overall landscape in a way that makes me confident that the site wouldn’t wither again.
I think even maintaining the baseline of “keep the site pretty okay” requires a continuous injection of effort and experimentation. And I’d like to get to a place where the site is generating at least as much value as it was in 2011 (and ideally more), without relying on a single author to drum up lots of interest.
Getting there will necessarily involve a lot of experimentation, so my default response to any given experiment is “that sounds interesting, let’s think about it and figure out how to make it the best version of itself and take the site in interesting new directions” rather than “hmm, but will this hurt the status quo?”.
That’s not how I’d summarize it. Much credit to you and the team and all the other participants for how well it’s doing, but I remember the various ups and downs, and the near-death in the “dark times”. I also hope it can be even better, and I don’t want to prevent all changes so it stagnates and dies again.
I do fear that a complete pivot (such that monetary prizes are large and common enough that money is a prime motivator) will break it. The previous prizes all seemed small enough that they were basically a bit above the social status of a giant upvote, and I didn’t see any day-long efforts from any of the responders. That’s very different from what you seem to be considering.
So I support cautious experimentation, and gradual changes. Major experiments (like prizes big enough to motivate day- or week-long efforts) should probably be labeled as experiments and run with current site features, rather than things we invest in heavily. I’m actually more gung-ho than “so let’s think about it and figure out how to make it the best version” in many cases – I’d rather go with “let’s try it out cheaply and then think about what worked and what didn’t”. Pick something you’d like to fund (or find someone who has such a topic and the money to back it up), run it in Google Docs, with a link and summary here.
This applies to the more interesting (to me; I recognize that I’m not the only constituent) ideas as well. Finding ways to break problems down into manageable questions, and to link/synthesize the results, seems to have HUGE potential, and it can be tested pretty cheaply. Have someone start a “question sequence” – no tech change, just titled as such. The asker seeks input on how to split the problem, as well as on sub-problems.
Really, I don’t mean to say “this is horrible, please don’t do anything in this cluster of ideas!” I do mean to say “I’m glad you’re thinking about the issues, but I see a _LOT_ of risk in introducing monetary incentive where social incentives are far more common. Please tread very carefully.”
(Not sure how serious I am about the following—it may just be an appeal to meta) You could use this topic as an experiment. Ruby’s posting some documents about Q&A thinking—put together an intro post, and label them all (including this post) “LW Q&A sequence”. Ask people how best to gather data and perform experiments along the way.
If answering the question takes weeks or months of work, won’t the question have fallen off the frontpage by the time the research is done?
What motivates me is making an impact and getting quality feedback on my thinking. These both scale with the number of readers. If no one will read my answer, I’m not feeling very motivated.
I’m currently exploring a possible feature wherein question-authors, and moderators, can flag answers as “Top Answers”, which triggers the question moving to the top of the home page and adds the most recent “top answer” author as a co-author of the post.
Not 100% sure on the implementation details. Does that sound like that would help with this problem?
Well, the question asker will always see it (they’ll receive a notification). The act of answering it will also:
a) put it in the Recent Discussion section
b) make it appear on the slightly revamped Questions page, where both “Top Questions” and “Recent Activity” are sorted by which questions were most recently commented on (“Top Questions” are “questions with 40 or more karma, sorted by recently commented/answered”)
We’ll be putting some work into figuring out how to make questions take up “the correct amount of attention” (i.e. enough that old, important questions aren’t lost track of, but without cluttering the frontpage). If an answer is good, we will likely also curate the question along with the answer.
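For concreteness, here’s a minimal sketch of the selection rule described above (Python; the class and function names are hypothetical and not from the actual LW codebase):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Question:
    title: str
    karma: int
    last_activity: datetime  # time of the most recent comment or answer

def top_questions(questions, karma_threshold=40):
    """'Top Questions': 40+ karma, sorted by most recent comment/answer."""
    eligible = [q for q in questions if q.karma >= karma_threshold]
    return sorted(eligible, key=lambda q: q.last_activity, reverse=True)
```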
This could motivate me to spend minutes or hours answering a question, but I think it would be insufficient to motivate me to spend weeks or months. Maybe if there was an option to also submit my question answer as a regular post.
I do think that when you’re tackling something that’ll take weeks or months, it’s quite likely you’ll end up with multiple posts worth of content. In that case I think the “Answer” part would look more like linking to a separate post (or sequence) and summarizing it, than writing the whole thing in the answer section.
(I’ve also been thinking about having high-quality answers getting displayed as part of the question’s post item, so rather than the “primary author” being the person who asked the question, the top answer author is given prominent billing)
For the examples you give, the improvements you cite to intrinsic motivation + karma would be sufficient to motivate me for questions of the “take an afternoon of your time” type, which is approximately where my blogposts have been landing anyway. Further, several are already of the summarize papers/point to a list of sources type. On the long end of weeks or months, bounties in the hundreds would probably satisfy depending on the level of interest I have, which is the true variable in whether I engage.
It is hard to tell in the current format what kind of depth-of-answer the questioner is looking for, and what time frame would be appropriate for an answer. It is also hard to tell how well answered a question already is, which has a big impact on reading older questions or questions with many answers. Mostly I have been viewing questions at the same rate as blog posts, but it occurs to me that they don’t age in the same way informative or exploratory posts do; the question is unresolved until it is.
Having some way to disentangle the content of this site from when it was posted would be handy.
It’s still worth mentioning, though: what sort of guidelines should be included for using the site and making it a better place?
For a few months I donated a bit more money to LW instead of to EA, as a motivation to donate for reasons that don’t sound very interesting to me, but for reasons that are hard to evaluate and simple enough that I’d like to not see its impact on the world. But now, while I might donate that money to EA, the potential consequence is still quite a big win for my happiness and my career.
It would help if the poster directly approaches or tags me as a relevant expert.
Given that there is some probability of winning a question, let’s just guess it’s 20% on any particular question I might try to answer. This suggests to me a bounty of 5x whatever I would be willing to answer the question for in order to make me willing to do it. Assuming a question takes about a day of work (8 hours) to answer fully and successfully, and given our 5x multiplier, I’d be willing to try to answer a question I wasn’t already excited to answer for other reasons if it paid about $1800.
Many others may have lower opportunity costs, though (and I undercounted a bit because I assume any question I would answer would deliver me at least some sense of value beyond the money; otherwise my number would probably jump up closer to $2500).
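To make the arithmetic above explicit, here’s a minimal sketch (Python). The 20% win probability, 8-hour day, and the $1800/$2500 figures are from the comment; the hourly values are simply backed out from those numbers:

```python
def required_bounty(hourly_value, hours, p_win):
    """Bounty such that the expected payout (bounty * p_win) matches the work's value.

    A 20% win probability is where the 5x multiplier comes from: 1 / 0.2 = 5.
    """
    return (hourly_value * hours) / p_win

# $45/hour * 8 hours = $360 of value; at a 20% win chance the bounty must be 5x that.
print(required_bounty(hourly_value=45, hours=8, p_win=0.20))    # -> 1800.0
# With no non-monetary value from the question itself, the implied rate is higher.
print(required_bounty(hourly_value=62.5, hours=8, p_win=0.20))  # -> 2500.0
```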
Yeah, there are two issues this points at that we’ve been thinking about:
1. “bounties” come with an issue where you’re not sure you’ll succeed, so if you’re actually relying on it for “real money” (instead of using the money as an indicator that someone cared which might motivate you enough to do it for fun), you need much more money for it to work
2. I actually expect a “well functioning” Q&A system works by having lots of people tackle small parts of a problem, in ways that are harder to assign credit for. (Or, at least, credit is distributed among many people)
Two approaches we’ve thought about include:
be more like “craigslist for intellectual progress”, where one section of LW is more like a contract-job-finding board. (This runs into the usual issues of “job finding is hard both for employers and employees”, but would mean that you don’t need the 5x multiplier.)
instead of “there’s one bounty that goes to the best thing”, a common pattern ends up being “question-asker puts forth the total amount they’re willing to spend on a thing”, with a vague goal of “distribute that money fairly towards people who contributed” (a rough sketch of such a split is below).
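A minimal sketch of that second pattern (Python; the contribution weights are hypothetical, standing in for however the question-asker judges relative credit):

```python
def split_pot(pot, contributions):
    """Split a fixed bounty pot proportionally to contribution weights."""
    total = sum(contributions.values())
    return {person: pot * weight / total
            for person, weight in contributions.items()}

# e.g. a $300 question bounty, with credit weights assigned by the asker:
print(split_pot(300, {"lit_review": 3, "decomposition": 2, "summary": 1}))
# -> {'lit_review': 150.0, 'decomposition': 100.0, 'summary': 50.0}
```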
Relatedly, we’ve considered something like a “tip jar” feature, where you can put a link to your paypal/patreon/whatever that shows up as a (not-too-obtrusive, but available) button when you mouse over someone’s username, or something. So that it’s easier to see “oh, this person did something that’s worth about $10 to me, I’mma give them $10.” And this might lend itself towards rewarding the person who took the initial step of “refactor your confusing question into 3 separate less confusing ones.”
If people provided this as a service, they might be risk-averse (it might make sense for people to be risk-averse with their runway), which means you’d have to pay more than (hourly rate) / (chance of winning).
This might not be a problem, as long as the market does the cool thing markets do: allowing you to find someone with a lower opportunity cost than you for doing something.
In order to be motivated, I would like to have a good idea of the impact the work would be making. I would like to see a clear explanation of the process taken to come up with the question, and a list of who on LW supports this question as an effective target of attention at this point in time, and why. Maybe this could be documented in the question post, and maybe there could be rounds for potential questions to go through, where community members vote on, discuss, and rate them. Maybe there could also be a backlog of questions that have not been chosen yet, with reasons why, to help new questions arise. I would also like to know which other LW users are working on a question (to avoid duplication of effort) and whether there are good opportunities for delegating work among multiple community members.
I like the idea of sub-questions. It might be interesting to have a display in the form of a graph with vertices as question/answers and directed edges as indicating a sub-super relationship between questions/answers. I think this would help us get a big picture view of the progress made and how it was achieved.
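A minimal sketch of such a structure (Python; the question titles and method names are purely illustrative):

```python
from collections import defaultdict

class QuestionGraph:
    """Directed graph: an edge parent -> child marks child as a sub-question."""

    def __init__(self):
        self.sub_questions = defaultdict(list)

    def add_sub_question(self, parent, child):
        self.sub_questions[parent].append(child)

    def descendants(self, question):
        """All sub-questions reachable from `question`, depth-first."""
        found = []
        for child in self.sub_questions[question]:
            found.append(child)
            found.extend(self.descendants(child))
        return found

g = QuestionGraph()
g.add_sub_question("How do information cascades work?", "What does the literature say?")
g.add_sub_question("How do information cascades work?", "Can we model a simple case?")
g.add_sub_question("What does the literature say?", "Summarize the key papers")
print(g.descendants("How do information cascades work?"))
```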
Since there is only so much that can be done by one community, I think it could in some cases be useful to have questions that are intended to be handed off to external parties like academic groups or certain organizations or renowned individuals after we do enough investigatory work.
If this blog’s “hard questions” have utility, they should be novel, important, and answerable.
Important questions are highly likely to be known already among experts in the relevant field. If they’re answerable, one of those experts is likely already working on it with more rigor than you’re capable of extracting from a crowd of anonymous bloggers. I think, then, that any questions you ask have a high probability of being redundant, unimportant, or unanswerable (at least to a useful degree of rigor). Unfortunately, you’re unlikely to know that in advance unless you vet the questions with experts in the relevant literature.
And at that point, you’re starting to look like an unaccountable, opaque, disorganized, and underresourced anonymously peer-reviewed journal.
It might be interesting to explore the possibility that a wiki-written or amateur-sourced peer-reviewed journal could have some utility, especially if it focused on a topic that is not so dependent on the expensive and often opaque process of gathering empirical data. I expect that anyone who can advance the field of mathematics is probably already a PhD mathematician. So philosophy, decision theory, something like that?
Developing a process to help an anonymous crowd of blog enthusiasts turn their labor into a respectable product would be useful and motivating. I would start by making your next “hard question” what specific topic such a peer-reviewed journal could usefully focus on.
Your premises seem strange to me – questions are either important and already worked on, or not important? Already-worked-on-questions don’t need answers? Both of these seem false.
If an expert somewhere knows the answer to something, I still often need to know the answer myself (because it’s a piece of a broader puzzle that I care about, which the expert doesn’t necessarily care about). I still need someone to go find the answer, distill it, and to help put it into a new context.
The LW community historically has tackled questions that were important and that few other people were working on (in particular, questions related to human rationality, AI alignment, and effective altruism).