LW2.0: Technology Platform for Intellectual Progress
This post presents one lens I (Ruby) use to think about LessWrong 2.0 and what we’re trying to accomplish. While it does not capture everything important, it does capture much and explains a little of how our disparate-seeming projects can combine into a single coherent vision.
I describe a complementary lens in LessWrong 2.0: Community, Culture, & Intellectual Progress.
(While the stated purpose of LessWrong is to be a place to learn and apply rationality, as with any minimally specified goal, we could pursue it in multiple ways. In practice, other members of the LessWrong team and I care about intellectual progress, truth, existential risk, and the far future, and these broader goals drive our visions and choices for LessWrong.)
Our greatest challenges are intellectual problems
It is an understatement to say that we can aspire to the world being better than it is today. There are myriad forms of unnecessary suffering in the world, there are the utopias we could inhabit, and, directly, there is an all too real chance that our civilization might wipe itself out in the next few years.
Whether we do it in a state of terror, excitement, or both—there is work to be done. We can improve the odds of good outcomes. Yet the challenge which faces us isn’t rolling up our sleeves and “putting in hard work”—we’re motivated—it’s that we need to figure out what exactly it is we need to do. Our problems are not of doing, but of knowing.
Which interventions for global poverty are most cost-effective? What is the likelihood of a deadly pandemic or nuclear war? How does one influence government? What policies should we want governments to adopt? Is it better for me to earn-to-give or do direct work? How do we have a healthy, functioning community? How do we cooperate from groups small to large? How does one build a safe AGI? Will AGI takeoffs be fast or slow? How do we think and reason better? Which questions are the most important to answer? And so on, and so on, and so on.
One of our greatest challenges is answering the questions before us. One of our greatest needs is to make more intellectual progress: to understand the world, to understand ourselves, to know how to think, and to figure out what is true.
Technologies for intellectual progress
While humans have been improving our understanding of the world for hundreds of thousands of years, our rate of progress has increased each time we invented new technologies which facilitate even more intellectual progress.
Such technologies for intellectual progress include: speech, writing, libraries, encyclopedias, microscopes, lectures, conferences, schools, universities, the scientific method, statistics, peer review, Double Crux, the invention of logic, the identification of logical fallacies, whiteboards and blackboards, flow charts, research funding structures, spreadsheets, typewriters, the Internet, search engines, blogging, Wikipedia, Stack Exchange and Quora, collaborative editing such as Google Docs, the field of heuristics and biases, epistemology, rationality, and so on.
I am using the term technology broadly to include all things which did not exist naturally which we humans designed and implemented to serve a function, including ideas, techniques, and social structures. What unifies the above list is that each item helps us to organize or share our knowledge. By building on the ideas of others and thinking collectively, we accomplish far more than we ever could alone. Each of the above, perhaps among other things, has been a technology which increased humanity’s rate of intellectual progress.
LessWrong as a technology platform for intellectual progress
I see absolutely no reason to think that the above list of technologies for intellectual progress is anywhere near complete. There may even be relatively low-hanging fruit lying around that hasn’t been picked up since the invention of the Internet a mere thirty years ago. For example, the academic journal system, while now online, is mostly a digitized form of the pre-Internet system, not taking advantage of the new properties of the Internet such as effectively free and instantaneous distribution of material.
My understanding of the vision for LessWrong 2.0 is that we are a team who builds new technologies for intellectual progress and that LessWrong 2.0 is the technology platform upon which we can build these technologies.
Which technologies might LessWrong 2.0 build?
Having stated that the vision for LessWrong 2.0 is to be a technology platform, it’s worth listing examples of things we might build (or already are building).
Open Questions Research Platform
In December 2018, we launched a beta of our Open Questions platform. Click to see current questions.
Make the goal of answering important questions explicit on LessWrong.
Provide affordances for asking questions, for knowing which questions others have, and for providing answers.
Provide infrastructure to coordinate on what the most important problems are.
Provide incentives to spend days, weeks, or months researching answers to hard questions.
Lower the barriers to contributing to the community’s research output, e.g. you don’t have to be hired by an organization to contribute.
Build a communal repository of knowledge upon which everyone can build.
Apply our community’s interests, techniques, culture, and truth-seeking commitment to uniquely high-quality research on super tricky problems.
In the opening of this document, I asserted that humanity’s greatest challenges are intellectual problems, that is, knowledge we need to build and questions we need to answer. It makes sense to make explicit that we want to ask, prioritize, and answer important questions on the site, and further that we are aiming to build a community of people who work to answer these important questions.
The core functionality of LessWrong to date has been people making posts and commenting on them. Authors write posts at the intersection of their knowledge and the community’s overall interests, perhaps with something of a theme at times. We haven’t had an obvious affordance for specifically requesting that someone else generate or share content about a question you have. We haven’t had a way for people to easily see which questions others have and which they could help with. And we overall haven’t had a way for the community to coordinate on which questions are most important.
As part of the platform, we can build new incentive systems to make it worth people’s time to spend days or weeks researching answers to hard questions.
The platform could provide an accessible way for new people to start contributing to the community’s research agenda. Getting hired at a research org is very difficult; the platform could provide a pathway for far more people to contribute, especially if we combine it with a research talent training pipeline.
By the platform being online and public, it would begin to build a repository of shared knowledge upon which others can continue to build. Humanity’s knowledge comes from our sharing knowledge and building upon each other’s work. The more we can do that, the better.
Last, Open Questions is a way to apply our community’s specialized interests (AI, AI safety, rationality, self-improvement, existential risk, etc.), our culture, techniques, and tools (Bayesian epistemology, quantitative mindset, statistical literacy, etc.), and our truth-seeking commitment to uniquely high-quality research on super tricky problems.
See this comment thread for a detailed list of reasons why it’s worth creating a new questions platform when others already exist.
Marketplace for Intellectual Labor
Standard benefits of a market: it matches up people who want to hire work done with people who want to perform work, thereby causing more valuable work to happen.
Possible advantages over Open Questions:
A marketplace is a standard thing people are used to, so it may be easier to drive adoption than for a very novel questions platform.
Potentially reduces uncertainty around payments/incentives.
Can help with trust.
Can provide more privacy (possibly a good thing, possibly not).
Less of a two-sided marketplace challenge.
Diversifies the range of work which can be traded, e.g. hiring, proofreading, lit-reviews, writing code.
A related idea to Open Questions (especially if people are paid for answers) is that of a general marketplace where people can sell and buy intellectual labor, including tasks like tutoring, proofreading essays, literature reviews, writing code, or full-blown research.
It might look like TaskRabbit or Craigslist, except specialized for intellectual labor. The idea is that this would cause more valuable work to happen than otherwise would, and progress on important things to be made.
A more detailed description of the Marketplace idea can be found in my document, Review of Q&A.
Talent Pipeline for Research
A requirement for intellectual research is people capable of doing it.
An adequate supply of skilled researchers is especially required for an Open Questions research platform.
LessWrong could potentially build expertise in doing good research and training others to do it.
We could integrate our trainees into the Open Questions platform.
A requirement for intellectual progress is that there are people capable of doing it, so generally we want more people capable of doing good research.
It may especially be a requirement for the Open Questions platform to succeed. One of my primary uncertainties about whether Open Questions can work is whether we will have enough people willing and able to conduct research to answer questions. This leads to the idea that LessWrong might want to set up a training pipeline that helps people who want to become good researchers train up. We could build up expertise in both good research process and in teaching that process to people. We could then integrate our trainees into the LessWrong Open Questions platform.
A research training pipeline might be a non-standard definition of “technology”, but I think it still counts, and entirely fits within the overall frame of LessWrong. In LW2.0: Culture, Community, and Intellectual Progress, I list training as one of LessWrong’s core activities.
Collaborative Documents a la Google Docs
Communications technologies are powerful and important, causing more and different work to be accomplished than otherwise would be.
Google Docs represents a powerful new technology, and we could improve on it even further with our own version of collaborative documents.
I expect this to result in significant gains to research productivity.
There have been successive generations of technology which make the generation and communication of ideas easier. One lineage might be: writing, the typewriter, Microsoft Word, email, Google Docs. Each has made communication easier and more efficient. Speed, legibility, ease of distribution, and ease of editing have made each successive technology more powerful.
I would argue that sometimes the efficiency gains with these technologies are so significant that they enable qualitatively new ways to communicate.
With Google Docs, multiple collaborators can access the same document (synchronously or asynchronously), the document is kept in sync, and collaborators can make or suggest edits and comment directly on specific text. Consider how this was not really possible at all with Microsoft Word plus email attachments. You might at most send a document to one person for feedback; if you’d made edits in the meantime, you’d have to merge them with their revisions. If you sent the document to two people via email attachment, they wouldn’t see each other’s feedback. And so on.
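To make the merging pain concrete, here is a toy sketch of a three-way merge over a made-up section-keyed document model (the names `merge3`, `base`, `alice`, and `bob` are all illustrative; real sync services keep edits reconciled continuously rather than doing anything this simple after the fact):

```python
def merge3(base, ours, theirs):
    """Toy three-way merge over a {section: text} document model.
    For each section, keep whichever side changed it relative to the
    shared base; if both sides changed it differently, record a
    conflict instead of silently picking a winner."""
    merged, conflicts = {}, []
    for key in set(base) | set(ours) | set(theirs):
        b, o, t = base.get(key), ours.get(key), theirs.get(key)
        if o == b:               # we didn't touch it: take their version
            merged[key] = t
        elif t == b or t == o:   # they didn't touch it (or we agree): take ours
            merged[key] = o
        else:
            conflicts.append(key)
    # drop sections that the winning side deleted
    merged = {k: v for k, v in merged.items() if v is not None}
    return merged, conflicts

base  = {"intro": "draft intro", "methods": "draft methods"}
alice = {"intro": "revised intro", "methods": "draft methods"}  # edits intro
bob   = dict(base, discussion="new section")                    # adds a section

merged, conflicts = merge3(base, alice, bob)
# both non-conflicting edits survive in `merged`; with email attachments,
# someone would have had to reconcile the two files by hand
```

This is only the easy case: when both people edit the same section, the function punts to a conflict list, which is exactly the situation attachment-passing workflows handle worst.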
Google Docs, though we might take it for granted by now, was a significant improvement in how we can collaborate. It is generally useful, but especially useful to those doing generative intellectual work together, who can share, collaborate, and get feedback in a way not possible with previous technologies.
Yet as good as Google Docs is, it could be better. The small things add up. You can comment on any text, but it is difficult to have a substantial discussion in a comment chain due to length restrictions and how the chains are displayed. There isn’t a built-in comment section for the whole document, as opposed to specific text. Support for footnotes is limited. There isn’t support for LaTeX. This is only a starting list of optimizations which could make Google Docs an even better tool for research and general intellectual progress.
There could be further benefits from having this tool integrated with LessWrong: easy and immediate publishing of documents as posts, and access to a community of collaborators and feedback-givers. By encouraging people to do their work on LessWrong through the tool, we could become the archive and repository for the community’s intellectual output.
Prediction Markets
Making predictions is a core rationality skill.
Prediction markets aggregate individual opinions to get an even better overall prediction.
LessWrong could build or partner with an existing prediction market project.
First, making good predictions is a core rationality skill, and one of the best ways to ensure you make good predictions is to have something riding on them, e.g. a bet. Second, aggregating the (financially-backed) predictions of multiple people is often an excellent way to generate overall predictions that are better than those of individuals.
Given the above, we could imagine that LessWrong, as a technology platform for intellectual progress, should be integrated with a prediction market and an associated community of forecasters. There have been many past attempts at prediction markets, some existing ones, and a few more nascent ones. I don’t know if LessWrong should set up its own new version, or perhaps seek to partner with an existing project.
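To illustrate the aggregation point, here is a minimal sketch of one simple pooling rule: a weighted average of forecasts in log-odds space (equivalently, a weighted geometric mean of odds). The function name and the idea of weighting by stake are illustrative assumptions; a real market aggregates through prices rather than an explicit formula like this.

```python
import math

def pool_forecasts(probs, weights=None):
    """Combine individual probability forecasts by averaging their
    log-odds, optionally weighted (e.g. by stake or track record).
    Returns the pooled probability."""
    if weights is None:
        weights = [1.0] * len(probs)
    total = sum(weights)
    log_odds = sum(
        w * math.log(p / (1 - p)) for p, w in zip(probs, weights)
    ) / total
    return 1 / (1 + math.exp(-log_odds))

# three forecasters at 60%, 70%, 90%; the third has twice the stake,
# so the pooled estimate is pulled toward their view
pooled = pool_forecasts([0.6, 0.7, 0.9], weights=[1, 1, 2])
```

Even this toy version shows the mechanism the text gestures at: people with more riding on the outcome move the aggregate more, and the pool can be better calibrated than any single forecaster.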
I haven’t thought through this idea much, but it’s an idea the team has had.
I think you guys are doing a great job rapidly hypothesizing and testing ways to enable the LW community to create more value. I’m a fan of the questions feature and predict its usage will grow steadily. I’m interested to see how the other ideas play out.
It’s interesting to compare this with the Polymath projects. I wasn’t a part of them and I don’t know what technology they used, but it might be interesting to look into their tech and organization.
One might ask how this might be integrated with asking questions.
Consider the benefits of building a reverse dictionary*, and the difficulties**. While complex, it seems a simpler task than “answer questions” and might be a tractable sub-problem.
*Sometimes people in different fields work on similar problems, but are unaware of each other. The question “Can we use ideas from ecosystem management to cultivate a healthy rationality memespace?” seems related.
**How do we make something that takes a description and finds 1) the idea (if it exists) and its names, or 2) related ideas?
I looked into the polymath project a bit. I might end up writing a post about my thoughts at some point, but it’s been a significant inspiration for LW 2.0.