LessWrong has been around for 10+ years, CFAR’s been at work for around 6, and I think there have been at least a few other groups or individuals working on what I think of as the “Human Rationality Project.”
I’m interested in hearing, especially from people who have invested significant time in attempting to push the rationality project forward, what they consider the major open questions facing the field. (More details in this comment)
“What is the Rationality Project?”
I’d prefer to leave “Rationality Project” somewhat vague, but I’d roughly summarize it as “the study of how to have optimal beliefs and make optimal decisions while running on human wetware.”
If you have your own sense of what this means or should mean, feel free to use that in your answer. But some bits of context for a few possible avenues you could interpret this through:
Early LessWrong focused a lot on cognitive biases and how to account for them, as well as Bayesian epistemology.
CFAR (to my knowledge, roughly) started from a similar vantage point and eventually moved in the direction of “how do you figure out what you actually want, and bring yourself into ‘internal alignment’ when you want multiple things, and/or different parts of you want different things and are working at cross purposes?” It also looked a lot into Double Crux, as a tool to help people disagree more productively.
CFAR and Leverage both ended up exploring introspection as a tool.
Forecasting as a field has matured a bit. We have the Good Judgment project.
Behavioral Economics has begun to develop as a field.
I recently read “How to Measure Anything”, and was somewhat struck at how it tackled prediction, calibration and determining key uncertainties in a fairly rigorous, professionalized fashion. I could imagine an alternate history of LessWrong that had emphasized this more strongly.
With this vague constellation of organizations and research areas, gesturing at an overall field…
...what are the big open questions the field of Human Rationality needs to answer, in order to help people have more accurate beliefs and/or make better decisions?
Including this in the 2019 Review is a bit odd, since most of the content is in the answers rather than the question, but I like how those answers set a research agenda that can be followed up on.
TLDR: I nominate this post for the 2019 review because I want more people to pay attention to the sort of self- and community-wide reflection this post and the comments therein encourage! My reasons are listed below this paragraph, but the basic idea is that a central resource explicitly stating open problems, closed problems, research agendas, skills and techniques for solving specific problems, speculation about how to solve specific problems, and more would be rather helpful for the community: we’d likely become significantly more capable, individually and as a group, of noticing problems and applying skills and techniques to solve them, plus we’d get a lot better at institutional (community) onboarding, knowledge transfer, etc.
I think hosting (on this site) a list of such problems, techniques used to try to solve them, research agendas for solving them, etc. would be enormously helpful. It would give us an explicitly promoted community-wide resource to serve as a foundation for reflection on, understanding of, and improvement upon what our Human Rationality project is all about, and it would also serve as an excellent resource for individual Rationalists to improve our own epistemic AND instrumental rationality skills.
Especially if, similar to how some commenters categorized or sectioned off different concepts or concept-areas, the resource had sections ranging from the meta-meta level, the meta level, reflection, and the abstract, all the way down to concrete instructions: performing this one shiny ritual to improve this one particular skill or aspect of one’s self, techniques for improving instrumental rationality, techniques for building a Rationalist community in one’s location, etc.
Basically, it’s helpful to have a resource that points out what’s wrong and what needs to be fixed, speculates about problems we might not even be cognizant of thus far, specifies what we have solved and how we’ve solved it, lays out what individuals and groups can do to become more epistemically, instrumentally, and prudently Rational (did I miss any types?), and more.
At present it is possible for members of this community to identify open problems, write about them, and try to solve them, but these efforts are usually individual, ad-hoc, and not easily findable by many other members of the community.
And that’s just what’s written... at present everyone has to spend months or years lurking and participating to get a good sense of what the open problems are, or even to learn about extant techniques for Rationalist self-improvement. I believe that the lack of a central, explicit resource about these problems and the other things mentioned above is a net drag on the community’s improvement and momentum. Having such a resource would reduce that drag, increase the community’s momentum, and be very helpful for newcomers and long-established members alike!
Thus, such a resource would raise the sanity waterline one Rationalist at a time (since it’d be easier for each person to improve individually thanks to such a community-wide resource) while also helping us improve more quickly and deeply as a community overall.
On a side note, there used to be a Meta section in the left sidebar menu on LessWrong.com, but that disappeared at some point. I see that you can now filter posts tagged “meta” (which is cool!), but even though that tag’s hovertext says it is intended only for things that are meta for the LessWrong site itself, a number of people use it as a general meta tag, so it is not as useful as it could be and its utility is diluted.
Cheers, Willa