There’s some debate about which things are “improvements” as opposed to changes. It also varies how much each of these happened directly on LessWrong. But the things that seem like improvements to me, which I now think of as important parts of the LessWrong idea ecosystem, include:
Updates on decision theory
These seem like the clearest example of intellectual progress that happened “mostly on LessWrong”, as opposed to mostly happening in private and then periodically being written up on LessWrong afterwards.
The Sequences were written pre-replication crisis
At least some elements were just wrong as a result. (More recent editing passes on Rationality: A-Z have removed those, from what I recall. For example, it no longer leans on the Robbers Cave Experiment.)
The AI landscape has evolved
During the Sequences days, a lot of the talk about how AI was likely to develop was still in the “speculation” phase. By now we’ve seen a lot of concrete advances in the state of the art, which makes for more concrete (and different) discussions of how things are likely to play out and what good strategies for addressing it might be.
Shift from specific biases towards general mental integration/flexible skillsets
In the Sequences days, a lot of discussion focused on “how do we account for particular biases?” There has been some shift away from that overall mindset, because dealing with individual biases mostly isn’t that useful. (There are particular biases, like confirmation bias and scope insensitivity, that still seem important to address directly, but it used to be more common to, say, read through Wikipedia’s list of cognitive biases and try to address each one.)
Instead there’s more of a focus on integrating your internal mental architecture so that you can notice biases, motivated thinking, etc., and address them flexibly. In particular, being able to dialogue with yourself about why you don’t seem to be acting on information even after it’s been pointed out to you.
Understanding of Trigger-Action-Plans and Noticing
Developing the general skill of noticing a situation, and then taking some kind of action on it, turns out to be a core rationality skill. (There are versions of this that focus on the action you take in particular situations, and versions that focus more on the “noticing” part, where you practice noticing-in-the-moment particular mental states, conversational patterns, or situations that warrant some kind of ‘thinking on purpose’.)
Focusing (and other introspective techniques)
There’s a class of information you might be operating on subconsciously that’s hard to think about concretely; these techniques help bring it into explicit awareness.
Doublecrux
Originated at CFAR, but by now there are extensive writeups by other people that have built on it.
Internal Doublecrux
So it seems that there was progress in applied rationality and in AI. But that’s far from everything LW has talked about. What about more theoretical topics, general problems in philosophy, morality, etc.? Do you feel that discussing some topics resulted in no progress and was a waste of time?
There’s some debate about which things are “improvements” as opposed to changes.
Important question. Does the debate actually exist, or is this a figure of speech?
On the AI safety side, I feel like there’s been an enormous amount of progress. Most notable for me was Stuart Armstrong’s post: Humans can be assigned any values whatsoever. There has been significant work on utility functions, but it’s not so much incremental progress as the correction of a mistake.
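For readers who haven’t seen that post, here is a rough sketch of the underdetermination result it points at, in my own paraphrase and notation rather than the post’s: model the human’s observed policy $\pi$ as produced by a planner $P$ applied to a reward (value) function $R$,

$$\pi = P(R).$$

Behavior alone cannot pin down the pair $(P, R)$: for any alternative reward $R'$ you can choose a planner $P'$ with $P'(R') = \pi$, so, for example, a fully rational planner paired with $R$ and a fully anti-rational planner paired with $-R$ are observationally indistinguishable. Any attempt to infer values from behavior therefore has to bring in extra assumptions about how the human plans.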