The Replication Crisis definitely hit hard. A lot of the psychology findings the community used to lean on turned out not to replicate.
People’s AI timelines have changed quite a bit. People used to plan for 50-60 years out; now it’s much more like 20-30 years.
Bayesianism is much less treated as the foundation for everything. I think this shift is still propagating, but Embedded Agency had a big effect here, at least on me and a bunch of other people I know.
There were a lot of shifts along the spectrum from “just do explicit reasoning for everything” to “figuring out how to interface with your System 1 sure seems really important”. I think Eliezer was mostly ahead of the curve here, and early on in LessWrong’s lifetime we kind of fell prey to following our own stereotypes.
A lot of EA-related stuff. Like, there is now a lot of good analysis and thinking about how to maximize impact, and if you read old EA-adjacent discussions, they sure strike me as getting a ton of stuff wrong.
Spaced repetition. People used to be like “yeah, spaced repetition is just really great and you should use it for everything”, and these days the consensus is more like “use spaced repetition in a bunch of narrow contexts, but overall memorizing stuff isn’t that great”. I think the pendulum swung somewhat too far, and that rationalists are currently underusing spaced repetition, but overall there was a large shift here.
Nootropics. I feel like in the past many more people were like “you should take this whole stack of drugs to make you smarter”. I see that advice a lot less, and would advise many fewer people to follow it, though I’m not actually sure how much I reflectively endorse that shift.
A bunch of AI Alignment stuff in the space of “don’t try to solve the AI Alignment problem directly; instead try to build stuff that doesn’t really want to achieve goals in a coherent sense and use that to stabilize the situation”. I think this was kind of similar to the System 1 stuff, where Eliezer seemed ahead of the curve but the community consensus lagged behind.