Admittedly, AI safety wasn’t the sole focus of LW. Back in the old days of LW 1.0, AI safety wasn’t the dominant conversational topic the way it is on modern LW.
Before modern deep learning started to conquer real problems, the discussion was a mix of the following:
Intelligence Explosion and the Hanson-Yudkowsky Foom debate.
Whole Brain Emulation.
Cryonics.
Transhumanism.
Friendly AI.
Harry Potter and the Methods of Rationality.
And more.
Speaking of Whole Brain Emulation, Kurzgesagt made a video about it two years ago, but unfortunately it looks like brain emulations probably won’t be developed before AGI.
I’m pretty new, but I thought LW was basically established with that focus in place, wasn’t it?
It was probably the primary motivation for Eliezer writing the sequences and building the site, yeah.