A philosophy professor at Oxford University, Nick Bostrom, founded the Future of Humanity Institute to study what we can do now to ensure a long, flourishing future. (Curiosity: what else did they do?) One of the effects this group had was to spin off a group-written blog, OvercomingBias.com, started by economist Robin Hanson, dedicated to the general theme of how to move our beliefs closer to reality.
One writer, Eliezer Yudkowsky, worked on Artificial Intelligence and wanted to warn other people about dangers he’d realized could come from it. When he tried to talk about those dangers, he found that people not only didn’t know about them, they lacked the background ideas needed to follow an explanation of them. Before he could explain the ideas he thought were most important, he had to explain many smaller ideas that built up to them.
Yudkowsky’s writing covered a wide variety of topics, yet tied them all together into what felt like a single deep philosophy, in the spirit of becoming “less wrong” in one’s understanding of reality. As his writing gained popularity, he moved it to a new community blog, LessWrong.com, which anybody could post to.
Almost everyone who participated in this community in the early days had read Eliezer’s posts. Whether or not a person agreed with his ideas, his posts on various topics were iconic and precise; they became a common basis for starting important conversations. Other people’s writing filled gaps in this common canon and built on it. People liked having this shared basis enough to try to share it with people they knew in everyday life, or to preferentially talk with people who had already read it.
[Zet: THEN HPMOR, THEN CFAR. (HPMOR began in 2010; CFAR was founded in 2012.)]
some source here and here, I am so bad at not plagiarizing.