This. A lot of the blame goes to MIRI viewing AI alignment as a discrete problem rather than a continuous one, combined with the view that only heroic or pivotal acts can save the world. That framing is inherently all-or-nothing, and it generates all-or-nothing views.
I really wish David Chapman and his ideas were a more active part of this discussion.
Can you give some context?
David Chapman writes about ways of thinking, drawing on both Buddhism and LW-style rationality. I've read his website-book "Meaningness" and I'm starting on his newer website-book "In the Cells of the Eggplant". His Twitter links to this page, which seems like the right place to start reading his work. He would describe EY's way of thinking as "rationalist eternalism" and "fixated".
(He should not be confused with the guy who shot John Lennon.)