I definitely agree that the AFF work is essential and does not seem to get as much attention as it warrants, judging by the content of the weekly alignment newsletter. I still think that a somewhat more quantitative approach to philosophy would be a good thing. For example, I wrote a post, “Order from Randomness”, giving a toy model of how a predictable universe might spontaneously arise. I would like to see more foundational ideas from the smart folks at MIRI and elsewhere.
Fyi, if you’re judging based on the list of “what links have been included in the newsletter”, that seems appropriate, but if you’re judging based on the list of “what is summarized in the newsletter”, that’s biased away from AF and AFF because I usually don’t feel comfortable enough with them to summarize them properly.