In writing my post, I had a number of different examples in the back of my mind.
Even if I don’t think that MIRI’s current Friendly AI research is of high value, I believe that there are instances in which people have undervalued Eliezer’s holistic output for the reason that I describe in my post.
There’s a broader context that my post falls into: note that I’ve made 11 substantive posts over the past 2.5 weeks, on subjects ranging from GiveWell’s work on climate change and meta-research, to effective philanthropy in general, to epistemology.
You may be right that I should be spacing my posts out differently over time.