I’ve been thinking a lot about the responses I’ve received over the past few days, and have somewhat changed the opinions written here, though not entirely. It really deserves a second essay, but it seems to me that EA (as normally practiced in this community) has a number of potentially dangerous blind spots, most notably in areas where it is hard to determine in advance how effective a given cause will be, or more generally in areas whose value is hard to compute using any currently known formal utilitarian system. In my opinion, the EA community currently puts too much weight on our ability to formally calculate the value of a given good, and there needs to be greater willingness to fund a more diverse range of actions. I know I’m not explaining my case very well here, but I would like to come back to this at some point and expand on it.
Thanks for your insightful feedback!