I’d enjoy watching the evaluative contortions an EA would have to go through to decide that their best contribution is helping one specific contributor who isn’t very effective (due to mental health problems or disability), rather than making more direct contributions.
Uncertainty is multiplied, not just added, with each step in a causal chain. If you’re trying to do math on consequentialism (let alone utilitarianism, which has further problems with valuation), you’re pretty much doomed for anything more complicated than mosquito nets.
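To make the compounding concrete, here is a minimal simulation sketch (my illustration, not part of the original comment): suppose each link in an n-step causal chain is estimated only to within a multiplicative factor of k, modeled as an independent log-uniform error per step. Both the error model and the factor-of-2 figure below are assumptions chosen purely for illustration.

```python
import math
import random

def chain_spread(n_steps, k, trials=20_000):
    """90% interval of the compounded error after n_steps multiplicative steps.

    Assumption (for illustration only): each step's estimate is off by an
    independent random factor between 1/k and k, distributed log-uniformly.
    """
    errs = []
    for _ in range(trials):
        # Sum of log-errors == log of the product of per-step error factors.
        log_err = sum(random.uniform(-math.log(k), math.log(k))
                      for _ in range(n_steps))
        errs.append(math.exp(log_err))
    errs.sort()
    return errs[int(0.05 * trials)], errs[int(0.95 * trials)]

for n in (1, 3, 5, 10):
    lo, hi = chain_spread(n, k=2)
    print(f"{n:2d} steps: 90% of compounded estimates fall between x{lo:.2f} and x{hi:.2f}")
```

Under these assumptions, the worst case after ten steps is a factor of 2^10 = 1024, and even the typical 90% spread widens geometrically with chain length, which is the sense in which uncertainty multiplies rather than adds.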
Edit—leaving original for the historical record. OMG, this came out so much meaner than I intended. Honestly, even small improvements in depression across many sufferers seem like they could easily multiply out to huge improvements in human welfare—it’s a horrible condition that causes massive amounts of pain. I meant only to question picking out individuals based on their EA intentions and helping them specifically, rather than pursuing scalable options for everyone.
I’m pretty unsure about the statistics here. Depression prevalence seems to be roughly six to ten percent of the population.
So, are there strong arguments that a disproportionately high share of promising EAs have depression or disabilities?
I can steelman a sort of consequentialist argument for redirecting existing efforts to help disabled people toward the most promising, high-value individuals, but I’m more curious whether anyone has data on mental health in the EA community.