Second edit: Dagon is very kind and I feel ok; for posterity, my original comment was basically a link to the last paragraph of this comment, which talked about helping depressed EAs as some sort of silly hypothetical cause area.
Edit: since someone wants to emphasize how much they would “enjoy watching [my] evaluation contortions” of EA ideas, I elect to delete what I’ve written here.
I’m not crying.
eep! I deeply apologize that my remarks have caused you pain. I am skeptical of EA, and especially the more … tenuous causal and ethical calculations that are sometimes used to justify non-obvious charities. But I deeply respect and appreciate everyone who is thinking and acting with the intent to make the world better rather than worse, and my disbelief in the granularity of calculation is tiny and unimportant compared to my belief that individuals who want to make a difference can do so.
Also, I cry at the drop of a hat, so if you start I’m definitely joining you out of both shame and sympathy.
Ok, thank you, this helps a lot and I feel better after reading this, and if I do start crying in a minute it’ll be because you’re being very nice and not because I’m sad. So, um, thanks. :)
I’d enjoy watching the evaluation contortions that an EA would have to go through to decide that their best contribution is to help a specific not-very-effective (due to mental health problems or disability) contributor rather than more direct contributions.
Uncertainty is multiplied, not just added, with each step in a causal chain. If you’re trying to do math on consequentialism (let alone utilitarianism, which has further problems with valuation), you’re pretty much doomed for anything more complicated than mosquito nets.
Edit—leaving original for the historical record. OMG this came out so much meaner than I intended. Honestly, even small improvements in depression across many sufferers seem like they could easily multiply out to huge improvements in human welfare—it’s a horrible thing and causes massive amounts of pain. I meant only to question picking individuals based on their EA intentions and helping them specifically, rather than scalable options for all.
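The uncertainty-multiplication point above can be made concrete with a small sketch; the per-step confidences below are purely illustrative assumptions, not estimates of any real intervention.

```python
# A minimal sketch of how uncertainty compounds along a causal chain.
# The per-step confidences are hypothetical numbers chosen for illustration.
step_confidences = [0.9, 0.8, 0.7, 0.6]  # probability each link in the chain holds

overall = 1.0
for p in step_confidences:
    overall *= p  # each extra step multiplies, rather than adds, uncertainty

print(f"{len(step_confidences)} steps -> overall confidence {overall:.2f}")
# Four individually plausible steps already leave only ~0.30 confidence overall.
```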
EDIT: Replied to wrong OP.
I’m pretty unsure about the statistics here. Depression seems to affect about six to ten percent of the population.
So, are there strong arguments that a disproportionately high number of promising EAs have depression / disabilities?
I can steelman a sort of consequentialist argument for redirecting existing efforts to help disabled people towards the most promising, high-value people, but I’m more curious whether anyone has info about mental health in the EA community.
Even if it’s not disproportionately high among EAs, 8% of EAs might be enough. I think it’s plausible that a psychologist who specializes in helping EA people does better at helping them than the average psychologist.
If a psychologist already understands worries about AGI destroying humanity, it’s easier for the patient to talk to them about it.
I’ll try to be gentler about my concern, but I really do want to caution against EA interventions that are targeted at EA members. Helping someone is a pure good, but there’s both a bias problem and an optics problem with helping people because they’re similar to yourself.
(And note: one of the reasons I don’t consider myself part of EA is that I prefer to help people close or similar to myself, out of proportion to the net human impact. I’m not saying “don’t do that”, just “be careful not to claim that EA justifies it”.)
When it comes to publicly recommending causes, it’s worthwhile to focus on projects with good optics like the GiveWell-recommended charities. At the same time, it’s okay if individual people decide that they believe projects with worse optics are high-impact interventions.
To the extent that there are fuzzies involved in helping fellow EA people, it’s worth acknowledging that fact and being conscious that they are part of the reason for your donation, but generating fuzzies isn’t a reason against donating.
Thanks, that said it better than I did.
I don’t mean to discourage helping friends, family, neighbors, or other groups where you’re a member. Or anyone else—all charity is good. I only wanted to point out that EA loses credibility if it suspiciously turns out that the detailed calculations and evaluation of options give clear support to your friends/co-believers.
I guess it needs to be made even more obvious that one can help their friends without having (or pretending to have) an exact calculation proving that this is the optimal thing to do.
Hm, okay, I hadn’t thought about it like this.
I agree that this might be a niche role. But I’m still unsure about the demand. There are about 12,000 people in the FB group. If that’s conservatively about 10% of all EAs, we’re still only looking at about 120,000 people, and then, at the ~8% depression rate above, only about 9,600 potential patients, spread across the entire globe.
Then again, I admit I really don’t know how demand works for psychology (is ~10,000 potential patients enough?), and those are just ballpark figures.
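A rough reconstruction of that back-of-envelope demand estimate, using the commenter’s own ballpark inputs (the 10% Facebook-coverage figure and the ~8% depression rate are assumptions carried over from the thread):

```python
# Back-of-envelope demand estimate; all inputs are ballpark assumptions from the thread.
fb_group_members = 12_000     # people in the EA Facebook group
fb_share_of_all_eas = 0.10    # assume the group captures ~10% of all EAs
depression_rate = 0.08        # ~8% prevalence figure used earlier in the thread

total_eas = fb_group_members / fb_share_of_all_eas     # ~120,000 EAs
potential_patients = total_eas * depression_rate       # ~9,600 potential patients

print(f"Estimated EAs: {total_eas:,.0f}")
print(f"Estimated potential patients: {potential_patients:,.0f}")
```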
A psychologist who does weekly 1-hour sessions with their patients might have 40 patients at one time if they work 40 hours a week and just spend time with patients. I think it’s likely that you’d want the person to do more than just 1-on-1 work and also write a few blog posts about what they learn, so 30 patients at a time might be a decent count.
CBT can be done via Skype, so the fact that patients are spread over the globe isn’t a problem.
According to the Mayo Clinic, CBT takes an average of 10-20 sessions (http://www.mayoclinic.org/tests-procedures/cognitive-behavioral-therapy/details/what-you-can-expect/rec-20188674). That means you might change patients every 3 months.
That means your therapist might treat 120 people in a year. It would be fine to fund a single therapist for this task as an MVP. If there were a single therapist whom the EA community holds in high regard, I would estimate that that person could find those 120 people to treat.
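Sketching the capacity arithmetic with the numbers above (30 concurrent weekly slots, a roughly 3-month course of CBT) recovers the ~120-patients-per-year figure:

```python
# Single-therapist capacity estimate, using the assumptions from the comment above.
concurrent_patients = 30   # weekly 1-hour slots, leaving time for write-ups
months_per_course = 3      # Mayo Clinic's 10-20 weekly sessions is roughly a quarter

cohorts_per_year = 12 // months_per_course                  # ~4 patient cohorts a year
patients_per_year = concurrent_patients * cohorts_per_year  # ~120 people a year

print(f"One therapist could treat roughly {patients_per_year} people a year")
```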
Cool. Thanks for the stats on how psychologists work; all this is new to me. A sort of Schelling therapist who’s able to help people in the EA community does seem like a force multiplier / helpful thing to have, I guess.