What qualifies one as an effective altruist for the purposes of this survey? Is it “self-identifies as an effective altruist”? Or something else?
Also:
were altruistic before becoming EAs
This phrase strongly suggests that the EA community needs to more clearly describe what it is they mean when they use the terms “altruism” and “effective altruism” (as I’ve commented before).
What qualifies one as an effective altruist for the purposes of this survey? Is it “self-identifies as an effective altruist”?
Yes, the second question is:
Could you, however loosely, be described as ‘an EA’? Answer no if you are not familiar with the term ‘EA’, which stands for ‘Effective Altruist’. This question is not asking if you are altruistic and value effectiveness, but rather whether you loosely identify with the existing ‘EA’ identity.
This phrase strongly suggests that the EA community needs to more clearly describe what it is they mean when they use the terms “altruism” and “effective altruism” (as I’ve commented before).
What would you suggest? I take ‘altruistic’ to generally mean ‘acts partly for the good of others, and is willing to make sacrifices for this end’. There’s then a decent behavioural test for whether people were altruistic beforehand. There’s no clear definition of being EA, besides accepting some sufficient number of EA ideas.
This question is not asking if you are altruistic and value effectiveness, but rather whether you loosely identify with the existing ‘EA’ identity.
I judge this to be a problematic criterion. See this comment, esp. starting with “To put this another way …”, for why I think so.
What would you suggest? I take ‘altruistic’ to generally mean ‘acts partly for the good of others, and is willing to make sacrifices for this end’.
That does seem like a reasonable definition, but in that form it seems rather too vague to be useful for the purposes of constructing a behavioral test. We’d have to at least begin to sketch out what sorts of acts we mean (literally any act that benefits anyone else in any way?), and what sorts of sacrifices, and how willing, etc.
There’s no clear definition of being EA, besides accepting some sufficient number of EA ideas.
Quite so. My contention is that there’s a distinct separation between, on the one hand, the general idea that we should be altruistic (in whatever sense we decide is meaningful and useful) and that we should seek to optimize the effectiveness of our altruism, and on the other hand, the loose community of people who share certain values, certain approaches to ethics, etc. (as I outline in the above-linked comment), which are not necessarily causally or conceptually entangled with the former (more general) idea.
This is problematic for various reasons, I think. I won’t clutter this thread by starting a debate on those reasons (unless asked), but I think it’s at least important (and relevant to endeavors like this survey) to recognize this distinction.
I judge this to be a problematic criterion. See this comment, esp. starting with “To put this another way …”, for why I think so.
That comment makes a lot of sense. It depends what we use the criterion for. In the survey, it’s to gather information, and it’s for precisely this reason that I chose not to ask if people were ‘EAs’ in your loose sense—almost everyone would say yes. I’m curious as to what uses you think the criterion’s problematic for.
My contention is that there’s a distinct separation between, on the one hand, the general idea that we should be altruistic (in whatever sense we decide is meaningful and useful) and that we should seek to optimize the effectiveness of our altruism, and on the other hand, the loose community of people who share certain values, certain approaches to ethics, etc. (as I outline in the above-linked comment), which are not necessarily causally or conceptually entangled with the former (more general) idea.
It’s a matter of a degree, but in the EA context (which sets a high bar), I personally call people ‘altruistic’ if (but not only if) they’ve donated >=10% of a real income for over a year or they’ve consistently spent over an hour a week doing something they’d otherwise rather not do to help others.
My contention is that there’s a distinct separation between, on the one hand, the general idea that we should be altruistic (in whatever sense we decide is meaningful and useful) and that we should seek to optimize the effectiveness of our altruism, and on the other hand, the loose community of people who share certain values, certain approaches to ethics, etc. (as I outline in the above-linked comment), which are not necessarily causally or conceptually entangled with the former (more general) idea.
That’s right, if by ‘conceptually entangled’ you mean ‘necessarily connected’, or even ‘commonly accepted by both groups of people’. For example, I believe utilitarianism’s widely accepted by EAs (though the survey may show otherwise!), but not entangled with merely valuing altruism and the effectiveness of altruism.
This is problematic for various reasons, I think. I won’t clutter this thread by starting a debate on those reasons (unless asked), but I think it’s at least important (and relevant to endeavors like this survey) to recognize this distinction.
I see no harm in thread-cluttering, at least here—go for it.
This is problematic for various reasons, I think. I won’t clutter this thread by starting a debate on those reasons (unless asked), but I think it’s at least important (and relevant to endeavors like this survey) to recognize this distinction.
I see no harm in thread-cluttering, at least here—go for it.
Well, one issue is recruiting/evangelism/outreach/PR/etc. If you want to convince people[1] to both be altruistic and to attempt to optimize their altruism (i.e., the general form of the “effective altruism” concept), it does not do to conflate that general form with your specific form (which involves the specific, idiosyncratic ideas I listed in that comment I linked — a particular form of utilitarianism, a particular set of values including e.g. the welfare of animals, etc.).
Take me, for instance. I find the general concept to be almost obvious. (I’m an altruistic person by temperament, though I remain agnostic on whether certain forms of direct action are in fact the best way to bring about the sort of world toward which such action is ostensibly aimed, as compared with e.g. a more libertarian approach. As for the “effective” part — well, duh.) However, if you were to say: “Hey, Said Achmiz, want to join this-and-such EA group / organization / etc.? Or donate to it? Or otherwise contribute to its success?” I would demur, because in my experience, groups and organizations that self-identify as EA tend to have the aforementioned specific form of EA as their aim — and I have significant disagreements with many components of that specific form.
If you (this hypothetical organization) do not make it clear that you have, as your goal, the general form of effective altruism, and that the specific form is merely one way in which your members express it, then I won’t join/contribute/etc.
If you in fact have only the specific, and not the general, form as your goal, then not only will I not join, but I will be quite cross about the fact that you would thereby be appropriating the term “effective altruism” (which would otherwise describe a perfectly reasonable concept with which I agree and a general ethical and practical stance which I support), and using it to describe something which I do not support and about which I have strong reservations, and leaving me (and others like me) without what would otherwise be the best term for a position I do support.
I have another concern, which I will discuss in a sibling comment.
Edit: Whoops, forgot to resolve the footnote:
[1] When I say “convince people”, I mean both convincing non-altruists to become altruistic, and convincing ineffective altruists (“I’m a high-powered lawyer who spends every weeknight volunteering at my local soup kitchen, while giving no money to charity!”) to be more effective in their altruism. I realize these two aims may require different approaches; I think those differences are tangential to my points here.
Here is the promised other issue I see with the conflation of the general[1] and specific[2] forms of effective altruism.
You do not actually ever argue for the ideas making up that specific form.
It seems to go like this:
“We all think being altruistic is good, right? Of course we do. And we think it’s important to be effective in our altruism, don’t we? Of course. Good! Now, onwards to the fight for animal rights, the saving of children in Africa, the application of utilitarian principles to our charity work, and all the rest.”
Now, as I say in my other comments, one issue is that potential newcomers to the movement might assent to those first two questions, but at the “Now, onwards …” part say — “whoa, whoa, where did that suddenly come from?”. But the other issue is that it seems like you yourselves haven’t given much thought to those positions. How do you know they’re right, those philosophical and moral ideas? A lot of EA writing seems not to even consider the question! It’s not like these are obvious principles you’re assuming — many intelligent people, on LessWrong and elsewhere, do not agree with them!
Of course I don’t actually think you’ve simply accepted these ideas out of some sort of blind go-alonging with some liberal crowd. This is LessWrong; I think better of you folks than that. (Although some EA-ers without an LW-or-similar background may well have given the matter just as little thought as that.) Presumably, you were, at some point, convinced of these ideas, in some way, by some arguments or evidence or considerations.
But I have no idea what those considerations are. I have no idea what convinced you; I don’t know why you believe what you believe, because you hardly even acknowledge that you believe these things. In most EA writings I’ve seen, they are breezily assumed. That is not good for the epistemic health of the movement, I think.
I think it would be good to have some effort to clearly delineate the ideas that are held by, and commonly taken as background assumptions by, the majority of people in the EA movement; to acknowledge that these are nontrivial philosophical and moral positions, which are not shared by all people or even all who identify as rationalists; to explain how it was that you[3] became convinced of these ideas; and to lay out some arguments for said ideas, for potential disagreers to debate, if desired.
[1] “Being altruistic is good, and we should be effective in our altruistic actions.”
[2] The specific cluster of ideas held by a specific community of people who describe themselves as the EA community.
[3] By “you” I don’t necessarily mean you, personally, but: as many prominent figures in the EA movement as possible, and more generally, anyone who undertakes to write things intended to build the EA movement, recruit, etc.
Now, onwards to the fight for animal rights, the saving of children in Africa, the application of utilitarian principles to our charity work, and all the rest.
Global-poverty EAs don’t generally state or imply utilitarianism or similar views, though x-riskers do (at least those who value non-existent people). I personally favour global poverty charities, and am quite tentative in my attitudes to many mainstream ethical theories, and don’t think being more so would affect my donations (though being less so might).
But the other issue is that it seems like you yourselves haven’t given much thought to those positions. How do you know they’re right, those philosophical and moral ideas?
The degree of thought varies a lot, sure. I agree that people should spend more time on these positions when they’re action-relevant, as they are for people who’d act to prevent x-risk if they accepted them.
In most EA writings I’ve seen, they are breezily assumed.
Breezy assumption isn’t optimal, but detailed writing about ethical theory isn’t either.
It’s a matter of a degree, but in the EA context (which sets a high bar), I personally call people ‘altruistic’ if (but not only if) they’ve donated >=10% of a real income for over a year or they’ve consistently spent over an hour a week doing something they’d otherwise rather not do to help others.
I apply a similarly high bar for altruism—many EAs don’t count as altruistic based on this.