I haven’t read the other comments here and I know this post is >10yrs old, but…
For me, (what I’ll now call) effective-altruism-like values are mostly second-order, in the sense that my revealed behavior shows that, a lot of the time, I don’t want to help strangers, animals, future people, etc. But I think I “want to want to” help strangers, and sometimes the more goal-directed, rational side of my brain wins out and I do something to help strangers at personal sacrifice to myself (though I do this less than e.g. Will MacAskill). But I don’t detect in myself a symmetrical second-order desire to NOT want to help strangers. So that’s one thing that “Shut up and multiply” has over “shut up and divide,” at least for me.
That said, I realize now that I’m often guilty of ignoring this second-orderness when e.g. making the case for effective altruism. I will often appeal to my interlocutor’s occasional desire to help strangers and suggest they generalize it, but I don’t symmetrically appeal to their clearer and more common disinterest in helping strangers and suggest they generalize THAT. To be more honest and accurate while still making the case for EA, I should be appealing to their second-order desires, though of course that’s a more complicated conversation.
What do you think your second-order “want to want to help” desire is based on, or where did it come from? For example, one possibility is that someone previously appealed to your occasional (first-order) desire to help strangers and suggested you generalize it, which left you with a cached thought that that’s what you “should” do. I mean, this seems to be exactly what Peter Singer’s Drowning Child argument tries to do, and a lot of people cite it as their introduction/conversion to EA. (And you also say that you personally did it to others.)
Or suppose you only have your second-order desire because it’s useful for gaining/maintaining your social status. I imagine it might be hard to work with or socialize with other EAs if you told them that you didn’t even “want to want to help” :)
For me personally, I feel like I already “help” a decent amount (motivated by my first-order desires), given my moral credences/uncertainties. My second-order desires include wanting to do both more and less, depending on whether I feel like I’ve done too much or too little “altruism” recently or overall, but they don’t kick in much and I mostly just go with doing whatever I want (e.g., find interesting) at the moment.