I think this is runaway philosophizing where our desire to believe something coherent trumps what types of beliefs we have been selected for, and the types of beliefs that will continue to keep us alive.
Why should there be a normative ethics at all? What part of rationality requires normative ethics?
I, like you and everyone else, have a monkey-sphere. I only care about the monkeys in my tribe that are closest to me, and I might as well admit it because it’s there. So, never mind cows and pigs; if push came to shove, I’ll protect my friends and family in preference to strangers. However, it protects me and my monkey-sphere if we can all agree to keep expropriation and force to a bare minimum and within strictly prescribed guidelines.
So I recognize the rights of entities capable of retaliating and at the same time capable of being bound by an agreement not to. Them and their monkey spheres.
In short, the reason I’d rather have dinner with you than of you is some combination of me liking you and my pre-commitment to peaceful and civilized coexistence. It’s not exactly something I feel like a nice person for admitting, but I don’t see why that should be enough to make it a tough issue.
I think this is runaway philosophizing where our desire to believe something coherent trumps what types of beliefs we have been selected for, and the types of beliefs that will continue to keep us alive.
Why should I believe what humans have been selected for? Why would I want to keep “us” alive?
I think those two questions beg the question at least as much as the reasons for my view do, if not more.
What I know for sure is that I dislike my own suffering, not because I’m sapient and have it happening to me, but because it is suffering. And I want to do something in life that is about more than just me. Ultimately, this might not be a “more true” reason than “what I have been selected for”, but it does appeal to me more than anything else.
Why should there be a normative ethics at all? What part of rationality requires normative ethics?
All rationality requires is a goal. You may not share the same goals I have. I have noticed, however, that some people haven’t thought through all the implications of their stated goals. Especially on LW, people are very quick to declare something to be of terminal value to them, which serves as a self-fulfilling prophecy unfortunately.
I, like you and everyone else, have a monkey-sphere. I only care about the monkeys in my tribe that are closest to me, and I might as well admit it because it’s there.
I discovered that intuitions are easy to change. People definitely have stronger emotional reactions to things happening to those that are close, but do they really, on an abstract level, care less about those that are distant? Do they want to care less about those that are distant, or would they take a pill that turned them into universal altruists?
However, it protects me and my monkey-sphere if we can all agree to keep expropriation and force to a bare minimum and within strictly prescribed guidelines.
And how do you do that?
So I recognize the rights of entities capable of retaliating and at the same time capable of being bound by an agreement not to. Them and their monkey spheres.
If a situation arises where you can benefit your self-interest by defecting, the rational thing to do is to defect. Don’t tell yourself that you’re being a decent person only out of pure self-interest; you’d be deceiving yourself. Yes, if everyone followed some moral code written for societal interaction among moral agents, then everyone would be doing well (but not perfectly well). However, given that you cannot expect others to follow through, your decision not to “break the rules” is an altruistic decision in (at least) all the cases where you are unlikely to get caught.
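The game-theoretic skeleton of this point is the one-shot Prisoner’s Dilemma. Here is a minimal sketch with purely illustrative payoffs (the numbers are my own assumptions, not anything from this discussion): defection maximizes your own payoff whatever the other player does, so keeping the rules when you cannot rely on reciprocity is a sacrifice rather than self-interest.

    # Minimal one-shot Prisoner's Dilemma sketch (illustrative payoffs only).
    # Keys: (my_move, their_move) -> (my_payoff, their_payoff)
    PAYOFFS = {
        ("cooperate", "cooperate"): (3, 3),  # everyone doing well, but not perfectly well
        ("cooperate", "defect"):    (0, 5),  # I keep the rules, the other side doesn't
        ("defect",    "cooperate"): (5, 0),  # I break the rules and get away with it
        ("defect",    "defect"):    (1, 1),  # general expropriation
    }

    def best_selfish_move(their_move):
        """Return the move that maximizes my payoff, holding the other's move fixed."""
        return max(("cooperate", "defect"),
                   key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

    # Defection is better for me no matter what the other player does...
    assert best_selfish_move("cooperate") == "defect"
    assert best_selfish_move("defect") == "defect"
    # ...even though mutual cooperation beats mutual defection for both players.
    assert PAYOFFS[("cooperate", "cooperate")][0] > PAYOFFS[("defect", "defect")][0]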
You may also ask yourself whether you would press a button that inflicts suffering on a child (or a cow) far away, gives you ten dollars, and makes you forget about all that happened. Would you want to self-modify to be the person who easily pushes the button? If not, just how much altruism is it going to be, and why not go for the (non-arbitrary) whole cake?
You may also ask yourself whether you would press a button that inflicts suffering on a child (or a cow) far away, gives you ten dollars, and makes you forget about all that happened. Would you want to self-modify to be the person who easily pushes the button? If not, just how much altruism is it going to be, and why not go for the (non-arbitrary) whole cake?
I don’t know, and I feel it’s important that I admit that. My code of conduct is incomplete. It’s better for it to be clearly incomplete than for it to have an illusion of completeness created by me deciding what a hypothetical me in a hypothetical situation ought to want.
It does seem to me the payoff for pushing the button should be equal to how much it would take to bribe you not to make all your purchasing decisions contingent on a thorough investigation of the human/animal rights practices of every company you buy from and all their upstream suppliers. Those who don’t currently do this (me included) are apparently already being compensated sufficiently, however much that is.
It does seem to me the payoff for pushing the button should be equal to how much it would take to bribe you not to make all your purchasing decisions contingent on a thorough investigation of the human/animal rights practices of every company you buy from and all their upstream suppliers. Those who don’t currently do this (me included) are apparently already being compensated sufficiently, however much that is.
I appreciate the honest reply!

Perhaps you are setting the demands too high. I think the button scenario is relevantly different in the amount of sacrifice/inconvenience it requires. Making all-things-concerned ethical purchases is a lot more difficult than resisting the temptation of ten dollars (although the difference does become smaller the more often you can press the button within a given timescale).
Maybe this is something you view as “cheating” or a rationalization of cognitive dissonance as you explain in the other comment, but I genuinely think that a highly altruistic life may still involve making lots of imperfect choices. The amount of money one donates, for instance, and where to, is probably more important in terms of suffering prevented than the effects of personal consumption.
Being an altruist makes you your own most important resource. Preventing loss of motivation or burnout is then a legitimate concern that warrants keeping a suitable amount of self-interested comfort. And it is also worth noting that people differ individually in how easily altruism comes to them. Some may simply enjoy doing it or may enjoy the signalling aspects, while others might have trouble motivating themselves or even be uncomfortable with talking to others about ethics. One’s social circle is also a huge influence. These are all things to take into account; it would be unreasonable to compare yourself to a utility-maximizing robot.
Obviously this needn’t be an all-or-nothing kind of thing. Pushing the button just once a week is already much better than never pushing it.
The amount of money one donates, for instance, and where to, is probably more important in terms of suffering prevented than the effects of personal consumption.
That’s a testable assertion. How confident are you that you would follow the path of self-consistency if, upon being tested, the assertion turned out to be false? Someone who chooses pragmatism only needs to fight their own ignorance to be self-consistent, while someone who does not has to fight both their own ignorance and, all too often, their own pragmatism in order to be self-consistent.
Yes, it’s testable, and the estimates so far strongly support my claim. (I’m constantly on the lookout for data of this kind to improve my effectiveness.) I wouldn’t have trouble adjusting, because I’m already trying to reduce my unethical consumption through habit formation (which basically comes down to being vegan and avoiding expensive stuff). Even if it’s not very effective compared to other things, as long as it doesn’t come with opportunity costs it is still something positive. I’m just saying that even people who won’t, for whatever reasons, make changes to the kind of stuff they buy could still reduce a lot of suffering by donating to the most effective cause.
I wonder if pragmatists are less likely to reject information they don’t want to hear, since their self-interest is their terminal goal; for example, entertaining the possibility that Malthus can be right in some instances does not imply that they must unilaterally sacrifice themselves.
Perhaps the reason so many transhumanists are peak oil deniers and global warming deniers is that both of these are Malthusian scenarios that would put the immediate needs of those less fortunate in direct and obvious opposition to the costly, delayed-payoff projects we advocate.
Ultimately, this might not be a “more true” reason than “what I have been selected for”, but it does appeal to me more than anything else.
Experience and observation of others have taught me that when people try to derive a normative code of behavior from the top down, they often end up with something that is in subtle ways incompatible with their selfish drives. They will therefore be tempted to cheat on their high-minded morals, and react to this cognitive dissonance either by coming up with reasons why it’s not really cheating or by working ever harder to suppress their temptations.
I’ve been down the egalitarian altruist route; it came crashing down (several times) until I finally learned to admit that I’m a bastard. Now, instead of agonizing over whether my right to FOO outweighs Bob’s right to BAR, I have the simpler problem of optimizing my long-term FOO and trusting Bob to optimize his own BAR.
I still cheat, but I don’t waste time on moral posturing. I try to treat it as a sign that perhaps I still don’t fully understand my own utility function. Imagine how far off the mark I’d be if I were simultaneously trying to optimize Bob’s!
Nonhuman animals are integrated with human “monkey spheres”—e.g. people live with their pets, bond with them and give them names.
A second mistake is that you decry normative ethics, only to implicitly establish a norm in the next paragraph as if it were a fact:
I, like you and everyone else, have a monkey-sphere. I only care about the monkeys in my tribe that are closest to me, and I might as well admit it because it’s there. So, never mind cows and pigs...
Obviously, there are people whose preferences include the welfare of cows and pigs, hence this discussion and the well-funded existence of PETA etc. By prescribing a monkey-sphere that “everyone” has and that doesn’t include nonhuman animals, you are effectively telling us what we should care about, not what we actually care about.
Even if you don’t care about animal welfare, the fact that others do has an influence on your “monkey-sphere”, even if it’s weak.
Btw, aren’t humans apes rather than monkeys?

The term “monkeysphere”, which is a nickname for Dunbar’s Number, originates from this Cracked.com article. The term relates not only to the studies done on monkeys (and apes), but also to the idea that there is a limit on the number of named, cutely dressed monkeys about which a hypothetical person could really care.

Yes, precisely. Thanks for finding the link.

Although I think of mine as a density function rather than a fixed number. Everyone has a little bit of my monkey-sphere associated with them. hug
Nonhuman animals are integrated with human “monkey spheres”—e.g. people live with their pets, bond with them and give them names.
Oh yeah, absolutely. I trust my friend’s judgment about how much members of her monkeysphere are worth to her, and utility to my friend is weighed against utility to others in my monkeysphere in proportion to how close they are to me.
My monkeysphere has long tails extending by default to all members of my species whose interests are not at odds with my own or those closer to me in the monkeysphere. Since I would be willing to use force against a human to defend myself or others at the core of my monkeysphere, it seems that I should be even more willing to use force against such a human and save the lives of several cattle in the process.
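To make the “in proportion to how close they are to me” weighting (and the density-function picture above) concrete, here is a minimal sketch; the exponential fall-off and the numbers are purely illustrative assumptions, not anyone’s stated values.

    import math

    # Toy "monkeysphere as a density function": every being gets some weight,
    # but the weight decays with social distance from me (distance 0 = myself).
    def closeness_weight(social_distance, scale=1.0):
        return math.exp(-social_distance / scale)

    def weighted_welfare(beings):
        """Sum each being's utility, discounted by how socially close they are to me.

        `beings` is a list of (utility_to_them, social_distance) pairs.
        """
        return sum(u * closeness_weight(d) for u, d in beings)

    # Example: a close friend's moderate gain outweighs a stranger's larger one,
    # yet the stranger's weight never drops all the way to zero (the "long tails").
    friend   = (2.0, 0.5)   # utility 2 to someone socially close
    stranger = (3.0, 3.0)   # utility 3 to someone socially distant
    print(weighted_welfare([friend]) > weighted_welfare([stranger]))  # True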
Obviously, there are people whose preferences include the welfare of cows and pigs, hence this discussion and the well-funded existence of PETA etc.
Cults are well-funded too. I don’t dispute that people care about both them and animal rights. What I dispute is whether supporting either of them offers enough benefits to the supporter that I would consider it a rational choice to make.