Historically, allowing your ethical system to arbitrarily promote the interests of those similar to you has led to very bad results.
No, historically speaking it is how the human species survived and is still a core principle around which humans organize.
Arbitrarily promoting the interests of those more similar to yourself is the basis of families, nations, religions and even ideologies. While all of these go wrong occasionally (religion and ideology especially are unlikely to have many friends here), I think that overall they are a net gain. People help those more similar to themselves practically all the time; it is just that when this leads to hurting others it is far more attention-grabbing than, say, the feeling of solidarity that lets the Swedes run their cosy welfare state, or the solidarity of a tribe somewhere in New Guinea sharing their food so other tribe members don’t starve.
I am pretty sure that if you removed the urge to help those more similar to oneself from humans right now by pressing a reality modification button(TM), the actual number of people helping each other and being helped would be reduced drastically.
Arbitrarily promoting the interests of those more similar to yourself is the basis of families, nations, religions and even ideologies. While all of these go wrong occasionally (religion and ideology especially are unlikely to have many friends here), I think that overall they are a net gain. People help those more similar to themselves practically all the time
Yes, this makes a lot of sense due to coordination costs; we are less informed about folks dissimilar to ourselves, so helping them effectively is harder. Reflexive consistency may also play a role: folks similar to myself are probably facing similar strategic situations and expressing similar algorithms, so a TDT decision rule will lead me to promote their interests more. It is easier to avoid hurting others when such situations arise than to try to help everyone equally.
I think you need to be careful about describing human behaviour, and defining what you mean by “ethical”.
Obviously the most self-similar person to help is actually yourself (100% gene match), then your identical twin, parent/children/sibling/etc. It’s no surprise that this is the hierarchy of our preferences. The evolutionary reason for that is plain.
But evolution doesn’t make it “right”. For whatever reason, we also have this sense of generalised morality. Undoubtedly, the evolutionary history of this sense is closely linked to those urges to protect our kin. And equally, when we talk about what’s “moral”, in this general sense, we’re not actually describing anything in objective reality.
If you’re comfortable ignoring this idea of morality as evolutionary baggage that’s been hijacked by our society’s non-resemblance to our ancestral environment, then okay. If you can make that peace with yourself stick, then go ahead. Personally though, I find the concept much harder to get rid of. In other words, I need to believe I’m making the world a better place—using a standard of “better” that doesn’t depend on my own individual perspective.
Obviously the most self-similar person to help is actually yourself (100% gene match), then your identical twin, parent/children/sibling/etc. It’s no surprise that this is the hierarchy of our preferences. The evolutionary reason for that is plain.
I’m (mostly) not my genes. Remember, I don’t see a big difference between flesh-me and brain-emulation-me. These two entities share no DNA molecules. But sure, I probably do, somewhat imperfectly and by proxy, weight genes in themselves; kin selection and all that has probably made sure of it.
But evolution doesn’t make it “right”.
No, it being in my brain makes it right. Whether it got there by the process of evolution or from the semen of an angry philandering storm god doesn’t really make a difference to me.
If you’re comfortable ignoring this idea of morality as evolutionary baggage that’s been hijacked by our society’s non-resemblance to our ancestral environment, then okay. If you can make that peace with yourself stick, then go ahead. Personally though, I find the concept much harder to get rid of. In other words, I need to believe I’m making the world a better place—using a standard of “better” that doesn’t depend on my own individual perspective
I don’t think you are getting that a universe that doesn’t contain me or my nephews or my brain ems might still be sufficiently better according to my preferences that I’d pick it over us. There is nothing beyond your own preferences; “ethics” of any kind are preferences too.
I think you yearn for absolute morality. That’s ok, we all do to varying extents. Like I said somewhere else on LessWrong, my current preferences are set up in such a way that if I received mathematical proof that the universe actually does have an “objectively right” ethical system that is centred on making giant cheesecakes, I think I’d probably dedicate a day or so during the weekends to that purpose. Ditto for paper-clips.
Maybe I’d even organize with like-minded people for a two-hour communal baking or materials-gathering event. But I’d probably spend the rest of the week much like I do now.
I’m (mostly) not my genes. Remember, I don’t see a big difference between flesh-me and brain-emulation-me. These two entities share no DNA molecules. But sure, I probably do, somewhat imperfectly and by proxy, weight genes in themselves; kin selection and all that has probably made sure of it.
Right, sorry. The genes tack was in error, I should’ve read more closely.
I think I’ve understood the problem a bit better, and I’m trying to explain where I think we differ in reply to the “taboo” comment.
There is nothing beyond your own preferences; “ethics” of any kind are preferences too.
I don’t think I believe this, although I suspect the source of our disagreement is over terminology rather than facts.
I tend to think of ethics as a complex set of facts about the well-being of yourself and others. So something is ethical if it makes people happy, helps them achieve their aspirations, treats them fairly, etc. So if, when ranking your preferences, you find that the universe you have the greatest preference for isn’t one in which people’s well-being, along certain measures, is as high as possible, that doesn’t mean that improving people’s well-being along those measures isn’t ethical. It just means that you don’t prefer to act 100% ethically.
To make an analogy, the fact that an object is the color green is a fact about what wavelengths of light it reflects and absorbs. You may prefer that certain green-colored objects not be colored green. Your preference does not change the objective and absolute fact that these objects are green. It just means you don’t prefer them to be green.
“I think you yearn for absolute morality. That’s ok, we all do to varying extents”
I think syllogism’s preference is for unbiased morality.
“Yearns” is in quotes because he decided on his ethics before deciding he cared. His reasoning probably has nothing to do with yearning or anything similar, as you seem to be implying.
Also, “That’s ok, we all do to varying extents”: I don’t think it is. I think it’s silly, and there are almost certainly people who don’t (and they count). “Absolute morality” in the sense of “objectively (universally) right” shouldn’t even parse.
To make an analogy, the fact that an object is the color green is a fact about what wavelengths of light it reflects and absorbs. You may prefer that certain green-colored objects not be colored green. Your preference does not change the objective and absolute fact that these objects are green. It just means you don’t prefer them to be green.
...and also about how certain cells in your eye function. Which doesn’t change your analogy at all, but it’s sometimes a useful thing to remember.
I think syllogism’s preference is for unbiased morality.
An unbiased morality may be one centred on cheesecake.