The problem with your moral theory, as I see it, is that it also fails to meet (c), because there could be many plausible, but horrific in my view, arguments you could make [...]
I was expecting this response either from you or someone else, but didn’t want to make my previous comment too long (a habit of mine) by preempting it. It’s a totally valid next question, and I’ve considered it before.
Criterion (c) is that the principles of my moral system, when taken to their logical extent, must not lead to a society that I, the proponent of the system, would consider dystopian. The crux of my counter-argument is that most of what you’d consider horrific, I would also probably consider horrific, as would most people—and humans don’t do well in societies that horrify them. Taking any path that leads to a “dystopia” is inconsistent with the goal.
(I’m trying to prevent this comment from turning into a prohibitively massive essay so I’ll try to restrain myself and keep this broad—please feel free to request further detail about anything I say.)
Eugenics, first of all, doesn’t work. (I take you to mean “negative eugenics”—killing or sterilizing those you consider undesirable, rather than “positive eugenics”—tinkering with DNA to produce kids with traits you find desirable, which hasn’t really been tried and only very recently became a real possibility to consider.) We suck at guessing whether a given individual’s progeny will be “good humans” or not. Too many factors, too many ways a human can be valuable, and even then all you have is a baby with a good genetic start in life—there’s still all the “nurture” to come. It’s like herding cats with a blindfold on. I could go on for pages about all the ways negative eugenics doesn’t work—but say we were capable of making useful judgments about which humans would produce “bad” offspring. You’d then have to make the case that the principle “negative eugenics is fine to do” furthers the goal (helping humanity to survive) to such an extent that it outweighs the necessary hits taken by other goal-furthering principles like “don’t murder people”, “don’t maim people”, “don’t give too much power to too few people” and, on an even more basic level, “don’t suppress empathy”.
Do you and I consider negative eugenics “horrific” because we think we (or at least our genitals) would be on the chopping block? Probably not, though we might fear it a bit. It horrifies us because we feel empathy for those who would suffer it. Empathy is hard-wired in most people. Measure your brain activity while you watch me getting hit with a hammer and your pain centers will show activity. You can feel for me (though measurably less if we’re not the same race—these brains evolved in little tribes and are playing catch-up with the very recent states of national and global inter-dependence). Giving weight—a lot of weight—to principles protective or supportive of empathy is consistent with the goal because empathy helps us survive as a species. Numb or suppress it too much and we’re screwed. Run counter to it too much without successfully suppressing it and you’ve got a society full of horrified, outraged people. Not great for social co-operation.
Which brings me to your other example, assigning jobs based on ability without regard to choice. Again, it won’t work. It gives you a society full of miserable, resentful people who either withhold from their forced jobs the full passion and creativity of which they are capable, or actively direct their energies towards getting reassigned to the job they actually want. I’d go further into this, but this is already too long!
I know those two were only examples on your part, but my point is that the question “does this help humanity to survive” is always a matter of balancing “does it help in this way to an extent that outweighs how it harms in these other ways”. That has to be taken into account when considering a “horrible scenario”. People having empathy—caring for and helping each other—helps us to survive. People being physically and mentally healthy (“happy” is a big part of both, by the way) helps. People having personal freedom to create and invent and try things helps. People being ambitious and competing and seeking to become better helps. We need principles that take all of that value into account—and sometimes those principles are going to be up against each other, and we have to look for the least-worst answer. It’s never simple, we get it wrong all the time, but we have to deal with it. If morality were easy, we wouldn’t have spent the last ten thousand years arguing about it.
Now, I noticed that elsewhere you said it was bothering you that people were going off on tangents from your main issues, so I’ll try to circle back to your original point. You’re trying to devise a framework for evaluating a moral system, and I do think your criteria raise some useful lines of inquiry, but I don’t see how it’s possible to “evaluate” something without expressing or defining what you want it to do. My evaluation of my hairdryer depends entirely on whether I want it to dry my hair or tell me amusing anecdotes: it comes up “pretty good” on the former and “totally crap” on the latter. “Figuring out a way to evaluate a moral system” is something I’m all for, and the best help I can give is to suggest that you first define what you want a moral system to do—a base on which to build your evaluation framework.
[Edited to add: I got through two paragraphs on eugenics without bringing up the you-know-whozis! Where should I pick up my medal?]