First of all, that was intended as a general statement, not an absolute description of every case. Experiments have been done on people to see if, for example, they stop being opposed to incest in fictional scenarios where the incest is stated outright to be harmless.
Before the scenario was presented, people offered utilitarian justifications for the incest taboo, but even when those were stripped away, they insisted that incest is still “just wrong”. My point is that this is what generally happens when someone points out incoherency in a moral system. People generally switch to offering an axiomatic rationalization for their moral sentiments instead of a utilitarian one.
Also, I have to say:
I have changed my mind about my values due to noticing that my values were inconsistent.
Do you mean that you made a judgement elevating one value above another you had in cases where they conflict? Or do you mean you actually gained a new value? It seems like you must have used some sort of higher-level value preference to make that meta-level moral judgement.
I noticed that my values were inconsistent, and I decided that one of them needed to be expunged. I removed a “value” that had been created at too high a level of abstraction, one which conflicted with the rest of my values and whose actual, important content could be derived from lower-level moral concepts.
First of all, that was intended as a general statement, not an absolute description of every case. Experiments have been done on people to see if, for example, they stop being opposed to incest in fictional scenarios where the incest is stated outright to be harmless.
Before the scenario was presented, people offered utilitarian justifications for the incest taboo, but even when those were stripped away, they insisted that incest is still “just wrong”. My point is that this is what generally happens when someone points out incoherency in a moral system. People generally switch to offering an axiomatic rationalization for their moral sentiments instead of a utilitarian one.
Such a person isn’t going against Academian’s advice. They’ve been led through the correct procedure of analysis, though they’ve only gone part of the way. They’ve found evidence that, all else being equal, it’s better not to give in to a desire to commit incest. The incest itself is what they find bad, not some consequence of it. You haven’t identified an incoherence in their final position.
To continue the analysis, they should see what bad consequences would follow from refraining from the incest, and check whether that badness outweighs the badness of committing it. They should be able to identify the hypothetical scenarios where it’s worse not to commit the incest than to do it.
In the end, they may decide that people shouldn’t commit incest in most typical situations, even when there are no distinct bad consequences of the incest. Whether or not you agree with them, they would still be vastly more reflective about their morality than most people are. It would be great if more people were so reflective, even if they ended up disagreeing with you about which things are harms-in-themselves.
I have changed my mind about my values due to noticing that my values were inconsistent.
Same here (at least twice).
Yeah, but that makes you really really weird.
For which I am truly grateful.