I have a fundamental set of morals from which I build my views. They aren’t explicit, but my moral decisions all form a consistent web. Sometimes one of these moral elements must be altered because of an inconsistency it creates, and sometimes my judgement of a situation must be altered because it was inconsistent with the foundation. But ultimately, consistency is what I aim for. I know this is super vague, and for that I apologize.
So far, this has worked for me 100% of the time. There have been some sticky situations, but even those have been worked out (e.g., answering the question “Is it OK for a father to leave if his [very] mentally unstable significant other tricked him into impregnating her?” This did not happen to me, but it was a question I had to answer nonetheless.)
Perhaps to my discredit in the eyes of some LWers: I often think that trying to quantify morals with numbers carries so much uncertainty that it is useless.
How do you know? (Or, if the answer is “I can just tell” or something: How do you know that your consistency is any better than anyone else’s?)
Trial-and-error.
There are, of course, inconsistencies that I’m unaware of: These are known unknowns. The idea, though, is that when I’m presented with a situation, any such relevant inconsistencies come up and are eliminated (either by a change of the foundation or a change of the judgement).
That is, inconsistencies that exist but don’t come up aren’t relevant.
An example, extreme but illustrative: Say an element of this foundational set is “I want to ‘treat everyone equally’”. I interview a Blue man for a job and, upon reflection, think very negatively of him, even though he’s more qualified than the others. When I review the interview as if I were a third party [ignorant of any differences between Blue people and regular people], I come to the conclusion that the interview was actually pretty solid.
I now have a choice to make. Do I actually want to treat people equally? If so, then I must think differently of this Blue man and of Blue people generally, give him the job, and make a very conscious effort to incorporate Blue people into my “everybody” perception. This is a change in judgement. Or maybe I don’t want to treat everyone equally; maybe I want to treat everyone who’s not Blue equally. This is a change in foundation (but this change in foundation would have to be consistent with the other elements of the foundation-set, or those, too, would have to change).
But, until now, my perception of Blue people was irrelevant.
Perhaps it would have been best to say: The process by which I make moral decisions is built to maximize consistency. A lot goes into this: everything from honing the ability to look at a situation as a third party to comparing a decision with decisions I’ve made in the past. As a result, there’s a very practiced part of me that immediately responds to nearly all situations with “Is this inconsistent?”
(An unrelated note: Is there anything in this post I could have cut to make the same point more succinctly? I often feel as if my responses [in general] are too long.)