Speculative Model For How Moral Arguments Work
Credibility warning: All of this is wild post-hoc speculation based on my vague intuitions. Don’t read it as if it has any semblance of authority. Feel free to bring up actual evidence if it confirms or denies my speculations, though. Also, this is my first post on LW so please point out if I violated any conventions, norms, etc.
Readability warning: This post was not very carefully edited, so the clarity, grammar, formatting, etc. might be a disaster, and there’s a good chance it’s nearly unreadable at times. Feel free to ask for clarification.
Word abuse warning: I might have unintentionally equivocated between two meanings of “moral”: “not immoral” and “actively good”. I think I consistently used it as “not immoral”, but let me know if I slipped and I’ll fix it. I also might have unintentionally equivocated between two meanings of “immoral”: “bad but can be balanced out by good things” and “so bad that it can never be balanced out by any amount of good things”. Oh, and I might have used the word “axiom” in a slightly unintuitive way. Oops.
_________________________________________________________
Note: When I refer to morality in this post, I do so in the sense of “these are our most fundamental goals/things we should avoid” or “this is the fundamental goodness/badness of these actions”, not “we should act like these are our goals in order to achieve the real fundamental goals”.
Morality is fundamentally subjective and feelings-based, but there seem to be ways I can be persuaded of fundamental moral claims that aren’t just showing me pictures of starving African children. I’m currently a utilitarian, and there was some way I was talked into it; I can articulate the thoughts and arguments that led me here, but I can’t describe exactly why or how they led me here. This is a half-attempt at answering that question by discovering the underlying process that makes me convinced by some moral arguments and unconvinced by others.
I think of moral frameworks as sets of axioms about what is “moral” or “immoral”, from which conclusions follow by deductive logic. There are a few ways I think I can be convinced or unconvinced of a moral axiom:
An axiom can have a strong emotional appeal on its own, and that’s enough to start. Examples:
The idea of killing someone and stealing their organs to save others feels bad to me, which makes it immoral to me until disproven through the methods below. [1]
The idea of causing nothing but suffering feels bad to me, which makes it immoral to me until disproven through the methods below.
Definition: Two axioms being “analogous” means that you have to invent some other stupid-feeling axiom for them not to have the same level of moral consideration. Examples:
Killing someone at 1:00 is analogous to killing the same person in the exact same circumstances in the exact same world except at 1:01, because for them not to have the same level of moral consideration, you need to invent an axiom about the inherent morality of 1:00 vs. 1:01, which feels stupid.
Killing (in the sense of ending a conscious experience) a human is analogous to killing a dog with the same human brain, because it feels stupid (to me, at least) to invent an axiom about the inherent morality of the positioning of the hair, skin, muscle, bones, DNA, etc. near the brain that generates the consciousness.
Rerouting an overflowing dam’s floodwater from a larger city to a smaller city is analogous to forcefully drowning a group of people in the ocean to save a larger group, say because there are a ton of human-hungry sharks nearby (assume suffering, financial resources lost, etc. are all the same), because the only real difference is that one occurs within a city whereas the other doesn’t, and it seems stupid to have an axiom that cares about “city-ness”. [2]
Analogous axioms must be brought to the same level of moral consideration. This is decided by choosing the one with the stronger emotional appeal. For the dog-with-human-brain example, you [3] likely feel more strongly that killing the human is immoral than you do that killing the dog with the human brain is moral, and since they’re analogous, you accept that both are immoral.
(Even more speculative than the rest of this post) The exact way these are reconciled might vaguely approximate the following: I feel with strength “+5” (positively) about scenario A, I feel with strength “-2” (negatively) about scenario B, and they are analogous, so I readjust my feelings to “+3” (5 - 2) for both of them, meaning I think both of them are moral. I recognize the problem of there being infinitely many possible A-like [4] scenarios and infinitely many possible B-like scenarios, making this calculation impossible [5], but maybe there’s something to the addition idea (there’s a rough sketch of it just before the footnotes).
Two axioms can also be “contradictory”: literally logically inconsistent, or requiring a stupid-feeling axiom to make them have the same level of moral consideration. I’m too lazy to come up with an example right now, but hopefully you get the idea; “contradictory” axioms are resolved more or less the same way “analogous” ones are.
I still have no idea how I derive more general principles; maybe it has something to do with recognizing patterns about which axioms are analogous to other axioms? Not sure.
I have no idea how I decide when my moral feelings about an axiom are “trustworthy” or “untrustworthy” (in the sense that I adjust its weight downward when having it “battle” against an analogous axiom). It seems obvious that I should trust a feeling more when the magnitudes involved are, for example, one person rather than a billion people, but what’s the underlying principle that causes me to feel this way? Can this be derived from the above “analogous axioms” strategy somehow?
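To make the addition idea above a bit more concrete, here’s a minimal, purely illustrative sketch in Python. Everything in it is made up for illustration: the scenario names, the numeric scores, the analogous_pairs relation, and the merge_analogous helper are my own toy constructions, not a serious formalization. The idea it shows is just: group scenarios that are declared analogous, sum the signed emotional reactions within each group, and call everything in a group moral if the total is positive and immoral otherwise.

```python
# Toy sketch of the "addition idea": all names and numbers are made up.
# Positive score = feels moral, negative score = feels immoral.
feelings = {
    "scenario A": +5,                       # the post's toy example
    "scenario B": -2,
    "kill the human": -5,                   # dog-with-human-brain example
    "kill the dog with a human brain": +2,
}

# Made-up analogy relation: pairs that differ only by a "stupid-feeling" axiom
# and so must end up with the same moral status.
analogous_pairs = {
    ("scenario A", "scenario B"),
    ("kill the human", "kill the dog with a human brain"),
}


def merge_analogous(feelings, analogous_pairs):
    """Group analogous scenarios and give each group the sum of its scores."""
    # Tiny union-find over the analogy relation.
    parent = {s: s for s in feelings}

    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]  # path halving
            s = parent[s]
        return s

    for a, b in analogous_pairs:
        parent[find(a)] = find(b)

    # Collect the groups (connected components).
    groups = {}
    for s in feelings:
        groups.setdefault(find(s), []).append(s)

    # Each group gets the summed score; sign decides moral vs. immoral.
    verdicts = {}
    for members in groups.values():
        total = sum(feelings[m] for m in members)  # the "5 - 2 = +3" step
        for m in members:
            verdicts[m] = ("moral" if total > 0 else "immoral", total)
    return verdicts


for scenario, (verdict, total) in merge_analogous(feelings, analogous_pairs).items():
    print(f"{scenario}: {verdict} (group total {total:+d})")
```

With these made-up numbers, A and B get readjusted to +3 and both count as moral, while the human and dog-with-human-brain pair sums to -3 and both count as immoral, matching the reconciliation described above. (A total of exactly zero is arbitrarily called “immoral” here; the real model would presumably need to say something smarter about ties, and about the infinite-scenarios problem in footnote [5].)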
Footnotes:
[1]: For what it’s worth, I do consider this one to be “disproven” in the sense that I’m no longer convinced it’s immoral, because I feel more strongly about increasing utility in all cases than I do about not killing people for their organs, which falls under the process I mentioned for resolving contradictory axioms, I think?
[2]: Now that I think about it, this is probably a horrible example because our intuitions on flooding the city are probably based on actual features about the city like buildings and culture and whatnot, so it’s kinda cheating to say “hold everything between the ocean and city constant and ignore all that”. But what makes this feel like valid reasoning for why this is a horrible example? I should have to justify that since it’s kind of what this post is about. Hmm.
[3]: Phrased as “you” because I have mixed feelings about whether killing a human (i.e. causing net human death) is necessarily bad in the first place and didn’t want to mislead about my actual beliefs, but I assume most people here feel that killing a human is necessarily bad unless it’s balanced out by a positive.
[4]: “A-like” here meaning that it’s analogous to A and carries the same “emotional reaction number” as A.
[5]: Uhh, maybe there’s some way to say they’re the same degree of infinity and it somehow cancels? Probably not.