The Definition of Good and Evil
Epistemic Status: I feel like I stumbled over this; it has passed a few filters for correctness; I have not rigorously explored it, and I cannot adequately defend it, but I think that is more my own failing than the failure of the idea.
I have heard it said that “Good and Evil are Social Constructs”, or “Who’s really to say?”, or “Morality is relative”. I do not like those claims at all, and I think they are completely wrong. Since then, I have either found, developed, or come across (I don’t remember how I got this) a model of Good and Evil, which has so far seemed accurate in every situation I have applied it to. I don’t think I’ve seen this model written out explicitly anywhere, but I have seen people quibble about the meaning of Good & Evil in many places, so whether this turns out to be useful, or laughably naïve, or utterly obvious to everyone but me, I’d rather not keep it to myself anymore.
The purpose of this, I guess, is to figure out what part of the territory this part of the map was supposed to refer to, now that the map has become so smudged and smeared that some people question whether it ever corresponded to the territory at all. I will assume that we have all seen or heard examples of things which are Good, things which are Evil, things which are neither, and things which are somewhere in between. An accurate description of Good & Evil should match those experiences the vast majority (all?) of the time.
It seems to me that, among the clusters of things in possibility space, the core of Good is “to help others at one’s own expense”, while the core of Evil is “to harm others for one’s own benefit”.
In my limited attempts at verifying this, the Goodness or Evilness of an action or situation has so far seemed to correlate with the presence, absence, and intensity of these versions of Good & Evil. Situations where one does great harm to others for one’s own gain seem clearly Evil, like executing political opposition. Situations where one helps others at a cost to oneself seem clearly Good, like carrying people out of a burning building. Situations where neither harm nor help is done, and no benefit is gained nor cost incurred, seem neither Good nor Evil, such as a rock sitting in the sun, doing nothing. Situations where both harm is done & help is given, and where both a cost is incurred and a benefit is gained, seem both Good and Evil, or somewhere in between, such as rescuing an unconscious person from a burning building and then taking their wallet.
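To make those four cases concrete, here is a minimal sketch (in Python; every name and magnitude is invented purely for illustration, and nothing in the model depends on them) of how the model might be operationalized. The point it shows is just that Good and Evil are treated as two separate axes rather than opposite ends of a single scale.

```python
def classify(help_to_others: float, harm_to_others: float,
             cost_to_self: float, benefit_to_self: float) -> str:
    # "to help others at one's own expense"
    good = help_to_others > 0 and cost_to_self > 0
    # "to harm others for one's own benefit"
    evil = harm_to_others > 0 and benefit_to_self > 0
    if good and evil:
        return "both Good and Evil / somewhere in between"
    if good:
        return "Good"
    if evil:
        return "Evil"
    return "neither"

# The four example situations above, with made-up magnitudes:
print(classify(0, 10, 0, 5))   # executing political opposition              -> Evil
print(classify(10, 0, 5, 0))   # carrying people out of a burning building   -> Good
print(classify(0, 0, 0, 0))    # a rock sitting in the sun                   -> neither
print(classify(10, 2, 5, 3))   # rescuing someone, then taking their wallet  -> both / in between
```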
The correctness of this explanation depends on whether it matches others’ judgements of specific instances of Good or Evil, so I can’t really prove it from my armchair. The only counterexamples I have seen so far involved significant amounts of motivated reasoning (someone who was certain that theft wasn’t wrong when they did it).
I’m sure there are many things wrong with this, but I can’t expect to become better at rationality if I’m not willing to be crappy at it first.
That makes self-defence Evil. It even makes cultivating one’s own garden Evil (by not doing all the Good that one might). And some argue that. Do you?
For self-defense, that’s still a feature and not a bug. It’s generally seen as more evil to do more harm when defending yourself, and in law, defending yourself with lethal force is “justifiable homicide”; it’s specifically called out as something much like an “acceptable evil”. Would it be more or less evil to cause an attacker to change their ways without harming them? Would it be more or less evil to torture an attacker before killing them?
“...by not doing all the Good...” In the model, it’s actually quite intentional that “a lack of Good” is not part of the definition of Evil, because it really isn’t the same thing. There are idiosyncrasies in this model which I have not yet found all of. Thank you for pointing them out!
Your intuitions about what’s good and what’s evil are fully consistent with morality being relative, and with them being social constructs. You are deeply embedded in many overlapping cultures and groups, and your beliefs about good and evil will naturally align with what they want you to believe (which in many cases is what they actually believe, but there’s a LOT of hypocrisy on this topic, so it’s not perfectly clear).
I personally like those guidelines. Though I’d call it good to help others EVEN if you benefit in doing so, and evil to do significant net harm, but with a pretty big carveout for small harms which can be modeled as benefits on different dimensions. And my liking them doesn’t make them real or objective.
The first paragraph is equivalent to saying that “all good & evil is socially constructed because we live in a society”, and I don’t want to call someone wrong, so let me try to explain...
An accurate model of Good & Evil will hold true, valid, and meaningful among any population of agents: human, animal, artificial, or otherwise. It is not at all dependent on existing in our current, modern society. Populations that do significant amounts of Good amongst each other generally thrive & are resilient (e.g. humans, ants, rats, wolves, cells in any body, many others), even though some individuals may fail or die horribly. Populations which do significant amounts of Evil tend to be less resilient, or destroy themselves (e.g. high-crime areas, cancer cells), even though certain members of those populations may be wildly successful, at least temporarily.
This isn’t even a human-centric model, so it’s not “constructed by society”. It seems to me more likely to be a model that societies have to conform to, in order to exist in a form that is recognizable as a society.
I apologize for being flippant, and thank you for replying, as having to overcome challenges to this helps me figure it out more!
An accurate model of Good & Evil will hold true, valid, and meaningful among any population of agents: human, animal, artificial, or otherwise.

I look forward to seeing such a model. Or even the foundation of such a model and an indication of how you know it’s truly about good and evil, rather than efficient and in-.
I think, to form an ethical system that passes basic muster, you can’t only take into account the immediate good/bad effects of an action on people. That would treat the two cases “you dump toxic waste on other people’s lawns because you find it funny” and “you enjoy peacefully reading a book by yourself, and other people hate this because they hate you and they hate it when you enjoy yourself” the same.
If you start from a utilitarian perspective, I think you quickly figure out that there need to be rules—that having rules (which people treat as ends in themselves) leads to higher utility than following naive calculations. And I think some version of property rights is the only plausible rule set that anyone has come up with, or at least is the starting point. Then actions may be considered ethically bad to the extent that they violate the rules.
Regarding Good and Evil… I think I would use those words to refer to when someone is conscious of the choice between good and bad actions, and chooses one or the other, respectively. When I think of “monstrously evil”, I think of an intelligent person who understands good people and the system they’re in, and uses their intelligence and their resources to e.g. select the best people and hurt them specifically, or to find the weakest spots and sabotage the system most thoroughly and efficiently. I can imagine a dumb evil person, but I think they still have to know that an option is morally bad and choose it; if they don’t understand what they’re doing in that respect, then they’re not evil.
you enjoy peacefully reading a book by yourself, and other people hate this because they hate you and they hate it when you enjoy yourself

The problem with making hypothetical examples is when you make them so unreal as to just be moving words around. Playing music/sound/whatever loud enough to be noise pollution would be similar to the first example. Less severe, but similar. Spreading manure on your lawn so that your entire neighborhood stinks would also be less severe, but similar. But if you’re going to say “reading” and then have hypothetical people not react to reading in the way that actual people actually do, then your hypothetical example isn’t going to be meaningful.
As for requiring consciousness, that’s why I was judging actions, not the agents themselves. Agents tend to do both, to some degree.
Ok, if you want more realistic examples, consider:
driving around in a fancy car that you legitimately earned the money to buy, and your neighbors are jealous and hate seeing it (and it’s not an eyesore, nor is their complaint about wear and tear on the road or congestion)
succeeding at a career (through skill and hard work) that your neighbors failed at, which reminds them of their failure and they feel regret
marrying someone of a race or sex that causes some of your neighbors great anguish due to their beliefs
maintaining a social relationship with someone who has opinions your neighbors really hate
having resources that they really want—I mean really really want, I mean need—no matter how much you like having it, I can always work myself up into a height of emotion such that I want it more than you, and therefore aggregate utility is optimized if you give it to me
The category is “peaceful things you should be allowed to do—that I would write off any ethical system that forbade you from doing—even though they (a) benefit you, (b) harm others, and (c) might even be net-negative (at least naively, in the short term) in aggregate utility”. The point is that other people’s psyches can work in arbitrary ways that assign negative payoffs to peaceful, benign actions of yours, and if the ethical system allows them to use this to control your behavior or grab your resources, then they’re incentivized to bend their psyches in that direction—to dwell on their envy and hatred and let them grow. (Also, since mind-reading isn’t currently practical, any implementation of the ethical system relies on people’s ability to self-report their preferences, and to be convincing about it.) The winners would be those who are best able to convince others of how needy they are (possibly by becoming that needy).
Therefore, any acceptable ethical system must be resistant to this kind of utilitarian coercion. As I say, rules—generally systems of rights, generally those that begin with the right to one’s self and one’s property—are the only plausible solution I’ve encountered.
Whom/what an agent is willing to do Evil to, vs whom/what it would prefer to do Good to, sort of defines an in-group/out-group divide, in a similar way to how the decision to cooperate or defect does in the Prisoner’s Dilemma. Hmmm...
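Purely as a toy sketch of that analogy (standard Prisoner’s Dilemma payoffs; the agent and its in-group test are invented for illustration): cooperating looks roughly like “accept a cost so the other does better” (Good), defecting looks roughly like “gain at the other’s expense” (Evil), and an agent that cooperates with its in-group while defecting against everyone else draws exactly that divide.

```python
# Toy illustration only: standard Prisoner's Dilemma payoffs, with Good/Evil
# (as defined in the post) mapped onto cooperate/defect toward in-/out-group.
PAYOFFS = {  # (my_move, their_move) -> (my_payoff, their_payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def choose_move(other_is_in_group: bool) -> str:
    # Good toward the in-group (accept a cost so they do better),
    # Evil toward the out-group (gain at their expense).
    return "cooperate" if other_is_in_group else "defect"

for in_group in (True, False):
    my_move = choose_move(in_group)
    me, them = PAYOFFS[(my_move, "cooperate")]  # suppose the other player cooperates
    print(f"other is in-group={in_group}: I {my_move}; payoffs (me, them) = ({me}, {them})")
```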