I don’t have enough data about it, but I think it is possible that these horrible mass behaviors start with some dark individuals doing it first… and others gradually joining them after observing that the behavior wasn’t punished, and maybe that they kinda need to do the same thing in order to remain competitive.
In other words, the average person is quite happy to join some evil behavior that is socially approved, but there are individuals who are quite happy to initiate it. Removing those individuals from positions of power could stop many such avalanches.
(In my model, the average person is kinda amoral—happy to copy most behaviors of their neighbors, good and bad alike—and then we have small fractions of genuinely good and genuinely bad people, who act outside the Overton window; plus we can make society better or worse by incentives and propaganda. For example, punishing bad behavior will deter most people, and stories about heroes will inspire some.)
EDIT:
For example, you mention colonialism. Maybe most people approved of it, but only some of them made the decisions and organized it. Remove the organizers, and there is no colonialism. More importantly, I think that most people approved of having the colonies simply because it was the status quo. The average person’s moral compass could probably be best described as “don’t do weird things”.
I think a big part of the problem is that in a situation of power imbalance, there’s a large reward lying around for someone to do bad things—plunder colonies for gold, slaves, and territory; raise and slaughter animals in factory farms—as long as the rest can enjoy the fruits of it without feeling personally responsible. There’s no comparable gradient in favor of good things (“good” is often unselfish, uncompetitive, unprofitable).
In theory, the reward for doing good should be prestige. (Which in turn may translate to more tangible rewards.) But that mostly works in small groups and doesn’t scale well.
Some aspect of this seems like a coordination problem. Whatever is your personal definition of “good”, you would probably approve of a system that gives good people some kind of prestige, at least among other good people.
For example, people may disagree about whether veganism is good or bad, but from the perspective of a vegan, it would be nice if vegans could have some magical “vegan mark” that would be unfalsifiable and immediately visible to other vegans. That way, you could promote your values not just by practicing and preaching them, but also by rewarding other people who practice the same values. (For example, if you sell some products, you could give discounts to vegans. If many people start doing that, veganism may become more popular. Perhaps some people would criticize that as doing things for the wrong reasons, but the animals probably wouldn’t mind.) Similarly, effective altruists would approve of rewarding effective altruists, open source developers would approve of rewarding open source developers, etc.
These things exist to some degree (e.g. open source developers can put a link to their projects in a profile), but often the existing solutions don’t scale well. If you only have a dozen effective altruists, they know each other by name, but if you get thousands, this stops working.
One problem here is the association of “good” with “unselfish” and “non-judgmental”, which suggests that good people rewarding other good people is somehow… bad? In my opinion, we need to rethink that, because from the perspective of incentives and reinforcement, that is utterly stupid. The reason for these memes is that the past attempts to reward good often led to… people optimizing to be seen as good, rather than actually being good. That is a serious problem that I don’t know how to solve; I just have a strong feeling that going to the opposite extreme is not the right answer.