Ok, if you want more realistic examples, consider:
driving around in a fancy car that you legitimately earned the money to buy, and your neighbors are jealous and hate seeing it (and it’s not an eyesore, nor is their complaint about wear and tear on the road or congestion)
succeeding at a career (through skill and hard work) that your neighbors failed at, which reminds them of their failure and they feel regret
marrying someone of a race or sex that causes some of your neighbors great anguish due to their beliefs
maintaining a social relationship with someone who has opinions your neighbors really hate
having resources that they really want—I mean really really want, I mean need—no matter how much you like having them, I can always work myself up into a height of emotion such that I want them more than you do, and therefore aggregate utility is optimized if you give them to me
The category is “peaceful things you should be allowed to do—that I would write off any ethical system that forbade you from doing—even though they (a) benefit you, (b) harm others, and (c) might even be net-negative (at least naively, in the short term) in aggregate utility”. The point is that other people’s psyches can work in arbitrary ways that assign negative payoffs to peaceful, benign actions of yours, and if the ethical system allows them to use this to control your behavior or grab your resources, then they’re incentivized to bend their psyches in that direction—to dwell on their envy and hatred and let them grow. (Also, since mind-reading isn’t currently practical, any implementation of the ethical system relies on people’s ability to self-report their preferences, and to be convincing about it.) The winners would be those who are best able to convince others of how needy they are (possibly by becoming that needy).
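The incentive problem in that paragraph can be sketched as a toy allocation game (the names and numbers are mine, purely illustrative): a naive utilitarian allocator hands a contested resource to whoever self-reports the higher utility from holding it, and since reports are unverifiable, exaggeration is the winning move.

```python
def allocate(reports):
    """Give the resource to the agent with the highest self-reported need.

    The allocator has no mind-reading: it must take the reports at face value.
    """
    return max(reports, key=reports.get)

# Honest reports: the current owner genuinely values the resource more.
honest = {"owner": 10, "claimant": 6}

# The claimant exaggerates (or dwells on the want until it becomes genuine
# neediness); the allocator cannot tell the difference either way.
strategic = {"owner": 10, "claimant": 11}

print(allocate(honest))     # owner keeps it
print(allocate(strategic))  # claimant takes it: inflating the report pays
```

The point of the sketch is that nothing in the allocator constrains the reports, so the equilibrium rewards whoever escalates their stated (or cultivated) need the most.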
Therefore, any acceptable ethical system must be resistant to this kind of utilitarian coercion. As I say, rules—generally systems of rights, typically those that begin with the right to one’s self and one’s property—are the only plausible solution I’ve encountered.