The more removed someone is from me, the fewer resources I should expend per unit of their suffering.
We could make this ethical theory quantifiable by introducing a constant E (a coefficient in the exponent of the distance-care function), such that E = 1 means you care about everyone’s suffering equally and E = 0 means you do not care about anyone else’s suffering at all. We could then argue that the optimal value is, say, E = 0.7635554, and perhaps conclude that people with E > 0.7635554 are just virtue signalling, while people with E < 0.7635554 are selfish assholes.
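A minimal sketch of how such a weighting might look, assuming (the text does not specify this) that the distance-care function takes the form care(d) = E^d, which at least matches the stated endpoints: E = 1 gives equal care at every distance, E = 0 gives no care for anyone but yourself.

```python
# Hypothetical distance-care weighting, assuming care(d) = E ** d.
# "distance" here is moral/social distance: 0 is yourself, larger values
# are people more removed from you.

def care_weight(distance: float, E: float) -> float:
    """Fraction of resources spent per unit of suffering at a given distance.

    E = 1 -> equal care at every distance; E = 0 -> care only about yourself
    (Python evaluates 0.0 ** 0 as 1.0, i.e. full care at distance zero).
    """
    return E ** distance

if __name__ == "__main__":
    for E in (1.0, 0.7635554, 0.0):
        weights = [round(care_weight(d, E), 4) for d in range(5)]
        print(f"E = {E}: care at distances 0..4 -> {weights}")
```

Running this shows the three regimes: a flat line of 1.0 for E = 1, a geometric decay for the "optimal" E, and a spike at distance zero for E = 0.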