They may not care about you, but your atoms are useful to them in their current configuration.
There are ways of hurting people other than stabbing them, I just used a simple example.
I think there is some confusion about what exactly “selfish” means, and I blame Ayn Rand for it. The heroes in her novels are given the label “selfish” because they do not care about opportunities to actively do something good for other people unless there is also some profit in it for them (which is what a person with zero value for others in their preference function would do), but at the same time they avoid actively harming other people in ways that could bring them some profit (which is not what a perfectly selfish person would do).
As a result, we get quite unrealistic characters who, on one hand, are described as rational profit maximizers who don’t care about others (except instrumentally), but who, on the other hand, follow an independently reinvented deontological framework that seems like it was designed by someone who actually cares about other people but is in deep denial about it (i.e. Ayn Rand).
A truly selfish person (someone who truly does not care about others) would hurt others in situations where doing so is profitable (including second-order effects). A truly selfish person would not arbitrarily invent a deontological code against hurting other people, because such a code is merely a rationalization invented by someone who already has an emotional reason not to hurt other people but wants to pretend it is instead a logical conclusion derived from first principles.
Interacting with a psychopath will likely get you hurt. It will likely not get you killed, because some other way of hurting you has a better risk:benefit profile. Perhaps the most profitable way is to scam you out of some money and use you to get introduced to your friends. Only once in a while will a situation arise where raping someone is sufficiently safe, or killing someone is extremely profitable, e.g. because that person stands in the way of some grand business.
I’m not sure what our disagreement actually is. I agree with your summary of Ayn Rand, and I agree that there are lots of ways to hurt people without stabbing. I’m not sure you’re claiming this, but I think that failure to help is selfish too, though I’m not sure it’s comparable with active harm.
It may be that I’m reacting badly to the use of “truly selfish”—I fear a motte-and-bailey argument is coming, where we define it loosely, and then categorize actions inconsistently as “truly selfish” only in extremes, but then try to define policy to cover far more things.
I think we’re agreed that the world contains a range of motivated behaviors, from sadistic psychopaths (who have NEGATIVE nonzero terms for others’ happiness) to saints (whose utility functions are weighted very heavily toward others’ happiness over their own). I don’t know if we agree that second-order effects very often dominate the observed behaviors over most of this range. I hope we agree that almost everyone changes their behavior to some extent based on visible incentives.
I still disagree with your post that a coefficient of 0 for you in someone’s mind implies murder for pocket change. And I disagree with the implication that murder for pocket change is impossible even if the coefficient is above 0: circumstances matter more than the innate utility function.
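To make the coefficient talk concrete, here is a minimal toy sketch (Python, with made-up payoffs; nothing here is anyone’s actual psychology) of the model both of us seem to have in mind: an agent ranks actions by its own payoff plus a coefficient c times the other person’s payoff.

```python
# Toy model only (illustrative numbers, not anyone's actual claim):
# the agent ranks actions by U = payoff_to_self + c * payoff_to_other,
# where c is the "coefficient" we keep talking about.

def utility(payoff_self, payoff_other, c):
    return payoff_self + c * payoff_other

# (action, payoff to self, payoff to the other person) -- all hypothetical
actions = [
    ("cooperate on a deal", 5, 5),
    ("scam for a small gain", 2, -10),
    ("do nothing", 0, 0),
]

for c in (-0.5, 0.0, 0.5):  # roughly: sadist, "truly selfish", mildly caring
    best = max(actions, key=lambda a: utility(a[1], a[2], c))
    print(f"c = {c:+.1f}: picks '{best[0]}'")
```

With these particular payoffs, both c = 0 and c = 0.5 pick the cooperative deal and only the negative coefficient prefers the scam; bump the scam’s payoff from 2 to 50 and the c = 0 agent flips too. That is the sense in which circumstances (the payoff structure) can matter more than the sign of the coefficient.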
To the OP’s point, it’s hard to know how to accomplish “make people less selfish”, but “make the environment more conducive to positive-sum choices so selfish people take cooperative actions” is quite feasible.
I still disagree with your post that a coefficient of 0 for you in someone’s mind implies murder for pocket change.
I believe this is exactly what it means, unless there is a chance of punishment, or of being hurt by the victim’s self-defense, or of a better alternative interaction with the given person. Do you assume that there is always a more profitable interaction? (What if the target says “hey, I just realized that you are a psychopath, and I do not want to interact with you anymore”, and they mean it.)
Could you please list the pros and cons of deciding whether to murder a stranger who refuses to interact with you, if there is zero risk of being punished, from the perspective of a psychopath? As I see it, the “might get some pocket change” in the pro column is the only nonzero item in this model.
unless there is a chance of punishment, or of being hurt by the victim’s self-defense, or of a better alternative interaction with the given person.
There always is that chance. That’s mostly our disagreement. Using real-world illustrations (murder) for motivational models (utility) really needs to acknowledge the uncertainty and variability, which the vast majority of the time “adds up to normal”. There really aren’t that many murders among strangers. And there are a fair number of people who don’t value others very highly.
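For what it’s worth, the pros-and-cons ledger requested above can be written as a one-line expected-value sketch (the numbers below are placeholders, not estimates of anything). With the weight on the victim set to 0, the stipulated zero-risk case comes out positive, and “there always is that chance” is exactly the observation that a tiny nonzero probability of a huge penalty flips the sign.

```python
# A minimal sketch of the requested pro/con ledger as expected value,
# from the perspective of an agent whose weight on the victim is 0.
# All numbers are placeholders, not empirical estimates.

def expected_value(gain, p_caught, penalty, effort_cost):
    # the victim's loss does not appear: their weight is zero
    return gain - p_caught * penalty - effort_cost

pocket_change = 20   # the only item in the "pro" column
effort_cost = 1      # time and mess -- small but nonzero

# The hypothetical as stated: literally zero risk of punishment.
print(expected_value(pocket_change, p_caught=0.0, penalty=50_000,
                     effort_cost=effort_cost))   # 19.0 (positive)

# The objection: in the real world p_caught is never exactly zero.
print(expected_value(pocket_change, p_caught=0.01, penalty=50_000,
                     effort_cost=effort_cost))   # -481.0 (negative)
```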