Hopefully it’s a hypothetical or I’m misreading it, but at a gut level this is pretty sad and pretty horrible.
Plenty of people would disagree; so do I. What now? Your gut versus my gut? To each his own, I’d say.
(Ever tried the “no false sincerity” approach in a job interview? Never feigned interest in some of what a romantic interest was telling you, even if in fact you couldn’t care less?)
Never feigned interest in some of what a romantic interest was telling you, even if in fact you couldn’t care less?
Huh, no I haven’t—my romantic interests have tended to tell me things that I was genuinely interested in, or at least to quickly realize when I wasn’t and change the topic accordingly. Which is probably a large part of the difference between a romantic interest and someone I’d like to have a one-night stand with.
Not even as a teenager?

Now that I think about it, I did that until I was about 17, towards people of the gender I’m not attracted to; for example, I would force myself to watch soccer matches even though I found them boring as hell. Then I started hanging around people who didn’t give a damn either (mainly females, the kind of people whom Paul Graham here calls freaks, etc.).

Not that I can remember, but then again I was a helluva dork back then.
I was asked where I draw the line, and this to me is beyond the line. So far beyond the line that it’s a dot to you, etc. There may be plenty of people who agree that this sort of thing is fine, although I’m not clear on what they’re agreeing with. Are you saying it can be a moral good to ‘manipulate people whom you hold in contempt into sleeping with you’ because everyone wins: they get the benefit of your (false) empathy and nice-guyness, and you get sex? Or is this a misrepresentation, and everything’s more innocent than that?
But my problem is not with feigned interest as such—that’s part of a whole host of social dynamics—it’s with treating people as things. Feigning interest can actually be part of recognising that people are people. But doing it just to get what you came for? Not so much.
Are you saying it can be a moral good to ‘manipulate people whom you hold in contempt into sleeping with you’ because everyone wins: they get the benefit of your (false) empathy and nice-guyness, and you get sex?
We can go with that example. “Moral good” is a lofty term; in the example it certainly gives the guy utilons and the girl utilons—seems like a win-win to me. Where’s the downside? Or are you thinking of some personified “honest truth, never manipulate people to your benefit” avatar, an idea made flesh, who’s crying in a corner?
You could argue on the basis of “spoiling the common good by furthering a society full of dishonest manipulation”. But then again, you could say the same of non-vegetarians, or of car drivers, on similar (common good) grounds. Personally, if I could optimize the world I would mandate honesty for everyone (minus myself). Could you imagine?
Does that “people are people” paradigm mean it’s “pretty sad and pretty horrible” to press a secret button you see—but the other human does not—the pressing of which will help both people involved? Because that’s just inefficient.
Where’s the benefit in giving up optimizing power to adhere to some reified vague-cultural-norms paradigm of “must not analyze human behavior and act accordingly”? You can of course value that for its own sake, but why should others?
The fact that I’m not convinced it gives the girl utilons is basically my point: I’m a consequentialist at heart. My experience suggests that people sleeping with people who pretend to like them but actually hold them in contempt does not lead to good things. Not sure where avatars or crying in corners come into that.
The ‘people are people’ paradigm is a translation of an incredibly detailed, recursive consequentialist approach into a rule you can actually live by. It’s theoretically possible that this person who screws people they detest is maximising mutual happiness, but it’s implausible to me. That person could probably sleep with someone who doesn’t hold them in contempt, for starters. I get the attraction of acting as a disinterested benefit-maximiser for the world—in my most recent real-world moral dispute I got in trouble for precisely this sort of attempt to tinker—but when it happens to correlate with getting laid, I’m not convinced many people are good at analysing the situation.
If that’s the genuine intention, I think it shows a failure to allow for human nature. And in practice, one of the problems with the approach is that it makes it all too easy to justify whatever you feel like doing in terms of utilons.
Since your preferences already include the preferences of others to the degree that you care about other-preferences, just evaluating which course of action most satisfies your own preferences and then going with that is tautologically the course of action you should follow. (We probably agree thus far.)
Would you like everyone to have some converged, equilibrium utility function which values all other (humans’? sentients’?) preferences equally with “its own” (a few qualifiers omitted because of interdependencies)? “I will break this piece of bread into 7 billion pieces, or into as many pieces as I can effectively distribute”?
Do you go around comparing your net worth with that of every stranger walking by you, then equalizing? Since you do not, apparently you too prefer some non-equal trade-off; you too haven’t incorporated the preferences of others into your own at equal weight. Having established that, we just need to haggle about where the line goes. (But even if we don’t agree on that, what makes my line better or worse than yours? I think yours is worse, you think mine is; now what?)
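To make that line-haggling concrete, here is a minimal sketch (all names and payoffs invented for illustration, not a model of anyone’s actual position): the whole dispute reduces to the weight w an agent places on other-preferences, with w = 0 the pure egoist and w = 1 the perfect equal-weight altruist.

```python
# Toy model: an agent whose utility blends its own payoff with others'
# payoffs via an "altruism weight" w in [0, 1]. All payoffs are invented.

def blended_utility(own: float, others: list[float], w: float) -> float:
    """Own payoff plus w times the sum of everyone else's payoffs."""
    return own + w * sum(others)

# (own payoff, payoffs to two other people) for each available action
actions = {
    "keep_everything": (10.0, [0.0, 0.0]),
    "share_a_little":  (7.0,  [4.0, 4.0]),
    "split_equally":   (5.0,  [5.5, 5.5]),
}

for w in (0.0, 0.5, 1.0):
    best = max(actions, key=lambda a: blended_utility(*actions[a], w=w))
    print(f"w = {w}: the agent chooses {best}")
# w = 0.0 -> keep_everything; w = 0.5 -> share_a_little; w = 1.0 -> split_equally
```

Every agent in this sketch is “maximizing its own utility function”; they differ only in w. That is the line being haggled over.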
So I guess you’d argue for considering the preferences of others as highly important, as opposed to tangentially important? Just concerning people you meet face to face? Including nameless Chinese peasants, or do you privilege people who can talk to you? Do you reject the advertising industry, which plainly tries to manipulate people as things?
If there were a button I could press that would cause you to sign over all your resources to me, would I press it? Of course I would. You wouldn’t? I hope you never interact with the corporate world, inc.
Again, to each his/her own. What I reject is valuing some “treat people like people” paradigm, a self-crippling abandonment of efficiently optimizing other goals, above other utility functions in some objective manner.
Then again, I object to any utility function being regarded as “good” or “bad” in a global sense, which does not preclude me from wanting to prevent certain utility functions from being implemented (think paperclip maximizer). But I do so for blatantly selfish reasons (it would interfere with my fulfilling my own utility function), and so do you (even if your utility function values other-preferences more highly).
An effective altruist who treats people like things instrumentally in accomplishing his/her goals can do more “good” (as judged by him/herself) overall. People are complex, treating people like things (if that’s what you’d mean by taking opportunities which clearly present themselves) doesn’t preclude you from valuing people.
A truly effective altruist might be justified in doing all sorts of things on consequentialist grounds. But I think incredibly few people are effective altruists. Once someone has reached the point of giving away almost all their money, doing things that make them clearly unhappy, etc., for the sake of the greater good, then I would see their manipulative actions in a different light. Where people actually inhabit a border-world in which they can wield wider influence outside of a social context, I think some radical positions can be justified: at the apex of revolution, killing the children can be justified for the greater good, etc.
I just think there’s a natural scepticism about these sorts of reasons when they’re used to justify tricking your way into getting laid.
Using ‘selfish’ to mean maximising your own utility function also bleeds meaning out of things: by what possible definition would someone not be selfish here? Does ‘unselfish’ simply mean incompetent, so that a person who actually values others but in practice tries to accumulate wealth and power is unselfish in this sense? You’re doing violence to language here. It might be justified if the main tension in life were between different models of utility, but given that for most people the immediate tension is between what you should do and what suits you, redefining selfishness is incredibly unhelpful.
I’d also be interested in what ‘valuing people’ you got from what I was responding to:
“I empathize with some girl about whatever dopey thing she and her girlfriends have got in to, which I couldn’t have the least bit of interest in, but I am a nice guy. Later we have sex.”
Using ‘selfish’ to mean maximising your own utility function also bleeds meaning out of things: by what possible definition would someone not be selfish here?
Someone can fail to maximize their own utility function due to akrasia, irrationality, or incorrect information. (But I agree that “selfish” is an extremely poor choice for a word for that. See e.g. this about “sacrificing one’s own happiness for the sake of others” vs “gaining one’s happiness through the happiness of others”.)
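To illustrate that point (payoffs invented): an agent with a fixed utility function but mistaken beliefs picks the option that looks best to it, and thereby fails to maximize that very function.

```python
# The agent's utility function is fixed, but its beliefs about the payoffs
# are wrong, so its choice fails to maximize its own utility function.

true_payoff     = {"option_a": 2.0, "option_b": 9.0}
believed_payoff = {"option_a": 10.0, "option_b": 9.0}  # mistaken about option_a

choice      = max(believed_payoff, key=believed_payoff.get)  # picks "option_a"
actual_best = max(true_payoff, key=true_payoff.get)          # really "option_b"
print(choice, actual_best, choice == actual_best)  # option_a option_b False
```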
What is so much better about altruism than just living your life however the hell you want? Whence the superiority?
I see two utility functions; I’m gonna prefer the one which is more in tune with my own needs. That is, I too would like everyone else to be a perfect altruist, if that translates into them helping me fulfill my own preferences. The impression you exude is that you have a more objective, general notion in mind of why incorporating other-preferences into your own so strongly is somehow superior.
Would you share it? And why doesn’t it apply to the girl? If you were the girl in that situation, shouldn’t you oblige the guy and sleep with him, without him having to listen to the dopey thing first? Clearly it’s his strong preference, no? So why wouldn’t you?
Using ‘selfish’ to mean maximising your own utility function also bleeds meaning out of things: by what possible definition would someone not be selfish here?
That’s just it: you can’t. I don’t see that as removing pertinent meaning, but more as removing a misconception. Someone helping others out because he likes the social affirmation he was taught to associate with it is doing so because he originally liked the social affirmation and has grown to expect it, no different from the bell chiming for the Pavlovian dog. I still like people helping me out, but that doesn’t mean they don’t do so for selfish reasons (maybe they like me liking them) and/or because they have been conditioned.
In another sense, we can of course use ‘selfish’ to denote how much or how little other-preferences play a role in your own preferences, as long as we don’t forget that you maximize other-preferences because that’s what your own preferences are.
I’d also be interested in what ‘valuing people’ you got from what I was responding to:
Not to be facetious, but: he could rob her, drug her (in scenarios without palpable legal repercussions for him), or use her insecurities to convince her to drop out of school to be at his beck and call. Instead, his behavior is more balanced: he seems willing to compromise some of her preferences (“I only want to be with an honest guy”), to serve some of her other preferences (“I spent a great evening with a guy who takes my issues seriously; I feel affirmed”), but not to strictly disregard all her preferences.
Seems like valuing her preferences—above being indifferent to them—to me. Let’s not underestimate the gravity of what “I don’t care what happens to her at all” would actually mean. Empathy, to some degree, is ingrained in all of us. So are social boundaries, by way of early conditioning. But, as always, just because we can explain how our norms and instincts evolved does not make them superior in some objective sense to any others (that would be a naturalistic fallacy). Which does not preclude us from preferring others whose values better serve our own, without putting such altruism on some pseudo-objective pedestal.
Someone helping others out because he likes the social affirmation he was taught to associate with it is doing so because he originally liked the social affirmation and has grown to expect it, no different from the bell chiming for the Pavlovian dog. I still like people helping me out, but that doesn’t mean they don’t do so for selfish reasons (maybe they like me liking them) and/or because they have been conditioned.

“So which of these two is the actual altruist? Whichever one actually holds open doors for little old ladies.”
I do think altruism is superior: I’m not sure how exactly to unpack ethical statements, but I believe altruism is better than egoism, definitely. I also think that ‘selfishness’ has a very well understood meaning about maximising your own happiness/power/whatever and that redefining it so that it’s selfish to do what you think is right is fairly pointless. ‘Preferences’ is a ridiculously broad term and you seem to be treating ‘people follow their preferences’ as true by definition, meaning that ‘people are selfish’ doesn’t have much content as a claim.
In practice, people aren’t perfect altruists: but defining however you act as maximising your utility function and therefore just as good as anything else is just a refusal to engage on ethics: you end up reverting to brute force (‘I cannot object ethically to the fact your utility function involves rape and murder: but I can oppose you based on my utility function’). Not sure what good moving all ethical debate to this level achieves.
Oh, and on the altruistically-having-sex approach: again, we live in a society where we reasonably expect non-interference and non-deception but don’t usually expect people to actively do what they don’t want to do. A theoretical utility-maximiser might have sex with people they’re not attracted to, sure.
On valuing people: I would understand valuing someone to go beyond the level of ‘I won’t actively harm and abuse you on a whim’. Although even in the hard sense of valuing (does he care about her at all?), the statement that kicked this off doesn’t demonstrate any consideration for her experience. As you note, raping/drugging etc. have bad consequences for him; and as for getting her to drop out, I imagine it would take far more effort, have far more unpredictable results (she or her friends might end up taking revenge for his screwing up her life), and not be worth it if he just wants sex.
a theoretical utility-maximiser might have sex with people they’re not attracted to, sure
It depends on what their utility function is—assuming the orthogonality thesis, for any X whatsoever there’s a theoretical utility maximiser who might do X, so that’s not terribly informative about X.
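A quick sketch of why that is (action names invented): assuming orthogonality, you can pair a maximiser with an indicator utility function for any action X whatsoever, so “some theoretical maximiser might do X” is true for every X and rules nothing out.

```python
# For any action X, an indicator utility function makes a maximizer choose X,
# so "a theoretical utility maximiser might do X" tells us nothing about X.

def indicator_utility(target: str):
    """Utility 1.0 for the target action, 0.0 for everything else."""
    return lambda action: 1.0 if action == target else 0.0

def maximize(utility, available):
    """A bare-bones 'theoretical utility maximiser'."""
    return max(available, key=utility)

actions = ["hold_the_door", "feign_interest", "maximize_paperclips"]
for x in actions:
    assert maximize(indicator_utility(x), actions) == x  # some maximiser does X
```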
Does that “people are people” paradigm mean it’s “pretty sad and pretty horrible” to press a secret button you see—but the other human does not—the pressing of which will help both people involved?
Why couldn’t I just tell the other human about the button?