After asking that cousin_it abandon charged words like “victim” that I suspect he is just using for shock value, I am actually going to rewrite his statement seriously and examine it seriously:
The goal of pickup is to engineer the most desirable outcome for the user of pickup, not the most desirable outcome for the other participant.
On the face of it, this statement might make pickup sound zero-sum, but that’s not the only interpretation. Pickup is about attempting to bring about the most desirable outcome for the user of pickup, yes, but that doesn’t mean that it creates an undesirable outcome for the other person (from their perspective). I would propose a slightly altered summary:
“The goal of pickup is to engineer the most desirable outcome for the user of pickup, without harming the other participant.”
You have a comparative advantage in advocating for your own preferences. Social interaction (of which sexuality is only a subset) works best when people advocate for their own preferences, attempting to align others’ preferences with theirs, and without harming others.
Of course, this process is bilateral (which is why I changed “victim” to “participant”), so both participants are actually trying to engineer the outcome towards their preferences at the same time (and also engineer each other’s preferences to align with theirs!). With two people of similar ability, the result will be some sort of intersection or union of their preferences.
But this compromise only comes about when both people mainly advocate for their own preferences. Sexuality and romance are a form of negotiation. Pickup teaches negotiation skills, but it is hardly the only source of them. Many people already have sexual negotiation skills, but certain segments of men are deficient in them, which is why pickup is necessary for those men.
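The bargaining framing above can be made concrete. Here is a toy Nash-bargaining sketch (an illustration of my own, with invented outcomes and scores; nothing below comes from the thread itself): each participant rates candidate outcomes by their own preferences, and the compromise that emerges is the outcome maximizing the product of each side’s gain over their walk-away point.

```python
# Toy Nash bargaining over a small set of candidate outcomes.
# All names and scores are invented for illustration.

outcomes = {
    "outcome_a": (9, 2),   # great for A, poor for B
    "outcome_b": (6, 6),   # decent for both
    "outcome_c": (2, 9),   # poor for A, great for B
    "walk_away": (3, 3),   # each side's no-deal baseline
}

disagreement = outcomes["walk_away"]

def nash_product(scores):
    """Product of each side's gain over their walk-away payoff."""
    u_a, u_b = scores
    d_a, d_b = disagreement
    return max(u_a - d_a, 0) * max(u_b - d_b, 0)

# The bargain that emerges is the outcome with the largest product of
# gains -- here the mutually decent outcome_b, not either one-sided win.
best = max(outcomes, key=lambda name: nash_product(outcomes[name]))
print(best)  # -> outcome_b
```

The point of the toy model is only this: both sides advocating for their own scores is what pins down the compromise. If one side’s preferences drop out of the product, the “bargain” degenerates into the other side’s favorite outcome.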
So yes, the goal of pickup is to advance towards your most desirable outcome… and if you are a decent person, then your most desirable outcome won’t include absolutely trampling over the other person if they are a crappy negotiator and can’t handle you. Simultaneously, the other person’s goal is to advance towards their most desirable outcome.
Unfortunately, the cultural bias towards narratives of villainous men abusing damsels in distress makes male sexual negotiation skills seem a lot more suspect than women’s. As I pointed out recently, nobody worries about innocent, insecure beginner PUAs getting used by women for sex and validation… thanks to the unwarranted assumption that PUAs are so far ahead of women in negotiation skill that they are performing some kind of black magic or mind control on women.
If I wanted to make women feel better, I’d just buy them flowers instead of doing pickup.
Personally, I find it much easier to make women feel good through pickup than through flowers.
Social interaction (of which sexuality is only a subset) works best when people advocate for their own preferences, attempting to align others’ preferences with theirs, and without harming others.
This is exactly the kind of argument that I wanted to shoot down.
IMO we shouldn’t have a norm of requiring people to give altruistic justifications whenever they discuss better ways of maximizing their own utility function, even if that utility function may be repugnant to some. Discussions of morality (ends) should not intrude on discussions of rationality (means), especially not here on LW! If you allow a field to develop its instrumental rationality for a while without moralists sticking their noses in, you get something awesome like Schelling, or PUA, or pretty butterflies. If you get stuck discussing morals, you get… nothing much.
If you allow a field to develop its instrumental rationality for a while without moralists sticking their noses in, you get something awesome like Schelling, or PUA, or pretty butterflies. If you get stuck discussing morals, you get… nothing much.
You may be on to something here; this may be a very useful heuristic against which to check our moral intuitions.
On the other hand, one still has to be careful: you probably wouldn’t want to encourage people to refine the art of taking over a country as a genocidal dictator, for example.
On the other hand, one still has to be careful: you probably wouldn’t want to encourage people to refine the art of taking over a country as a genocidal dictator, for example.
Although it is interesting to study in theory. For example, through the Art of War, the Laws of Power, history itself, or computer simulations. Just so long as it doesn’t involve much real world experimentation. :)
Just so long as it doesn’t involve much real world experimentation. :)
But this is the fundamental problem: you don’t want to let the theory in any field get too far ahead of the real world experimentation. If it does, it makes it harder for the people who eventually do good (and ethical) research to have their work integrated properly into the field’s body of knowledge. And knowledge that is not based on research is likely to be false. So an important question in any field should be “is there some portion of this that can be studied ethically?”
If we “develop its instrumental rationality for a while without moralists sticking their noses in”, we run the risk of letting theories run wild without sufficient evidence [evo-psych, I’m looking at you] or of relying on unethically-obtained (and therefore less-trustworthy) evidence.
“Unethically obtained evidence is less trustworthy” is the wrongest thing I’ve heard in this whole discussion :-)
How so? When scientists perform studies, they can sometimes benefit (money, job, or simply reputation) by inventing data or otherwise skipping steps in their research. At other times, they can benefit by declining to publish a result. A scientist who is willing to violate certain ethical principles (lying, cheating, etc.) is surely more willing to act unethically in publishing (or declining to publish) their studies.
Possibly more willing. They might be willing to sacrifice moral standards for the sake of furthering human knowledge that they wouldn’t sacrifice for personal gain. It would still be evidence of untrustworthiness, though.
I like what you are saying in the second paragraph there… but I also agree with the quote from Hugh. So the whole ‘wanted to shoot down’ part doesn’t seem to fit in between.
I agree with this in the abstract, but in all particular situations the ‘morality’ is part of the content of the ‘utility function’, so it is directly relevant to whether something really is a better way of maximizing that utility function.
If you’re talking about behaviors, morality is relevant.
I agree with this in the abstract, but if you adopt the view that morality is already factored into your utility function (as I do), then you probably don’t need to pay attention when other people say your behavior is immoral (as many critics of PUA here do). I think when Alice calls Bob’s behavior immoral, she’s not setting out to help Bob maximize his utility function more effectively; she’s trying to enforce a perceived social contract or just score points.
if you adopt the view that morality is already factored into your utility function
(You are not necessarily able to intuitively feel what your “utility function” specifies, and moral arguments can point out to you that you are not paying attention, for example, to its terms that refer to experience of specific other people.)
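That parenthetical can be rendered as a toy model (purely illustrative; the term names, weights, and numbers are all invented): if a utility function is a weighted sum of terms, a moral argument can work by pointing at a term, such as the other person’s experience, whose weight your intuition is neglecting.

```python
# Utility as a weighted sum of named terms; a "moral argument" points at
# a term whose weight one's intuition neglects. All terms, weights, and
# numbers are invented for illustration.

def utility(outcome, weights):
    """Weighted sum over an outcome's named components."""
    return sum(weights[term] * value for term, value in outcome.items())

# Felt weights that neglect the other person's experience...
felt_weights = {"own_pleasure": 1.0, "partner_experience": 0.0}
# ...versus reflectively endorsed weights that do not.
endorsed_weights = {"own_pleasure": 1.0, "partner_experience": 0.8}

outcome = {"own_pleasure": 5.0, "partner_experience": -4.0}

print(utility(outcome, felt_weights))      # 5.0: looks like a clear win
print(utility(outcome, endorsed_weights))  # 1.8: much less of a win
```

Under this (invented) rendering, the moral argument does not change the utility function itself; it changes which of its terms the agent attends to.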
I disagree, especially here on LW! When user-Bob tells user-Alice that her behavior is immoral, he’s probably setting out to help her maximize her utility function more effectively.
Or at least, that’s why I do it. A virtue is a trait of character that is good for the person who has it.
ETA: Otherwise, the argument is fully general. For humanity in general, when Alice says x to Bob, she is trying to enforce a perceived social contract, or score points, or signal tribal affiliation. So, you shouldn’t listen to anybody about anything w.r.t. becoming more instrumentally effective. And that seems obviously wrong, at least here.
I disagree, especially here on LW! When user-Bob tells user-Alice that her behavior is immoral, he’s probably setting out to help her maximize her utility function more effectively.
My historical observations do not support this prediction.
I submit that if I say, “you should x”, and it is not the case that “x is rational”, then I’m doing something wrong. Your putative observations should have been associated with downvotes, and the charitable interpretation remains that comments here are in support of rationality.