What about just “until someone proves scientifically”?
Even that weaker position still seems incompatible with actually being a utility-maximising agent, since there is prima facie evidence that inducing women to enter into a one-night stand against their better judgment reasonably often leads to subsequent distress on the part of the women.
A disciple of Bayes and Bentham doesn’t go around causing harm up until someone else shows that it’s scientifically proven that they are causing harm. They do whatever maximises expected utility for all stakeholders based on the best evidence available at the time.
Note that this judgment holds regardless of the relative effectiveness of PUA techniques compared to placebo. Even if PUA is completely useless, which would be surprising given placebo effects alone, it would still be unethical to seek out social transactions that predictably lead to harm for a stakeholder without greater counterbalancing benefits being obtained somehow.
Even that weaker position still seems incompatible with actually being a utility-maximising agent, since there is prima facie evidence that inducing women to enter into a one-night stand against their better judgment reasonably often leads to subsequent distress on the part of the women.
That isn’t a utility-maximising agent, regardless of whether it demands your ‘proof beyond any doubt’ or just ‘until someone proves scientifically’. Utility-maximising agents shut up and multiply. They use the subjectively objective probabilities and multiply them by the utility of each case.
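Spelled out, with notation I’m supplying myself: weight each outcome’s utility by its probability given the action, and take the action that comes out ahead.

```latex
\mathbb{E}[U \mid a] \;=\; \sum_{o} P(o \mid a)\, U(o),
\qquad a^{*} \;=\; \arg\max_{a} \, \mathbb{E}[U \mid a]
```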
The utility-maximising agent you are talking about is one that you have declared to be a ‘good utilitarian’. It’s maximising everybody’s utility equally. Which also happens to mean that if Bob gains more utility from a one-night stand than Carol loses through self-flagellation, then Bob is morally obliged to seduce her. This is something which I assume you would consider reprehensible. (This is one of the reasons I’m not a good utilitarian. It would disgust me.)
Neither “utility maximiser” nor “good utilitarian” is an applause light that matches this proclamation.
(Edited out the last paragraph—it was a claim that was too strong.)
That isn’t a utility-maximising agent, regardless of whether it demands your ‘proof beyond any doubt’ or just ‘until someone proves scientifically’. Utility-maximising agents shut up and multiply. They use the subjectively objective probabilities and multiply them by the utility of each case.
I took it for granted that the disutility experienced by the hypothetical distressed woman is great enough that a utility-maximiser would seek to have one-night stands only with women who would actually enjoy them.
The utility-maximising agent you are talking about is one that you have declared to be a ‘good utilitarian’. It’s maximising everybody’s utility equally. Which also happens to mean that if Bob gains more utility from a one-night stand than Carol loses through self-flagellation, then Bob is morally obliged to seduce her. This is something which I assume you would consider reprehensible. (This is one of the reasons I’m not a good utilitarian. It would disgust me.)
Given that Bob has the option of creating greater average utility by asking Alices home instead, I don’t see this as a problem. What you are saying is true only in a universe where picking up Carol and engaging in a win/lose, marginally-positive-sum interaction with her is the single best thing Bob can do to maximise utility in the universe, and that’s a pretty strange universe.
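To make that concrete, here’s a toy calculation; every number in it is invented purely for illustration:

```python
# Toy expected-utility comparison of Bob's two options.
# Every number here is invented purely for illustration.

p_regret = 0.3          # assumed probability that Carol later regrets it
u_bob = 10.0            # Bob's utility from either encounter (assumed)
u_alice = 8.0           # Alice's utility from a win/win encounter (assumed)
u_carol_fine = 2.0      # Carol's utility if she has no regrets (assumed)
u_carol_regret = -40.0  # Carol's utility if she does regret it (assumed)

# Sum utility over all stakeholders for each option.
eu_ask_alice = u_bob + u_alice
eu_ask_carol = u_bob + (1 - p_regret) * u_carol_fine + p_regret * u_carol_regret

print(f"ask Alice home: {eu_ask_alice:.1f}")  # 18.0
print(f"ask Carol home: {eu_ask_carol:.1f}")  # -0.6
```

With any serious weight on the regret outcome, asking Alice dominates, which is all my point requires.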
I also think that PUAs are going to have to justify their actions in utilitarian terms if they are going to justify them at all, since I really struggle to see how they could find a deontological or virtue-ethical justification for deceiving people and playing on their cognitive biases to obtain sex without the partner’s fully informed consent. So if the utilitarian justification falls over, I think all justifications fall over, although I’m open to alternative arguments on that point.
I don’t think the Weak Gor Hypothesis holds, and I don’t think that you maximise a woman’s utility function by treating her the way the misogynistic schools of PUA advocate, but if you did then I would buy PUA as a utility-maximising strategy. I think it’s about the only way I can see any coherent argument being made that PUA is ethical, excluding the warm-and-fuzzy PUA schools mentioned earlier, which I already acknowledged as True Scotsmen.
The second sentence is correct… and conclusively refutes the first.
I cannot reconstruct how you are parsing the first sentence so that it contradicts the second, and I’ve just tried very hard.
Given that Bob has the option of creating greater average utility by asking Alices home instead, I don’t see this as a problem.
This seems to be a straw man. I don’t recall ever hearing someone advocating having sex with people who would experience buyer’s remorse over those who would remember the experience positively. That would be a rather absurd position.
What you are saying is true only in a universe where picking up Carol and engaging in a win/lose, marginally-positive-sum interaction with her is the single best thing Bob can do to maximise utility in the universe, and that’s a pretty strange universe.
Yes, Bob should probably be spending all of his time earning money and gaining power that can be directed to mitigating existential risk. This objection seems to be a distraction from the point. The argument you made is neither utilitarian nor based on maximising utility. That’s ok, moral assertions don’t need to be reframed as utilitarian or utility-maximising. They can be just fine as they are.
This seems to be a straw man. I don’t recall ever hearing someone advocating having sex with people who would experience buyer’s remorse over those who would remember the experience positively. That would be a rather absurd position.
If so, forgive me; I have not seen a PUA in the wild ever mention the issue of differentiating targets on the basis of whether or not being picked up would be psychologically healthy for them, so my provisional belief is that they attach no utility or disutility to whether the pick-up target would remember the experience positively. Am I wrong on that point?
Yes, Bob should probably be spending all of his time earning money and gaining power that can be directed to mitigating existential risk. This objection seems to be a distraction from the point.
This is a general argument which, if it worked, would serve to excuse all sorts of suboptimal behaviour. Just because someone isn’t directing all their efforts at existential risk mitigation or relieving the effects of Third World poverty doesn’t mean that they can’t be judged on the basis of whether they are treating other people’s emotional health recklessly.
The argument you made is neither utilitarian nor based on maximising utility. That’s ok, deontological moral assertions don’t need to be reframed as utilitarian or utility-maximising. They can be just fine as they are.
I don’t see how you get to that reading of what I wrote.
I see this as a perfectly valid utilitarian argument-form: there is prima facie evidence that X causes significant harm; hence, continuing to do X right up until there is scientifically validated evidence that X causes significant harm is inconsistent with utility maximisation.
There’s a suppressed premise in there, namely “there are easily available alternatives to X”, but since in the specific case under discussion there are easily available alternatives to picking women up using PUA techniques, I didn’t think it strictly necessary to make that premise explicit.
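Put as arithmetic, with symbols I’m introducing just for this comment: let p be the probability on present evidence that X inflicts harm H on a stakeholder, G the gain from doing X, and G′ the gain from the easy alternative.

```latex
\mathbb{E}[U(X)] = G - p\,H,
\qquad \mathbb{E}[U(\mathrm{alt})] = G',
\qquad \text{so doing } X \text{ is a net loss whenever } p\,H > G - G'.
```

Nothing in that inequality waits for p to be scientifically certain; prima facie evidence that keeps pH above the gap between the gains is already enough.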
There are separate potential deontological objections to PUA behaviour, some of which I have already stated, but I don’t see how you got to the conclusion that this particular argument was deontological in nature.
If so, forgive me; I have not seen a PUA in the wild ever mention the issue of differentiating targets on the basis of whether or not being picked up would be psychologically healthy for them, so my provisional belief is that they attach no utility or disutility to whether the pick-up target would remember the experience positively. Am I wrong on that point?
The goalposts have moved again. But my answer would be yes anyway.
Strictly speaking, you moved them first, since I never claimed that anyone was “advocating having sex with people who would experience buyer’s remorse over those who would remember the experience positively” (emphasis on over), as opposed to advocating having sex with people while disregarding the issue of whether they would experience remorse, which is what I’d seen PUA advocates saying. I just put the goalposts back where they were originally without making an undue fuss about it, since goalposts wander due to imprecisions in communication without any mendacity required.
I think this conversation is suffering, not for the first time, from the fuzziness of the PUA term. It covers AMF and Soporno (whose name is unfortunate but memorable, if it is his real name), who do not appear to advocate exploiting others for personal utility, and it also covers people like Roissy, who revel in doing so.
So I think I phrased that last post poorly. I should have made the declarative statement: “many but not all of the PUA writers I have viewed encourage reckless or actively malevolent behaviour with regard to the emotional wellbeing of potential sexual partners, and I think those people are bad utilitarians (and also bad people by almost any deontological or virtue-ethical standard). Members of the PUA set who do not do this are not the intended targets of this particular criticism.”