Social interaction (of which sexuality is only a subset) works best when people advocate for their own preferences and attempt to align others’ preferences with theirs, without harming others.
This is exactly the kind of argument that I wanted to shoot down.
IMO we shouldn’t have a norm of requiring people to give altruistic justifications whenever they discuss better ways of maximizing their own utility function, even if that utility function may be repugnant to some. Discussions of morality (ends) should not intrude on discussions of rationality (means), especially not here on LW! If you allow a field to develop its instrumental rationality for a while without moralists sticking their noses in, you get something awesome like Schelling, or PUA, or pretty butterflies. If you get stuck discussing morals, you get… nothing much.
You may be on to something here; this may be a very useful heuristic against which to check our moral intuitions.
On the other hand, one still has to be careful: you probably wouldn’t want to encourage people to refine the art of taking over a country as a genocidal dictator, for example.
Although it is interesting to study in theory: for example, through the Art of War, the Laws of Power, history itself, or computer simulations. Just so long as it doesn’t involve much real-world experimentation. :)
But this is the fundamental problem: you don’t want to let the theory in any field get too far ahead of the real-world experimentation. If it does, it becomes harder for the people who eventually do good (and ethical) research to have their work integrated properly into the field’s body of knowledge. And knowledge that is not based on research is likely to be false. So an important question in any field should be “is there some portion of this that can be studied ethically?” If we “develop its instrumental rationality for a while without moralists sticking their noses in”, we run the risk of letting theories run wild without sufficient evidence [evo-psych, I’m looking at you] or of relying on unethically obtained (and therefore less trustworthy) evidence.
“Unethically obtained evidence is less trustworthy” is the wrongest thing I’ve heard in this whole discussion :-)
How so? When scientists perform studies, they can sometimes benefit (money, job, or simply reputation) by inventing data or otherwise skipping steps in their research. At other times, they can benefit by declining to publish a result at all. A scientist who is willing to violate certain ethical principles (lying, cheating, etc.) is surely more willing to act unethically in publishing (or declining to publish) their studies.
Possibly more willing. They might be willing to break moral standards for the sake of furthering human knowledge that they wouldn’t break for personal gain. It would still be evidence of untrustworthiness, though.
I like what you are saying in the second paragraph there… but I also agree with the quote from Hugh. So the whole ‘wanted to shoot down’ part doesn’t seem to fit in between.
I agree with this in the abstract, but in all particular situations the ‘morality’ is part of the content of the ‘utility function’, so it is directly relevant to whether something really is a better way of maximizing that utility function.
If you’re talking about behaviors, morality is relevant.
I agree with this in the abstract, but if you adopt the view that morality is already factored into your utility function (as I do), then you probably don’t need to pay attention when other people say your behavior is immoral (as many critics of PUA here do). I think when Alice calls Bob’s behavior immoral, she’s not setting out to help Bob maximize his utility function more effectively; she’s trying to enforce a perceived social contract or just score points.
(You are not necessarily able to intuitively feel what your “utility function” specifies, and moral arguments can point out to you that you are not paying attention, for example, to its terms that refer to the experiences of specific other people.)
I disagree, especially here on LW! When user-Bob tells user-Alice that her behavior is immoral, he’s probably setting out to help her maximize her utility function more effectively.
Or at least, that’s why I do it. A virtue is a trait of character that is good for the person who has it.
ETA: Otherwise, the argument is fully general. For humanity in general, when Alice says x to Bob, she is trying to enforce a perceived social contract, or score points, or signal tribal affiliation. So, you shouldn’t listen to anybody about anything w.r.t. becoming more instrumentally effective. And that seems obviously wrong, at least here.
My historical observations do not support this prediction.
I submit that if I say, “you should x”, and it is not the case that “x is rational”, then I’m doing something wrong. Your putative observations should have been associated with downvotes, and the charitable interpretation remains that comments here are in support of rationality.