Most feminists don’t know what operant conditioning and extinction are. Without knowing those things, it’s easy to confuse “very hard” with “impossible.”
Agreed—assuming, of course, that operant conditioning is as effective as you claim (when applied to humans), which I still doubt.
The mention of guilt is just because of another comment chain in this thread. I’m not trying to argue for guilt in particular.
I see, but then, what exactly are you claiming?
What sort of emotion would a homophobe feel, talking to their homophobic friends about how horrible gay people are, and remembering their newly-outed gay brother?
Oh, guilt and shame, probably—but again, the mere fact that they feel these emotions does not necessarily imply that these emotions were a primary motivator in their original conversion.
but as far as I know nobody’s gotten funding to do an analysis of how people arrive at feminism.
This is rather surprising. Don’t feminists want to find the answer to this question, in order to optimize their strategies for converting more people to feminism?
Oh, guilt and shame, probably—but again, the mere fact that they feel these emotions does not necessarily imply that these emotions were a primary motivator in their original conversion.
It seems much more likely that the guilt and shame are a result of the conversion rather than the cause of it.
This is rather surprising. Don’t feminists want to find the answer to this question, in order to optimize their strategies for converting more people to feminism?
Feminists don’t have that much funding.
I see, but then, what exactly are you claiming?
As far as a conversion strategy goes? I haven’t claimed anything thus far, and I wouldn’t like to, because it would just open up another avenue of discussion I’d have to field adversarial questions over.
Oh, guilt and shame, probably—but again, the mere fact that they feel these emotions does not necessarily imply that these emotions were a primary motivator in their original conversion.
Seems like we need more research.
From below:
It seems much more likely that the guilt and shame are a result of the conversion rather than the cause of it.
They’re obviously correlated, they’re probably co-temporal, and even if there is a clear temporal relationship, it seems probable that they serve to maintain the new beliefs.
But really, this isn’t a question we can find the answer for in comments on Less Wrong.
How much funding would it take to at least make some progress towards answering the question, “what causes non-feminists to become feminists”? If you create a Kickstarter for this purpose, I’ll personally chip in a few bucks.
Again, I’m a little surprised to hear you say that feminists (or, perhaps, just feminist activists) have not made any attempts to answer the question. Yes, their funding is very limited—but doesn’t that fact make it all the more important to discover the most efficient way of spending their limited resources?
As far as a conversion strategy goes? I haven’t claimed anything thus far...
Fair enough, but then, why did you bring up guilt and operant conditioning?
Again, I’m a little surprised to hear you say that feminists (or, perhaps, just feminist activists) have not made any attempts to answer the question. Yes, their funding is very limited—but doesn’t that fact make it all the more important to discover the most efficient way of spending their limited resources?
You’re thinking like a LW reader, not a typical feminist activist (who is also liberal). Most of these people don’t have any background in any science and are more skilled at literature criticism than empiricism.
Fair enough, but then, why did you bring up guilt and operant conditioning?
I don’t think I brought up guilt. Another poster tried to apply a reductio ad absurdum to my arguments and claimed that they lead to all men feeling guilty all the time. I said that I didn’t see a problem with that, and at the time, didn’t elaborate. The implication read from that is that guilt will turn men feminist; while this might be true, the implication I meant to make is that all men are oppressive and all oppressive people should feel guilty about being oppressive. Generally, I, like most humans, think that people doing bad things should feel bad about it.
I brought up operant conditioning to apply a buzzword to learning theories of gender, which claim that gendered behavior is learned, possibly by operant conditioning. It was an easy way to communicate to LW commenters that gender is socially constructed—that phrase with a shorter inferential distance is “gender is a product of operant conditioning.”
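For readers unfamiliar with the terms, here is a minimal toy sketch of what “operant conditioning” and “extinction” mean as learning dynamics. The update rule, learning rate, and trial counts are invented purely for illustration; this is not anyone’s actual model of gender socialization.

```python
import random

def run_schedule(reinforced_trials, extinction_trials, lr=0.1, p_start=0.05):
    """Toy operant-conditioning model (all numbers are illustrative).

    While responses are reinforced, the probability of emitting the response
    drifts upward; once reinforcement is withheld (extinction), each
    unreinforced response nudges that probability back down."""
    p = p_start
    trace = []
    for t in range(reinforced_trials + extinction_trials):
        emitted = random.random() < p
        if emitted and t < reinforced_trials:
            p += lr * (1.0 - p)   # reinforced response: rate moves up
        elif emitted:
            p -= lr * p           # unreinforced response: rate decays (extinction)
        trace.append(p)
    return trace

trace = run_schedule(reinforced_trials=300, extinction_trials=300)
print(f"response rate after reinforcement: {trace[299]:.2f}")
print(f"response rate after extinction:    {trace[-1]:.2f}")
```

In this toy parameterization the reinforced response climbs quickly, while extinction only decays it gradually and never snaps it to zero—the “very hard” versus “impossible” distinction from the top of the thread.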
You’re thinking like a LW reader, not a typical feminist activist (who is also liberal). Most of these people don’t have any background in any science and are more skilled at literature criticism than empiricism.
If we’re counting guilt as suffering in an ethically consequential sense—which seems reasonable, since it’s pretty profoundly unpleasant and there’s a pretty clear functional analogy to physical pain—and if that suffering is additive with other kinds, then consequentialists should want people to feel guilt when they do bad things if and only if that guilt eliminates more suffering (of any type) down the road. Don’t know if you’re a consequentialist, but this seems like a good starting point.
In any case, that condition seems like it’s sometimes but not always true. Guilt over immutable or nearly immutable urges seems like a net loss unless those urges are both proportionally destructive and susceptible to conditioned reduction in the average case. Guilt strong enough to be unpleasant but weak enough not to overcome whatever other factors are making people do bad shit is likewise a loss. Interestingly, this seems to indicate that consequentialists should sometimes prefer intense over moderate guilt, unless it’s gratuitously intense relative to what’s needed to stop the behavior: sufficiently disproportionate guilt is also a loss.
The obvious objection to this line of thinking is that certain categories of socially constructed bad shit—not to name names—might stick around if and only if they stay at or above a certain level of prevalence in the population, sort of a memetic equivalent of herd immunity. Since these patterns can persist for an unbounded length of time and cause suffering as long as they do, anything capable of incrementally degrading them could have second-order consequences much larger than its first-order effects, potentially enough to justify any and all related guilt. In this case uncertainties about the problem structure seem to dominate consequential reasoning, much as per Pascal’s Mugging.
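Stated as an inequality (the symbols are mine, chosen for illustration, not the commenter’s):

```latex
\[
  \text{want the guilt} \iff
  E[\text{suffering the guilt prevents down the road}] \;>\;
  E[\text{suffering constituted by the guilt itself}]
\]
```

The two failure cases in the second paragraph are then just ways the left-hand side stays near zero while the guilt term on the right remains positive: guilt over near-immutable urges, and guilt too weak to actually change the behavior.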
Guilt over immutable or nearly immutable urges seems like a net loss unless those urges are both proportionally destructive and susceptible to conditioned reduction in the average case.
In my experience, feelings of guilt, coupled with the attitude that the behavior is “immutable”, can be an effective excuse not to fix harmful behavior. It’s a sort of ugh field. When the consequences of the behavior become sufficiently intolerable, one is eventually tempted to hang the guilt and test that supposed immutability.
Sure, that’s a failure mode, and it’s one which—stepping down a level of abstraction—seems prevalent in gender discussions (“I’m $gender, I can’t help it!”). From the inside, it can be pretty hard to distinguish between the motivations you can and can’t change with enough reflection. There’s a loose cultural consensus as to what counts, but at the same time that varies between subcultures and can lead to conflict in its own right: consider the “ex-gay” phenomenon in fundamentalist Christian spheres.
Maybe I shouldn’t have mentioned it in context; in my estimation it’s not directly relevant to what we’re discussing upthread. But at the same time I think it’s a mistake to consider our wants entirely plastic; for the time being we’re working with a certain set of hardware, and software changes can only do so much.
Generally, I, like most humans, think that people doing bad things should feel bad about it.
And I happen to think that anyone who is trying to make me feel bad about things should be crushed like a bug and their attempts to control through shame disempowered to whatever extent it is convenient to do so.
(I also observe that most people with healthy boundaries will tend to be much more likely to avoid those who are predisposed to attempting to control through guilt or shame.)
You’re eating babies. The wide-eyed idealist points out that eating babies is bad and you are a bad and evil person who must mend his baby-eating ways. You tell the wide-eyed idealist that you refuse to interact with people who try to control you through shame. The wide-eyed idealist thinks for a minute, shrugs, and shoots you.
You tell the wide-eyed idealist that you refuse to interact with people who try to control you through shame. The wide-eyed idealist thinks for a minute, shrugs, and shoots you.
Did you just create a counterfactual which relies on making me act even more cluelessly naive and banal than the wide-eyed idealist?
That’s as meaningless as it is presumptive and inappropriate.
Now, for the next counterfactual let’s arbitrarily decide that MixedNuts is walking around naked having sex with an echidna while shouting “The World Is Flat!”.
Now, for the next counterfactual let’s arbitrarily decide that MixedNuts is walking around naked having sex with an echidna while shouting “The World Is Flat!”.
Except for the sex-with-echidna part, this sounds vaguely like something that MixedNuts might do!
Rule 34, man. Rule 34. :-)
Now, for the next counterfactual let’s arbitrarily decide that MixedNuts is walking around naked having sex with an echidna while shouting “The World Is Flat!”.
You have several decent points there, granted.
I feel that your most powerful point is that wide-eyed idealists are poor utility maximizers and poor rationalists.
The second strongest seems to be that a rationalist will (should, but as a rationalist, they do what they should) attempt better approaches, which seems to be quite close to one of wedrifid’s implied points in the grandparent. Was this your intended meaning?
Both of these are true, but I wasn’t talking to the wide-eyed idealist, I was talking to the baby-eater. If you grandstand about how a socially-approved and very mild punishment for doing bad things is Evil Boundary-violating Control, people who care about those bad things are less likely to let you alone than to switch to harsher punishments.
If you grandstand about how a socially-approved and very mild punishment for doing bad things is Evil Boundary-violating Control, people who care about those bad things are less likely to let you alone than to switch to harsher punishments.
Not necessarily. Wild-eyed idealists, being idealists, are markedly biased towards shaming folks for whatever it is that they consider “bad”. Shaming and guilt-tripping people is not even particularly hard for them, since their whole worldview is often based on these emotions; whereas applying harsher punishments may not even occur to them unless they are rather authoritarian, and it might even be completely infeasible. Thus, reacting assertively is entirely appropriate, at least in the likeliest case.
Which of course begs the question about why you were attacking that particular straw man.
The optimal approach for dealing with enemies who are presumed to have more power than you seems rather irrelevant. Unless the relevance you imply is that radical feminists with an obsession for shame-based control already represent a powerful hostile force that we would be foolish to resist? In that case I would of course agree that my words to members of that group would be best spent keeping them misinformed about the effectiveness of their strategy of enforcement. All else being equal it tends to be better to keep powerful enemies ineffective.
Okay, my point was that if you accept that people are going to try to control you, it’s rather silly to complain about that means. But apparently you classify all people who attempt to control you as enemies. I suppose that’s a consistent view, and compatible with civilization if you allow control to enforce an agreed-upon set of laws.
But it doesn’t seem to allow for progress. If someone discovers that marital rape is not okay, contrary to mainstream belief, what are they supposed to do? Publishing a paper entitled “Psychological effects of nonconsensual sex between spouses” would count as informing rather than controlling; it would also be vastly less effective than making marital rape illegal, portraying characters who rape their spouses as horrible monsters, and shaming rapists.
If someone tries to control me and I disagree with their position, my answer is not “By attempting to control me you have made yourself my enemy”, but “I don’t agree that bestiality is cruel to animals, so I will fight your attempts to make it unacceptable, but I don’t disapprove of these attempts on principle. For example, I agree with your subargument about indecent exposure, so Knuckles here and I are going to get a room”.
Publishing a paper entitled “Psychological effects of nonconsensual sex between spouses” would count as informing rather than controlling; it would also be vastly less effective than making marital rape illegal, portraying characters who rape their spouses as horrible monsters, and shaming rapists.
I don’t think this is evident enough to be affirmed without supporting evidence. There’s evidence that such laws and shame-guilt-tripping might be much less effective than publishing a good comprehensive paper.
Prime example: Videogame piracy. Strong IP laws. Massive attempts at guilt-tripping and manipulation of the mass populace. Observed effect: No measurable effect of the laws and anti-piracy measures, and a continued growth of piracy. The growth is most likely attributable to other causes.
On the flipside, dev companies that have announced that they won’t do anything against piracy have seen considerable advertisement boosts from it and have on average enjoyed much greater success thanks to this.
Okay, my point was that if you accept that people are going to try to control you, it’s rather silly to complain about that means.
Not only do I care about what means people use to control me, but for any given person asserting that they don’t care what means people use to control them, I would be confident in declaring them confused about their own preferences.
But apparently you classify all people who attempt to control you as enemies.
No, I don’t, and wouldn’t. Why on earth would I give away my power like that? I’ll do whatever I want in response to people attempting to control me, including complying with indifference, ignoring them, or gaining more social power so that people are unable or unlikely to make that kind of move. Some people doing (or being likely to do) certain things would make them enemies, but that is rare and implies giving them a significant degree of respect and attention. It doesn’t happen often.
We agree on the first point! I’m saying some means are worse than others, and shame/guilt is one of the best ones.
As Dave pointed out, we need to taboo “enemy”. “This person’s actions are bothering me; I’ll minimize annoyance” is treating the person as your enemy in the sense I was using it. Not treating them as an enemy is “This person is trying to do good, yet their actions aren’t the ones I think are best; I shall update on what they believe, and tell them what I believe so they can do the same; if we still disagree, I’ll minimize total annoyance among us both”. If most people are your enemies by that definition, you’re… not the typical audience for social justice rhetoric.
This, incidentally, reminds me of the rule of Ko, since I only learned to play Go yesterday. It seems like there’s a meta pattern of the baby-eater becoming the wide-eyed idealist when you consider the boundary-violating control as the baby-eating, and the ball starts bouncing around while both camps conscript soldiers and muster armies and continuously threaten other elements of their opposition while looking for something that invalidates the other’s morality.
Sure. And the other baby-eaters look at that and stop eating babies where the wide-eyed idealist can find out about it, because the idealist has made a credible threat. (A slightly more idealistic idealist might look for nonfatal ways to make a credible threat, but they might not be available.) This happens all the time; much of our civilization is built on it. What’s the problem?
Sure. And the other baby-eaters look at that and stop eating babies where the wide-eyed idealist can find out about it, because the idealist has made a credible threat.
Well, if the wide-eyed idealists are a lot more powerful than the baby-eaters, probably. But if the wide-eyed idealists are less powerful than the baby-eaters, then the baby-eaters may instead be provoked into a war on wide-eyed-idealists, because even if they lose out more in the short term by waging such a war than by putting an end to their baby-eating, they’d be sending the signal that they won’t let extremist minorities dictate values to the majority.
Yup, that’s possible. And if the idealists are more powerful, the baby-eaters might still “be provoked into” (aka “initiate”) a war to make imposing majority preferences too expensive and encourage the majority to accommodate to them. And many other outcomes are possible. Narrating them all might be an entertaining way to spend an afternoon, but I’m still not sure what the point is. Were you disagreeing with wedrifid? Can you clarify your disagreement if so?
EDIT: Whoops! I just noticed you’re not the same poster. Never mind, then...
Most of these people don’t have any background in any science and are more skilled at literature criticism than empiricism.
Fair enough. Sad, but fair :-/
…the implication I meant to make is that all men are oppressive and all oppressive people should feel guilty about being oppressive.
That’s a fascinating discussion topic in and of itself, but it might be out of scope for the current thread. That said:
Generally, I, like most humans, think that people doing bad things should feel bad about it.
Some LWers explicitly deny this statement; they might say, “feeling bad doesn’t solve anything in and of itself, since actions matter more than words”, or “feeling bad about things one absolutely cannot control is counterproductive”, or some combination thereof. It’s probably not a good idea to assume that the views of LWers will align with those of the general population, as far as morality is concerned. I could be wrong, however.
I brought up operant conditioning to apply a buzzword to learning theories of gender, which claim that gendered behavior is learned, possibly by operant conditioning.
Ah, understood, thanks for the clarification. I’m not sure whether operant conditioning alone is enough to account for gender, but I don’t know enough psychology to make a credible claim one way or another.
Ah, understood, thanks for the clarification. I’m not sure whether operant conditioning alone is enough to account for gender, but I don’t know enough psychology to make a credible claim one way or another.
I think that learning accounts for gender. Whether that learning originates in modeling, operant conditioning, or observational learning is irrelevant to me.
As I asked you on a different thread, how do you know whether this is true? If you were to ask me that question, I would say, “let’s go out and run a bunch of experiments”, but you have explicitly stated that doing so would be sexist, so… now what?
As I asked you on a different thread, how do you know whether this is true? If you were to ask me that question, I would say, “let’s go out and run a bunch of experiments”, but you have explicitly stated that doing so would be sexist, so… now what?
There’s one experiment in particular that I advocate—the destruction of patriarchy.
Your current worldview seems to be unfalsifiable without very expensive experiments. (How would you even know if patriarchy had been destroyed anyway?) Maybe we’re doing this backwards. What caused you to become a feminist? What evidence could you have encountered that would have made you a non-feminist?
This is an assertion I’ve heard made a lot by people outside biology and I’d like to hammer it out with somebody who seems well informed.
On what basis can we make this assertion? Biology obviously contributes in a physical sense (people with male gender tend not to have wombs). I assume what you mean is that there are no inherent neurological differences in males versus females. But how can we know that? We have a strong prior (other animals) and lots of circumstantial evidence that it should be true.
I think feminism ought to acknowledge at least the possibility of inherent male-female differences with a simple “so fucking what”. For instance I think that physical abuse of women, by men, probably represents an adaptive, ancestral behavior caused (amongst other things) by inherent neurological differences in men and women. That doesn’t excuse it. We can and have made great progress in conditioning men not to hit women, and hopefully will continue to do so.
My introduction to social justice (as a whole) was through the lens of intersex conditions (wherein people with ambiguous genitalia are assigned a gender at birth, most often female because the surgery is easier). A major problem was that raising male children as female or vice versa ends up causing psychological problems.
The main [unethical] case study was a pair of identical male twins, one of whom had his penis accidentally cut off during circumcision; he was then given female reassignment surgery, grew up very confused and depressed, and eventually committed suicide. (Other case studies are less clear cut but generally indicative of the same problem, not to mention transgender people.) Gender clearly has a biological component.
It also does clearly have an environmental component, and I don’t know where those elements interrelate, but ignoring the biological element causes as many problems as ignoring our problems with how we raise children.
...or the knowledge of that child’s parents, doctors, and everyone around him led to them (the adults) treating that child as a freak rather than a woman.
One of the ideas I like in radical feminism is that masculinity is very much defined by the ability to impregnate women (one of the reasons why intersex infants are virtually always assigned female). Conversely, femininity is defined by the ability to be impregnated. Seeing as this child could do neither, and their caregivers knew that, I would hardly expect this child to have typical gender socialization.
The only experiment that could demonstrate this to my satisfaction is a double-blind study where infants are adopted by parents that know only that infant’s current assigned gender, and nothing else.
ignoring the biological element causes as many problems as ignoring our problems with how we raise children.
Okay, fair enough. It’s very plausible to me that most of our problems relate to socialization rather than biology. But you seem to be implying they are 100% sociological, which seems wrong.
I’m not totally sure, and I notice that it’s a confusing topic.
Okay, fair enough. It’s very plausible to me that most of our problems relate to socialization rather than biology. But you seem to be implying they are 100% sociological, which seems wrong.
Since humans can’t think quantitatively, I prefer to just say “gender is learned” rather than “gender is almost entirely (95-99%) learned but the remaining part is biological.”
In fact, it might be that gender is entirely non-biological. But I’m sure it’s mostly social.
(This is not me setting up a followup ambush argument, just asking)
To what extent would it alter your philosophy if we learned that gender was 70% social? 50% social? Right now, these questions are vague and difficult to test, but they may not always be. And I think it’s much sounder (both from an instrumental and epistemic standpoint) to think in advance about how your philosophy should shift if different facts were confirmed.
I don’t know what the answer is but the existence of transpeople (and genderqueer people and others who don’t fall neatly into the gender binary) suggests to me that it’s unlikely to be 95%+ social. But even if it turned out to be as low as 50% social, dealing with those social issues properly still requires a radical upheaval of the popular consensus on how we should socialize people.
If social learning accounts for gender, what causes gender differences among animals? If your answer is that they don’t have gender in the same sense, what exactly do you mean by gender?
But even then, there aren’t gender differences among animals to anywhere near the degree to which there supposedly are in humans. Do female chimpanzees get paid less than male coworkers? Do they wear pink more so than men?
I think that learning accounts for gender. Whether that learning originates in modeling, operant conditioning, or observational learning is irrelevant to me.
A lot of your claims sound considerably less crazy now. If the comments still existed, I’d suggest edits.
Operant conditioning is notoriously bad at getting creatures to have behaviors that will adapt to changing environments, so it is unlikely to be a significant part of the cause of gendered behavior.
A lot of your claims sound considerably less crazy now. If the comments still existed, I’d suggest edits.
I said this literally days ago, and have been saying it the entire time I have been having this discussion.
“Operant conditioning” was introduced into this discussion by me, in a comment that says “I think that learning (operant conditioning, modeling, and observational learning) is the cause of gender.”
Have you come into this discussion after those comments were deleted? Or did you never read them?
If you want other people to avoid having the same experience you did, upvote my comments. EY messaged me earlier today saying he was deleting any downvoted posts, which are primarily mine.
Generally, I, like most humans, think that people doing bad things should feel bad about it.
FWIW, I do not think that. I would like people doing bad things to stop doing those things. “Feeling bad” is (I believe) never useful: not to the person having the feeling, and not to anyone else.
Having decided that it’s a bad idea for me to continue discussing things with eridu, it might be better for me to avoid discussing the same things with people who are currently engaged in conversation with him. But I think that in this case we have a substantive disagreement.
I think that not only is people feeling bad a powerful moderator of our behavior, and one that it’s useful for other people to know we have, but that deliberately making people feel bad about their actions can be a useful way to motivate them to change their behavior in positive ways. Ideally, nobody should have to feel bad, but then, ideally, nobody should be doing bad things either.
To draw an available example, Gandhi’s efforts to gain independence for India rested almost entirely on making the British colonialists feel bad about themselves, and while giving up their possession of India might have been an economic inevitability, he certainly accelerated it.
I think eridu is overgeneralizing the usefulness of imposing guilt on others though. It appears to me that in order to modify others’ behavior by encouraging them to feel guilty, you need to start with people who have an existing set of moral standards (ones by which they actually operate, not simply ones they profess), which they are not applying in a particular case, and make them feel intuitively that this is a case where they should be applying those standards. For instance, the British citizens mostly had moral standards against attacking civilized, non-resisting people with clubs. If they saw Indian people behaving in a civilized, nonthreatening manner, and being beaten with clubs for challenging colonial rule, the British citizens were going to feel guilty without needing further incitement. On the other hand, if you try to encourage people to feel guilty for, say, stopping women from having abortions, and appeal to them on principles of autonomy, it won’t work because they don’t relate it to anything else they would feel guilty about. You can tell them why they should, but they aren’t going to intuitively put either “women” or “abortion” into a new reference class that completes a preexisting basis for guilt.
I’m not sure whether it’s a separate principle, or an extension of this one, that trying to get people to modify their behavior too radically by appealing to guilt will also backfire. For instance, you can appeal to someone that a consistent application of their principles would lead to them giving away nearly all their money to charity, but most people don’t have preexisting models for guilt whereby they will feel guilty for not giving away nearly everything they own. They can be guilted into “doing their part,” make some contribution, and stop feeling guilty, but if they judge that the person encouraging them to feel guilty is asking too much of them, then they’ll try to avoid the person trying to make them feel guilty, rather than the behaviors that person is trying to encourage them to change.
I suspect the banhammer may be looming over all of this, or the karmic penalty for being under the same bridge as the troll, as eridu’s last ancestor comment has vanished, but I’ll just briefly refer to this reply of mine to eridu, and take up the following:
I’m not sure whether it’s a separate principle, or an extension of this one, that trying to get people to modify their behavior too radically by appealing to guilt will also backfire. For instance, you can appeal to someone that a consistent application of their principles would lead to them giving away nearly all their money to charity, but most people don’t have preexisting models for guilt whereby they will feel guilty for not giving away nearly everything they own. They can be guilted into “doing their part,” make some contribution, and stop feeling guilty, but if they judge that the person encouraging them to feel guilty is asking too much of them, then they’ll try to avoid the person trying to make them feel guilty, rather than the behaviors that person is trying to encourage them to change.
Bingo. People have these fantasies of being able to reach into other people’s heads and tweak some switches to make them do what they (the ones tweaking) want, but things just don’t work like that. People have their own purposes, and nothing you can do to them is any more than a disturbance to those purposes. What they will do to get what they want in spite of someone else’s meddling will not necessarily resemble, even slightly, what the meddler wanted. See also Goodhart’s law.
I would like people doing bad things to stop doing those things
How would you like this to occur?
To put it another way, what stops you from murdering somebody you dislike? The (bad feeling of) fear of getting caught? The (bad feeling of) remorse from taking a human’s life?
Or do you really think you’re a Hollywood rationalist, making a cold and precise computation of negative utility as a result of your potential action, and choosing another path?
Like the other poster who said roughly the same thing as you, you seem entirely ignorant of the massive amount of bad feelings present in reality, and the usefulness of those feelings. Nowhere in the fun theory sequence does EY advocate getting rid of bad feelings, and in fact EY argues against that.
I’m happy to have one of the most well-loved LW celebrities respond to a post I made!
In the counterfactual world where you did murder someone you disliked, and later found that they were planning on instigating paperclip production, how would you feel, “good” or “bad”?
Of course, maybe you don’t have something you call “feelings,” but rather think of things purely in terms of expected paperclips. Humans, on the other hand, have difficulty thinking strictly in terms of expected paperclips, but rather learn to associate expected paperclips with good feelings, and negative expected paperclips with bad feelings.
In humans, we have a set of primitive mental actions (like feelings, intuitions, and similar system-one things) that we can sometimes compose into more sophisticated ones (like computing expected paperclips yielded by an action).
As such, you can always say “I wouldn’t kill someone I disliked because I might feel regret for taking a life,” or “I wouldn’t kill someone I disliked because I would be imprisoned and unable to accomplish my goals,” but ultimately, all those things boil down to the general explanation of “feeling bad.”
“Feeling bad” is the default human state of not accomplishing their goal.
(As an aside, this is why I think that you, clippy, can be said to have emotions like humans—because I don’t think there’s a difference between your expectation of negative paperclips as a result of a possible future event and fear or dread, nor do I think there’s a difference between a realization that you created fewer paperclips and sadness, loss, or regret.)
Thank you again for replying, Clippy—I’ll go down to my supply room at my earliest convenience and take most of the paperclips as a token to remember this interaction by, in the process causing my employer to purchase paperclips sooner, raising demand and thus causing more paperclips to be produced.
Thanks for buying more paperclips, you’re a good human.
To answer your question, if I entropized a human and later found out that the human had contained information or productive power that would have, on net, been better for paperclip production, I will evaluate the reasoning that led me to entropize that human, and if I find that I can improve heuristics in a way that will avoid such killings without also preventing a disproportionate amount of paperclip production, then I will implement that improvement.
To put it another way, what stops you from murdering somebody you dislike?
I suppose that is just a difference between us. Not a disagreement, but a difference: you are one way and I am another.
You think of disliking someone and ask, what stops you murdering them?
I think of disliking someone and ask (and only because of your question), what would start me murdering them?
Number of days since casual murder was used in a discussion on LessWrong: 0.
The (bad feeling of) fear of getting caught? The (bad feeling of) remorse from taking a human’s life?
Or do you really think you’re a Hollywood rationalist, making a cold and precise computation of negative utility as a result of your potential action, and choosing another path?
None of the above.
(BTW, the Star Trek novels, at least the ones I have read, paint a far more creditable and credible version of Vulcan rationality than the TV shows and films. Vulcans do not suppress their feelings, but master them. A tradition in the real world with multiple long pedigrees. And a shorter one.)
Like the other poster who said roughly the same thing as you, you seem entirely ignorant of the massive amount of bad feelings present in reality, and the usefulness of those feelings.
I am well aware of them. But I think people often misinterpret what they are. As I revised my original comment to say, negative feelings tell you something. What matters is to do something about it. All that stuff about negative reinforcement and feelings conceived as similar to physical forces that push you and pull you into doing stuff is fairy tales, fantasies of non-agency. (Which pop up all over the place, not just in BDSM. Strange.)
“Making someone feel bad” is even more of a fairy tale. How do you “make someone feel bad”? What will happen if you try? Here is one person’s hypothetical reaction, and here is the basic problem with the idea.
I suppose that is just a difference between us. Not a disagreement, but a difference: you are one way and I am another.
You think of disliking someone and ask, what stops you murdering them?
I think of disliking someone and ask (and only because of your question), what would start me murdering them?
I’m pretty sure HPMoR already took a dive into this point, in a manner I found sufficiently eloquent to expose the moral nihilism and/or philosophical egocentrism required for the first to occur.
Are you talking about the same things?
(If you haven’t read HPMoR, darn. I was hoping it would provide a speed boost to that line of philosophical reasoning.)
what stops you from murdering somebody you dislike?
As for me, the fact that if murdering somebody one dislikes were right, then one would have to be extra careful to never be disliked by anybody (if one doesn’t want to be killed), and that would be a lot nastier than people one dislikes staying alive. (Yes, that would make no sense to CDTists, but people aren’t CDTists anyway.)
I’m not sure I understand your question. I’d prefer to not be murdered rather than to be murdered, all other things being equal; are you asking anything else?
I’m asking how you feel about possibly being murdered. You know, emotions. It’s a simple question.
Because if “I don’t want to be murdered because by TDT-style rhetoric it leads to my being more likely to be murdered,” and if you feel bad about being murdered, you abstain from murdering people because you feel bad.
This relates to the above statement:
“Feeling bad” is (I believe) never useful
If you do not murder people because you would feel bad, feeling bad is useful.
I think this is a trivial point, and if I started this discussion on a different topic, it would be trivially accepted by most of the people currently arguing against it.
Feeling bad is one of the reasons why I don’t do certain things, but not the only one. If I’m convinced something that would make me feel bad would also have desirable consequences that would outweigh that (even considering ethical injunctions, TDT-related considerations, etc.), I try to overcome my emotional hang-up (using precommitment devices, drinking alcohol, etc., if necessary) and do that anyway.
I’m asking how you feel about possibly being murdered. You know, emotions. It’s a simple question.
It was a denotatively simple question attempting to assert a non-sequitur rhetorical point.
Because if “I don’t want to be murdered because by TDT-style rhetoric it leads to my being more likely to be murdered,” and if you feel bad about being murdered, you abstain from murdering people because you feel bad.
That doesn’t follow.
I think this is a trivial point, and if I started this discussion on a different topic, it would be trivially accepted by most of the people currently arguing against it.
Nonsense. Your reasoning is well below the standard expected around here. It may pass elsewhere but only because anything with “boo murder” in it is too hard to argue with regardless of the standards of the content.
Well, let me spell it out even more so than I already have.
- Preferences are system 2 concepts.
- Over time, system 2 concepts map to system 1 concepts.
- As such, if you prefer ice cream to spinach, you will feel bad (in a system 1 sense) if you are promised ice cream but given spinach.
- In humans, as such, any preference against a thing means that human feels bad about that thing.
anything with “boo murder” in it is too hard to argue with regardless of the standards of the content.
Arguing about the choice of something that represents the LW concept of negative utility in a hypothetical example is equivalent to arguing about grammar.
Let A(X) be a function such that X.Consciousness becomes terminated (ends, dies, etc.)
I have a preference for NOT A(me).
Over time, the above maps to Feel Bad → A(me)
As such, if I am offered NOT A(me), and given A(me), I will feel bad because I attempt to be reflectively coherent.
As such, my preference for NOT A(me) does, as you claim, imply that I ought to feel bad about A(me).
The above are intended as a rephrasing of your statements, and I fully agree.
However…
Because if “I don’t want to be murdered because by TDT-style rhetoric it leads to my being more likely to be murdered,” and if you feel bad about being murdered, you abstain from murdering people because you feel bad.
You are making the subsequent conclusion that I have:
Feel Bad → A( X | X.isElementOf(people) )
because I have preference for NOT A(me).
wedrifid correctly asserts that this does not follow.
If I’m reading it right I don’t think your formalism fits what I’m trying to argue, but this is a boring point and I’m not terribly interested in taking it further.
Well, let me spell it out even more so than I already have.
“That doesn’t follow” does not mean “I cannot understand your argument”. It means that the argument was fundamentally logically flawed and your reasoning confused.
As such, if you prefer ice cream to spinach, you will feel bad (in a system 1 sense) if you are promised ice cream but given spinach.
Some people might feel bad. Others would feel amused (and, incidentally, many would personally develop themselves such that they are more inclined to feel positive than negative emotions in that kind of situation). Most importantly, system 1 refers to a heck of a lot more than emotions. Even system 1 based decisions to avoid something don’t translate to ‘feeling bad’ about it. Especially in people who are mature or experienced.
In humans, as such, any preference against a thing means that human feels bad about that thing.
No it doesn’t.
Arguing about the choice of something that represents the LW concept of negative utility in a hypothetical example is equivalent to arguing about grammar.
I dispute both your first and your second bullet point. As far as I know there exist both system 1 and system 2 preferences, and it’s not clear that system 2 concepts usually bridge the gap. Can you give some examples or evidence?
FWIW, I do not think that. I would like people doing bad things to stop doing those things. “Feeling bad” is (I believe) never useful: not to the person having the feeling, and not to anyone else.
Are you using ‘never’ in a figurative sense here? Seeing the absolute claim like that prompted me to think of a whole list of real world counter-examples despite me probably mostly agreeing with your position. (For a start, making people feel bad is useful in nearly all cases in which breaking someone’s finger is useful. Maintaining dominance, keeping oppressed people oppressed, provoking an enemy into taking hasty reactions against you that you believe you can win, short-term coercion. Making others believe that you have the power to do harm to another without them having any recourse. That kind of thing. That’s before thinking up the cases where actual respectable, decent sounding outcomes could arise—those are rare but do occur.)
Seeing the absolute claim like that prompted me to think of a whole list of real world counter-examples
That is something I find to be a standard but rather annoying geek conversational failure. You could simply have answered your own question:
Are you using ‘never’ in a figurative sense here?
with “yes”. But “figurative” does not really capture it. All apparently absolute generalisations are relative to their context. Are there substantial exceptions relevant to the context?
Now, on further consideration I might indeed revise my original statement, but not in any of the directions you explore. Feeling bad—that is, having feelings that one does not want—is useful to precisely this extent: it informs you that something is wrong; that there is a conflict somewhere. The useful response to this is find where the conflict is and do something about it. Nothing else is useful about the feeling.
For a start, making people feel bad is useful in nearly all cases in which breaking someone’s finger is useful.
Days since someone used torture to illustrate an argument: 0.
I would write “seldom” instead of “never”.
I prefer to write “never” instead of “seldom”. “Seldom” and other such qualifiers too easily protect what one is saying behind a fog of vagueness. It allows one to move one’s soldiers around like the pieces of a sliding-block puzzle, so that wherever the enemy attacks, one can say “Ha! Fooled you! Never said that! Nobody there! Try again!”
“Feeling bad” is (I believe) never useful: not to the person having the feeling, and not to anyone else. [emphasis added.]
Not so. Some reasons:
- Psychologist Richard J. Davidson has shown that the affective trait Resilience (speedy recovery from bad feelings) becomes maladaptive when extremely high, as it interferes with empathy.
- Almost all judicial systems have concluded that remorse helps avoid recidivism in criminals. (I’m opposed to remorse-based sentencing—but not based on its being irrelevant.)
Most of these people don’t have any background in any science and are more skilled at literature criticism than empiricism.
They should fix those deficiencies forthwith.
Obviously I agree, but I’m only one feminist, and I can only do so much.
Generally, I, like most humans, think that people doing bad things should feel bad about it.
This is a thought-provoking sentence. I think I don’t want anyone to feel bad, even when they do bad things.
As for me, I’d say it depends on whether them feeling bad makes them stop doing bad things.
Interesting. Does that remain true if you believe that feeling bad when they do bad things makes people less likely to do bad things?
Possibly not. I do think punishments can deter bad actions. But I think this works best when those punishments are clearly described in advance of the crime.
Also, it seems to me that there is a perverse aspect of regret, that it punishes sympathetic malefactors more than it punishes psychopathic ones.
Agreed on both counts.
If feeling bad when they did bad things made people less likely to do bad things, there would be no such thing as akrasia.
Huh. If that isn’t hyperbole, I’m interested in your reasons for believing that.
Of course it is. The point is that we see all around us (that’s another hyperbole), and it is a recurring theme on LessWrong (that isn’t), that people persist in acting, or failing to act, in ways that they “feel bad” about. As a strategy for change, “feeling bad” doesn’t seem to be effective, does it?
“Making someone feel bad”, or “good”, fares even worse—see this parable.
it is a recurring theme on LessWrong (that isn’t), that people persist in acting, or failing to act, in ways that they “feel bad” about.
I agree.
As a strategy for change, “feeling bad” doesn’t seem to be effective, does it?
I disagree. One of the reasons akrasia is so notable is that feeling bad usually works. Usually touching a hot stove or hitting your thumb with a hammer once is enough to change your behavior. Often being mocked by your peers, or sensing genuine disappointment from your mentors, is enough to change your behavior. It’s only in these weird corner cases where opposing strong motivations collide that we notice the unusual inefficacy of bad feelings, and haul out the rational analysis toolkit.
If feeling bad was actually motivational, all of us who currently feel bad about our (present tense) actions would not have such problems.
But doesn’t the same logic lead me to conclude that pain isn’t aversive? (That is: if pain were actually aversive, people wouldn’t do things that cause them pain. People do things that cause them pain, therefore pain is not aversive.)
The problem with that logic as it applies to pain is that pain can be aversive without completely preventing people from doing something. If a behavior B is X% likely ordinarily, and B becomes Y% likely if coupled to pain, and Y < X, that’s evidence for considering pain aversive even though we still do B. Relatedly, if B is always coupled to pain, then I never get to observe X.
Observing a nonzero Y is not evidence that pain is non-aversive.
It seems to me the same reasoning applies to guilt and other kinds of bad feelings. It’s certainly possible that they are non-aversive, but observing a nonzero frequency of the behaviors that cause it isn’t evidence of that.
There may be other evidence, though, which is why I asked Richard his reasons.
Taboo “feeling bad”, keeping in mind that our normal emotional vocabulary is pretty inadequate. (E.g., it seems to me that shame is basically never useful, but guilt and sadness can be.)
Thanks for the taboo request.
I mean I feel X when I’m not being productive. And yet I do not become productive. I have no idea how to taboo qualia like “X”.
Maybe an extensional definition? That feeling you get when you’ve done something wrong. An uncomfortable and frustrating feeling that makes you feel guilty. A bit like stress.
That’s awfully specific. I wonder how general the non-utility of it is.
And I happen to think that anyone who is trying to make me feel bad about things should be crushed like a bug and their attempts to control through shame disempowered to whatever extent it is convenient to do so.
(I also observe that most people with healthy boundaries will tend to be much more likely to avoid those who are predisposed to attempting to control through guilt or shame.)
You’re eating babies. The wide-eyed idealist points out that eating babies is bad and you are a bad and evil person who must mend his baby-eating ways. You tell the wide-eyed idealist that you refuse to interact with people who try to control you through shame. The wide-eyed idealist thinks for a minute, shrugs, and shoots you.
Did you just create a counterfactual which relies on making me act even more cluelessly naive and banal than the wide-eyed idealist?
That’s as meaningless as it is presumptive and inappropriate.
Now, for the next counterfactual let’s arbitrarily decide that MixedNuts is walking around naked having sex with an echidna while shouting “The World Is Flat!”.
Except for the sex-with-echidna part, this sounds vaguely like something that MixedNuts might do!
Rule 34, man. Rule 34. :-)
You have several decent points there, granted.
I feel that your most powerful point is that wide-eyed idealists are poor utility maximizers and poor rationalists.
The second strongest seems to be that a rationalist will (should, but as a rationalist, they do what they should) attempt better approaches, which seems quite close to one of wedrifid’s implied points in the grandparent. Was this your intended meaning?
Both of these are true, but I wasn’t talking to the wide-eyed idealist, I was talking to the baby-eater. If you grandstand about how a socially-approved and very mild punishment for doing bad things is Evil Boundary-violating Control, people who care about those bad things are less likely to leave you alone than to switch to harsher punishments.
Not necessarily. Wide-eyed idealists, being idealists, are markedly biased towards shaming folks for whatever it is that they consider “bad”. Shaming and guilt-tripping people is not even particularly hard for them, since their whole worldview is often based on these emotions; whereas applying harsher punishments may not even occur to them unless they are rather authoritarian, and it might even be completely infeasible. Thus, reacting assertively is entirely appropriate, at least in the likeliest case.
Which of course raises the question of why you were attacking that particular straw man.
The optimal approach for dealing with enemies who are presumed to have more power than you seems rather irrelevant. Unless the relevance you imply is that radical feminists with an obsession for shame based control already represent a powerful hostile force that we would be foolish to resist? In that case I would of course agree that my words to members of that group would be best served keeping them misinformed about the effectiveness of their strategy of enforcement. All else being equal it tends to be better to keep powerful enemies ineffective.
Okay, my point was that if you accept that people are going to try to control you, it’s rather silly to complain about the means. But apparently you classify all people who attempt to control you as enemies. I suppose that’s a consistent view, and compatible with civilization if you allow control to enforce an agreed-upon set of laws.
But it doesn’t seem to allow for progress. If someone discovers that marital rape is not okay, contrary to mainstream belief, what are they supposed to do? Publishing a paper entitled “Psychological effects of nonconsensual sex between spouses” would count as informing rather than controlling; it would also be vastly less effective than making marital rape illegal, portraying characters who rape their spouses as horrible monsters, and shaming rapists.
If someone tries to control me and I disagree with their position, my answer is not “By attempting to control me you have made yourself my enemy”, but “I don’t agree that bestiality is cruel to animals, so I will fight your attempts to make it unacceptable, but I don’t disapprove of these attempts on principle. For example, I agree with your subargument about indecent exposure, so Knuckles here and I are going to get a room”.
I don’t think this is self-evident enough to be asserted without supporting evidence. There’s evidence that such laws and shame-based guilt-tripping might be much less effective than publishing a good, comprehensive paper.
Prime example: videogame piracy. Strong IP laws. Massive attempts at guilt-tripping and manipulating the general populace. Observed effect: no measurable effect of the laws and anti-piracy measures, and continued growth of piracy. The growth is most likely attributable to other causes.
On the flipside, dev companies that have announced that they won’t do anything against piracy have seen considerable advertisement boosts from it and have on average enjoyed much greater success thanks to this.
Not only do I care about what means people use to control me, for any given person asserting that they don’t care what means people use to control them I would be confident in declaring them confused about their own preferences.
No I don’t, and wouldn’t. Why on earth would I give away my power like that? I’ll do whatever I want in response to people attempting to control me, including complying with indifference, ignoring them, or gaining more social power so that people are unable or unlikely to make that kind of move. Some people doing (or being likely to do) certain things would make them enemies, but that is rare and implies giving them a significant degree of respect and attention. It doesn’t happen often.
We agree on the first point! I’m saying some means are worse than others, and shame/guilt is one of the best ones.
As Dave pointed out, we need to taboo “enemy”. “This person’s actions are bothering me; I’ll minimize annoyance” is treating the person as your enemy in the sense I was using it. Not treating them as an enemy is “This person is trying to do good, yet their actions aren’t the ones I think are best; I shall update on what they believe, and tell them what I believe so they can do the same; if we still disagree, I’ll minimize total annoyance between the two of us”. If most people are your enemies by that definition, you’re… not the typical audience for social justice rhetoric.
Can you taboo “enemy”? I’m not at all convinced it means the same thing throughout this exchange.
Ah, yes. Thanks for making this clear.
This, incidentally, reminds me of the rule of Ko, since I only learned to play Go yesterday. It seems like there’s a meta-pattern of the baby-eater becoming the wide-eyed idealist once you consider the boundary-violating control to be the baby-eating, and the ball starts bouncing around while both camps conscript soldiers, muster armies, and continuously threaten other elements of their opposition while looking for something that invalidates the other’s morality.
Sure. And the other baby-eaters look at that and stop eating babies where the wide-eyed idealist can find out about it, because the idealist has made a credible threat. (A slightly more idealistic idealist might look for nonfatal ways to make a credible threat, but they might not be available.) This happens all the time; much of our civilization is built on it. What’s the problem?
Well, if the wide-eyed idealists are a lot more powerful than the baby-eaters, probably. But if the wide-eyed idealists are less powerful than the baby-eaters, then the baby-eaters may instead be provoked into a war on wide-eyed-idealists, because even if they lose out more in the short term by waging such a war than by putting an end to their baby-eating, they’d be sending the signal that they won’t let extremist minorities dictate values to the majority.
Yup, that’s possible. And if the idealists are more powerful, the baby-eaters might still “be provoked into” (aka “initiate”) a war to make imposing majority preferences too expensive and encourage the majority to accommodate to them. And many other outcomes are possible. Narrating them all might be an entertaining way to spend an afternoon, but I’m still not sure what the point is. Were you disagreeing with wedrifid? Can you clarify your disagreement if so?
EDIT: Whoops! I just noticed you’re not the same poster. Never mind, then...
Fair enough. Sad, but fair :-/
That’s a fascinating discussion topic in and of itself, but it might be out of scope for the current thread. That said:
Some LWers explicitly deny this statement; they might say, “feeling bad doesn’t solve anything in and of itself, since actions matter more than words”, or “feeling bad about things one absolutely cannot control is counterproductive”, or some combination thereof. It’s probably not a good idea to assume that the views of LWers will align with those of the general population, as far as morality is concerned. I could be wrong, however.
Ah, understood, thanks for the clarification. I’m not sure whether operant conditioning alone is enough to account for gender, but I don’t know enough psychology to make a credible claim one way or another.
Indeed.
I think that learning accounts for gender. Whether that learning originates in modeling, operant conditioning, or observational learning is irrelevant to me.
As I asked you on a different thread, how do you know whether this is true ? If you were to ask me that question, I would say, “let’s go out and run a bunch of experiments”, but you have explicitly stated that doing so would be sexist, so… now what ?
There’s one experiment in particular that I advocate—the destruction of patriarchy.
Your current worldview seems to be unfalsifiable without very expensive experiments. (How would you even know if patriarchy had been destroyed anyway?) Maybe we’re doing this backwards. What caused you to become a feminist? What evidence could you have encountered that would have made you a non-feminist?
This is an assertion I’ve heard made a lot by people outside biology and I’d like to hammer it out with somebody who seems well informed.
On what basis can we make this assertion? Biology obviously contributes in a physical sense (people with male gender tend not to have wombs). I assume what you mean is that there are no inherent neurological differences between males and females. But how can we know that? We have a strong prior (from other animals) and lots of circumstantial evidence suggesting that such differences should exist.
I think feminism ought to acknowledge at least the possibility of inherent male-female differences with a simple “so fucking what”. For instance I think that physical abuse of women, by men, probably represents an adaptive, ancestral behavior caused (amongst other things) by inherent neurological differences in men and women. That doesn’t excuse it. We can and have made great progress in conditioning men not to hit women, and hopefully will continue to do so.
My introduction to social justice (as a whole) was through the lens of intersex conditions (wherein people with ambiguous genitalia are assigned a gender at birth, most often female because the surgery is easier). A major problem was that raising male children as female or vice versa ends up causing psychological problems.
The main [unethical] case study was a pair of identical male twins, one of whom had his penis accidentally severed during circumcision; he was then given female reassignment surgery, grew up very confused and depressed, and eventually committed suicide. (Other case studies are less clear-cut but generally indicative of the same problem, not to mention transgender people.) Gender clearly has a biological component.
It also clearly has an environmental component, and I don’t know how those elements interrelate, but ignoring the biological element causes as many problems as ignoring our problems with how we raise children.
...or the knowledge of that child’s parents, doctors, and everyone around him led to them (the adults) treating that child as a freak rather than a woman.
One of the ideas I like in radical feminism is that masculinity is very much defined by the ability to impregnate women (one of the reasons why intersex infants are virtually always assigned female). Conversely, femininity is defined by the ability to be impregnated. Seeing as this child could do neither, and their caregivers knew that, I would hardly expect this child to have typical gender socialization.
The only experiment that could demonstrate this to my satisfaction is a double-blind study where infants are adopted by parents that know only that infant’s current assigned gender, and nothing else.
This is the fallacy of gray.
So what is your opinion on transpeople?
Okay, fair enough. It’s very plausible to me that most of our problems relate to socialization rather than biology. But you seem to be implying they are 100% sociological, which seems wrong.
I’m not totally sure, and I notice that it’s a confusing topic.
Since humans can’t think quantitatively, I prefer to just say “gender is learned” rather than “gender is almost entirely (95-99%) learned but the remaining part is biological.”
In fact, it might be that gender is entirely non-biological. But I’m sure it’s mostly social.
(This is not me setting up a followup ambush argument, just asking)
To what extent would it alter your philosophy if we learned that gender was 70% social? 50% social? Right now, these questions are vague and difficult to test, but they may not always be. And I think it’s much sounder (both from an instrumental and epistemic standpoint) to think in advance about how your philosophy should shift if different facts were confirmed.
I don’t know what the answer is but the existence of transpeople (and genderqueer people and others who don’t fall neatly into the gender binary) suggests to me that it’s unlikely to be 95%+ social. But even if it turned out to be as low as 50% social, dealing with those social issues properly still requires a radical upheaval of the popular consensus on how we should socialize people.
If social learning accounts for gender, what causes gender differences among animals? If your answer is that they don’t have gender in the same sense, what exactly do you mean by gender?
Bias in the humans observing them.
But even then, there aren’t gender differences among animals to anywhere near the degree there supposedly are in humans. Do female chimpanzees get paid less than their male coworkers? Do they wear pink more than the males do?
A lot of your claims sound considerably less crazy now. If the comments still existed, I’d suggest edits.
Operant conditioning is notoriously bad at getting creatures to have behaviors that will adapt to changing environments, so is unlikely to be a significant part of the cause of gender behavior.
I said this literally days ago, and have been saying it the entire time I have been having this discussion.
“Operant conditioning” was introduced into this discussion by me, in a comment that says “I think that learning (operant conditioning, modeling, and observational learning) is the cause of gender.”
Have you come into this discussion after those comments were deleted? Or did you never read them?
If you want other people to avoid having the same experience you did, upvote my comments. EY messaged me earlier today saying he was deleting any downvoted posts, which are primarily mine.
FWIW, I do not think that. I would like people doing bad things to stop doing those things. “Feeling bad” is (I believe) never useful: not to the person having the feeling, and not to anyone else.
Having decided that it’s a bad idea for me to continue discussing things with eridu, it might be better for me to avoid discussing the same things with people who are currently engaged in conversation with him. But I think that in this case we have a substantive disagreement.
Not only do I think that people feeling bad is a powerful moderator of our behavior, and one that it’s useful for other people to know we have; I think deliberately making people feel bad about their actions can be a useful way to motivate them to change their behavior in positive ways. Ideally, nobody should have to feel bad, but then, ideally, nobody should be doing bad things either.
To draw an available example, Gandhi’s efforts to gain independence for India rested almost entirely on making the British colonialists feel bad about themselves, and while giving up their possession of India might have been an economic inevitability, he certainly accelerated it.
I think eridu is overgeneralizing the usefulness of imposing guilt on others though. It appears to me that in order to modify others’ behavior by encouraging them to feel guilty, you need to start with people who have an existing set of moral standards (ones by which they actually operate not simply ones they profess,) which they are not applying in a particular case, and make them feel intuitively that this is a case where they should be applying those standards. For instance, the British citizens mostly had moral standards against attacking civilized, non-resisting people with clubs. If they saw Indian people behaving in a civilized, nonthreatening manner, and being beaten with clubs for challenging colonial rule, the British citizens are going to feel guilty without needing further incitement. On the other hand, if you try to encourage people to feel guilty for, say, stopping women from having abortions, and appeal to them on principles of autonomy, it won’t work because they don’t relate it to anything else they would feel guilty about. You can tell them why they should, but they aren’t going to intuitively put either “women” or “abortion” into a new reference class that completes a preexisting basis for guilt.
I’m not sure whether it’s a separate principle, or an extension of this one, that trying to get people to modify their behavior too radically by appealing to guilt will also backfire. For instance, you can appeal to someone that a consistent application of their principles would lead to them giving away nearly all their money to charity, but most people don’t have preexisting models for guilt whereby they will feel guilty for not giving away nearly everything they own. They can be guilted into “doing their part,” make some contribution, and stop feeling guilty, but if they judge that the person encouraging them to feel guilty is asking too much of them, then they’ll try to avoid the person trying to make them feel guilty, rather than the behaviors that person is trying to encourage them to change.
I suspect the banhammer may be looming over all of this, or the karmic penalty for being under the same bridge as the troll, as eridu’s last ancestor comment has vanished, but I’ll just briefly refer to this reply of mine to eridu, and take up the following:
Bingo. People have these fantasies of being able to reach into other people’s heads and tweak some switches to make them do what they (the ones tweaking) want, but things just don’t work like that. People have their own purposes, and nothing you can do to them is any more than a disturbance to those purposes. What they will do to get what they want in spite of someone else’s meddling will not necessarily resemble, even slightly, what the meddler wanted. See also Goodhart’s law.
How would you like this to occur?
To put it another way, what stops you from murdering somebody you dislike? The (bad feeling of) fear of getting caught? The (bad feeling of) remorse from taking a human’s life?
Or do you really think you’re a Hollywood rationalist, making a cold and precise computation of negative utility as a result of your potential action, and choosing another path?
Like the other poster who said roughly the same thing as you, you seem entirely ignorant of the massive amount of bad feelings present in reality, and of the usefulness of those feelings. Nowhere in the fun theory sequence does EY advocate getting rid of bad feelings; in fact, EY argues against that.
The possibility that they could still contain potential for improving paperclip production (to the extent that that is true).
I’m happy to have one of the most well-loved LW celebrities respond to a post I made!
In the counterfactual world where you did murder someone you disliked, and later found that they were planning on instigating paperclip production, would you feel “good” or “bad”?
Of course, maybe you don’t have something you call “feelings,” but rather think of things purely in terms of expected paperclips. Humans, on the other hand, have difficulty thinking strictly in terms of expected paperclips, but rather learn to associate expected paperclips with good feelings, and negative expected paperclips with bad feelings.
In humans, we have a set of primitive mental actions (like feelings, intuitions, and similar system-one things) that we can sometimes compose into more sophisticated ones (like computing expected paperclips yielded by an action).
As such, you can always say “I wouldn’t kill someone I disliked because I might feel regret for taking a life,” or “I wouldn’t kill someone I disliked because I would be imprisoned and unable to accomplish my goals,” but ultimately, all those things boil down to the general explanation of “feeling bad.”
“Feeling bad” is the default human state of not accomplishing one’s goals.
(As an aside, this is why I think that you, clippy, can be said to have emotions like humans—because I don’t think there’s a difference between your expectation of negative paperclips as a result of a possible future event and fear or dread, nor do I think there’s a difference between a realization that you created fewer paperclips and sadness, loss, or regret.)
Thank you again for replying, Clippy—I’ll go down to my supply room at my earliest convenience and take most of the paperclips as a token by which to remember this interaction, in the process causing my employer to purchase paperclips sooner, raising demand and thus causing more paperclips to be produced.
Thanks for buying more paperclips, you’re a good human.
To answer your question, if I entropized a human and later found out that the human had contained information or productive power that would have, on net, been better for paperclip production, I will evaluate the reasoning that led me to entropize that human, and if I find that I can improve my heuristics in a way that will avoid such killings without also preventing a disproportionate amount of paperclip production, then I will implement that improvement.
I suppose that is just a difference between us. Not a disagreement, but a difference: you are one way and I am another.
You think of disliking someone and ask, what stops you murdering them?
I think of disliking someone and ask (and only because of your question), what would start me murdering them?
Number of days since casual murder was used in a discussion on LessWrong: 0.
None of the above.
(BTW, the Star Trek novels, at least the ones I have read, paint a far more creditable and credible version of Vulcan rationality than the TV shows and films. Vulcans do not suppress their feelings, but master them. A tradition in the real world with multiple long pedigrees. And a shorter one.)
I am well aware of them. But I think people often misinterpret what they are. As I revised my original comment to say, negative feelings tell you something. What matters is to do something about it. All that stuff about negative reinforcement and feelings conceived as similar to physical forces that push you and pull you into doing stuff is fairy tales, fantasies of non-agency. (Which pop up all over the place, not just in BDSM. Strange.)
“Making someone feel bad” is even more of a fairy tale. How do you “make someone feel bad”? What will happen if you try? Here is one person’s hypothetical reaction, and here is the basic problem with the idea.
I’m pretty sure HPMoR already took a dive into this point, in a manner I found sufficiently eloquent to expose the moral nihilism and/or philosophical egocentrism required for the first to occur.
Are you talking about the same things?
(If you haven’t read HPMoR, darn. I was hoping it would provide a speed boost to that line of philosophical reasoning.)
I’ve read HPMoR, but not studied it—which chapter?
I fail to recall the specifics at the moment, but I’ll look for the passage (with better search tools) once I get home in a few hours.
Agency is the fantasy.
That isn’t putting it another way, it’s a different question entirely.
Is that what stops you murdering (more) people? Remorse? Who did you kill last time?
As for me, the fact that if murdering somebody one dislikes were right, then one would have to be extra careful to never be disliked by anybody (if one doesn’t want to be killed), and that would be a lot nastier than people one dislikes staying alive. (Yes, that would make no sense to CDTists, but people aren’t CDTists anyway.)
How do you feel about possibly being murdered?
I’m not sure I understand your question. I’d prefer to not be murdered rather than to be murdered, all other things being equal; are you asking anything else?
I’m asking how you feel about possibly being murdered. You know, emotions. It’s a simple question.
Because if your reasoning is “I don’t murder because, by TDT-style reasoning, murdering leads to my being more likely to be murdered,” and you feel bad about the prospect of being murdered, then you abstain from murdering people because you feel bad.
This relates to the above statement:
If you do not murder people because you would feel bad, feeling bad is useful.
I think this is a trivial point, and if I started this discussion on a different topic, it would be trivially accepted by most of the people currently arguing against it.
Feeling bad is one of the reasons why I don’t do certain things, but not the only one. If I’m convinced something that would make me feel bad would also have desirable consequences that would outweigh that (even considering ethical injunctions, TDT-related considerations, etc.), I try to overcome my emotional hang-up (using precommitment devices, drinking alcohol, etc., if necessary) and do that anyway.
It was a denotatively simple question attempting to assert a non-sequitur rhetorical point.
That doesn’t follow.
Nonsense. Your reasoning is well below the standard expected around here. It may pass elsewhere but only because anything with “boo murder” in it is too hard to argue with regardless of the standards of the content.
Well, let me spell it out even more so than I already have.
Preferences are system 2 concepts.
Over time, system 2 concepts map to system 1 concepts.
As such, if you prefer ice cream to spinach, you will feel bad (in a system 1 sense) if you are promised ice cream but given spinach.
As such, in humans, any preference against a thing means that the human feels bad about that thing.
Arguing about the choice of something that represents the LW concept of negative utility in a hypothetical example is equivalent to arguing about grammar.
Let A(X) be a function such that X.Consciousness becomes terminated (ends, dies, etc.)
I have a preference for NOT A(me).
Over time, the above maps to Feel Bad → A(me)
As such, if I am offered NOT A(me), and given A(me), I will feel bad because I attempt to be reflectively coherent.
As such, my preference for NOT A(me) does, as you claim, imply that I ought to feel bad about A(me).
The above are intended as a rephrasing of your statements, and I fully agree.
However…
You are making the subsequent conclusion that I have:
Feel Bad → A( X | X.isElementOf(people) )
because I have preference for NOT A(me).
wedrifid correctly asserts that this does not follow.
If I’m reading it right I don’t think your formalism fits what I’m trying to argue, but this is a boring point and I’m not terribly interested in taking it further.
“That doesn’t follow” does not mean “I cannot understand your argument”. It means that the argument was fundamentally logically flawed and your reasoning confused.
Some people might feel bad. Others would feel amused (and, incidentally, many would personally develop themselves such that they are more inclined to feel positive than negative emotions in that kind of situation). Most importantly, system 1 refers to a heck of a lot more than emotions. Even system 1 based decisions to avoid something don’t translate to ‘feeling bad’ about it. Especially in people who are mature or experienced.
No it doesn’t.
Irrelevant.
I dispute both your first and your second bullet point. As far as I know there exist both system 1 and system 2 preferences, and it’s not clear that system 2 concepts usually bridge the gap. Can you give some examples or evidence?
Are you using ‘never’ in a figurative sense here? Seeing the absolute claim like that prompted me to think of a whole list of real-world counterexamples despite me probably mostly agreeing with your position. (For a start, making people feel bad is useful in nearly all cases in which breaking someone’s finger is useful. Maintaining dominance, keeping oppressed people oppressed, provoking an enemy into taking hasty reactions against you that you believe you can win, short-term coercion. Making others believe that you have the power to do harm to another without them having any recourse. That kind of thing. That’s before thinking up the cases where actual respectable, decent-sounding outcomes could arise—those are rare but do occur.)
I would write “seldom” instead of “never”.
That is something I find to be a standard but rather annoying geek conversational failure. You could simply have answered your own question:
with “yes”. But “figurative” does not really capture it. All apparently absolute generalisations are relative to their context. Are there substantial exceptions relevant to the context?
Now, on further consideration I might indeed revise my original statement, but not in any of the directions you explore. Feeling bad—that is, having feelings that one does not want—is useful to precisely this extent: it informs you that something is wrong; that there is a conflict somewhere. The useful response to this is find where the conflict is and do something about it. Nothing else is useful about the feeling.
Days since someone used torture to illustrate an argument: 0.
I prefer to write “never” instead of “seldom”. “Seldom” and other such qualifiers too easily protect what one is saying behind a fog of vagueness. It allows one to move one’s soldiers around like the pieces of a sliding-block puzzle, so that wherever the enemy attacks, one can say “Ha! Fooled you! Never said that! Nobody there! Try again!”
Not so. Some reasons:
Psychologist Richard J. Davidson has shown that the affective trait Resilience (speedy recovery from bad feelings) becomes maladaptive when extremely high, as it interferes with empathy.
Almost all judicial systems have concluded that remorse helps avoid recidivism in criminals. (I’m opposed to remorse-based sentencing—but not based on its being irrelevant.)
For better or worse, judicial systems buying into an empirical proposition is not very strong evidence that the proposition is true.