I recently made a dissenting comment on a biggish, well-known-ish social-justice-y blog. The comment was on a post about a bracelet which one could wear and which would zap you with a painful (though presumably safe) electric shock at the end of a day if you hadn’t done enough exercise that day. The post was decrying this as an example of society’s rampant body-shaming and fat-shaming, which had reached such an insane pitch that people are now willing to torture themselves in order to be content with their body image.
I explained as best I could in a couple of shortish paragraphs some ideas about akrasia and precommitment in light of which this device made some sense. I also mentioned in passing that there were good reasons to want to exercise that had nothing to do with an unhealthy body image, such as that it’s good for you and improves your mood. For reasons I don’t fully understand, these latter turned out to be surprisingly controversial points. (For example, surreally enough, someone asked to see my trainer’s certificate and/or medical degree before they would let me get away with the outlandish claim that exercise makes you live longer. Someone else brought up the weird edge case that it’s possible to exercise too much, and that if you’re in such a position then more exercise will shorten, not lengthen, your life.)
Further to that, I was accused of mansplaining twice, and then asked to leave by the blog owner on grounds of being “tedious as fuck”. (Granted, but it’s hard not to end up tedious as fuck when you’re picked up on, and hence have to justify, claims like “exercise is good for you”.)
This is admittedly minor, so why am I posting about it here? Just because it made me realize a few things:
It was an interesting case study in memeplex collision. I felt like not only did I hold a different position to the rest of those present, but we had entirely different background assumptions about how one makes a case for said position. There was a near-Kuhnian incommensurability between us.
I felt my otherwise-mostly-dormant tribal status-seeking circuits fire up—nay, go into overdrive. I had lost face and been publicly humiliated, and the only way to regain the lost status was to come up with the ultimate putdown and “win” the argument. (A losing battle if ever there was one.) It kept coming to the front of my mind when I was trying to get other things done and, at a time when I have plenty of more important things to worry about, I wasted a lot of cycles on running over and over the arguments and formulating optimal comebacks and responses. I had to actively choose to disengage (in spite of the temptation to keep posting) because I could see I had more invested in it and it was taking up a greater cognitive load than I’d ever intended. This seems like a good reason to avoid arguing on the internet in general: it will fire up all the wrong parts of your brain, and you’ll find it harder to disengage than you anticipated.
It made me realize that I am more deeply connected to lesswrong (or the LW-osphere) than I’d previously realized. Up ’til now, I’d thought of myself as an outsider, more or less on the periphery of this community. But evidently I’ve absorbed enough of its memeplex to be several steps of inference away from an intelligent non-rationalist-identifying community. It also made me more grateful for certain norms which exist here and which I had otherwise come to take for granted: curiosity and a genuine interest in learning the truth, and (usually) courtesy to those with dissenting views.
but we had entirely different background assumptions about how one makes a case for said position. There was a near-Kuhnian incommensurability between us.
This is very frustrating and when I realize it is happening, I stop the engagement. In my experience, rationalists are not that different from smart science or philosophy types because we agree on very basic things like the structure of an argument and the probabilistic nature of evidence. But in my experience normal people are very difficult to have productive discussions with. Some glaring things that I notice happening are:
a) Different definitions of evidence. In the Bayesian sense, evidence for A is anything that is more likely to be observed if A is true than if it is false. But for many people, evidence is anything that would happen given A. For example, a conspiracy theorist might say “Well of course they would deny it if it were true, this only proves that I’m right”. (See the formal sketch after this list.)
b) Aristotelianism: the idea that every statement is either true or false and that you can prove statements deductively via reasoning. If you’ve reasoned that something is true, then you’ve proved it, so it must be true. Here is a gem from an Aristotelian friend of mine: “The people in the US are big, it must be the food and they use growth hormones in livestock, therefore people in the US are big because of growth hormones”.
c) Arguments that aren’t actually arguments. Usually these are either insults or signals of tribal affiliation. For example “Good to know you’re better than everyone else” in response to a critical comment. But insults can be more subtle and they can masquerade as arguments. For example in response to a call for higher taxes someone might say “If you love taxes so much then why aren’t you sending extra money to the treasury?”.
d) Arguments that just have nothing to do with their conclusion. An institute called Heartmath stated this gem (rough paraphrase): “The heart sends more information to the brain than the brain does to the heart, therefore the heart is more important than the brain”.
e) Statistical illiteracy. I want to grab a flamethrower every time the following exchange happens:
Salviati: “According to this study people who are X tend to be Y”
Simplicio: “Well I know someone who is X but isn’t Y, so there goes that theory”
f) Logical illiteracy:
Example 1:
Salviati: “If A then B”
Simplicio: “But A isn’t true therefore your argument is invalid”
Example 2:
Simplicio: “X is A therefore X is B”
Salviati: “Let us apply a proof by contradiction. ‘A implies B’ is false because Y is A, but Y is not B”
Simplicio: “How dare you compare X to Y, they are totally different! Y is only not B because …”
Sorry if the symbolic statements are harder to read, I didn’t want to use too many object level issues.
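For what it’s worth, the point in (a) can be made precise. Here is a sketch in standard notation (mine, not the comment’s), using the odds form of Bayes’ theorem:

```latex
% Odds form of Bayes' theorem: E is evidence for A exactly when the
% likelihood ratio exceeds 1, i.e. E is more likely under A than under not-A.
\[
  \frac{P(A \mid E)}{P(\lnot A \mid E)}
  = \frac{P(E \mid A)}{P(E \mid \lnot A)} \cdot \frac{P(A)}{P(\lnot A)}
\]
% The conspiracy theorist checks only that P(E|A) is high ("of course they
% would deny it if it were true"). But denial is about as likely when there
% is no conspiracy, so the likelihood ratio is roughly 1 and the denial
% should move his beliefs almost nowhere.
```

The same notation covers (e): “people who are X tend to be Y” is a claim that P(Y|X) > P(Y|not-X), which is perfectly consistent with knowing someone who is X but not Y. A single counterexample refutes only the universal claim “all X are Y”, which nobody made.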
Sightings:
Arguments that aren’t actually arguments: argument by tribal affiliation was certainly in full force, as well as a certain general condescension bordering on insult.
Statistical illiteracy: in an only minor variant of your hypothetical exchange, I said that very few people are doing too much exercise (tacitly, relative to the number of people who are doing too little), to which someone replied that they had once overtrained to their detriment, as if this disproved my point.
I was also struck by how weird it was that people were nitpicking totally incidental parts of my post, which, even if granted, didn’t actually detract from the essence of what I was saying. This seemed like a sort of “argument by attrition”, or even just a way of saying “go away; we can tell you’re not one of us.”
A general pattern I’ve noticed: when processing an argument to which they are hostile, people often parse generalizations as unsympathetically as they can. General statements which would ordinarily pass without a second thought are taken as absolutes and then “disproved” by citations of noncentral examples and weird edge cases. I think this is pretty bad faith, and it seems common enough. Do we have a name for it? (I have to stop myself doing it sometimes.)
Your symbolic arguments made me laugh.
Social justice, apropos of the name, is largely an exercise in the manipulation of cultural assumptions and categorical boundaries, especially the manipulation of taboos like body weight. We probably shouldn’t expect the habits and standards of the social justice community to be well suited to factual discovery, if only because factual discovery is usually a poor way to convince whole cultures of things.
But the tricky thing about conversation in that style is that disagreement is rarely amicable. In a conversation where external realities are relevant, the ‘winner’ gets social respect and the ‘loser’ gets to learn things, so disagreement can be a mutually beneficial event. But if external realities are not considered, debate becomes a zero-sum game of social influence. In that case, you start to see tactics pop up that might otherwise feel like ‘bad faith.’ For example, you win if the other person finds debate so unpleasant that they stop vocalizing their disagreement, leaving you free to make assertions unopposed. On a site like Less Wrong, this result is catastrophic, but if your focus is primarily on the spread of social influence, then it can be an acceptable cost (or outright free, if you’re of the postmodernist persuasion).
My general sense is that this is a fairly distinctive quality of social justice communities, so your feeling of alienation may have as much to do with the social justice community as it does with the LW memeplex. A random conversation about fat acceptance with culturally modal people might be a great deal less stressful. But then again, you probably shouldn’t trust somebody else on LW to say that.
(I upvoted Simplicio and Salviati, by the way.)
Upvoted for being a plausible, fully charitable explanation of Social Justice rhetorical norms, which I had been unthinkingly categorizing as “Dumb/Evil For No Reason” despite the many highly intelligent people involved.
My general sense is that this is a fairly distinctive quality of social justice communities, so your feeling of alienation may have as much to do with the social justice community as it does with the LW memeplex.
I am very curious to what extent this is true, and would appreciate any evidence people have in either direction.
What is the cause of this? Is it just random fluctuations in culture that reinforce themselves? Perhaps I do not notice these problems in people outside social justice just because they do not have an issue they care enough about to argue about in this way. Perhaps it is just availability bias, as I spend too much time reading things social justice people say. Perhaps it is a function of the fact that the memes they traffic in include the idea that they are being oppressed, which makes them more fearful of outsiders.
I’d call it being uncharitable. Extremely so, in this case.
Salviati: blah blah blah Exercise increases lifespan blah blah blah
Simplicio: THAT’S NOT TRUE THERE EXISTS AN EXCEPTION YOUR ENTIRE ARGUMENT IS INVALID
Because we’re talking about being uncharitable, let’s be charitable for a moment. Simplicio, in fact, made the mathematically proper counterargument: he produced a counterexample to a for-all claim. And finding one flaw with a mathematical proof is, in fact, sufficient to disregard the entire thing.
Clearly, though, Simplicio’s argument is horrible and nobody should ever make it. If we check out the errata for Linear Algebra Done Right, we find that Dr. Axler derped some coefficients on page 81. His proof is incorrect, but any reasonable person can easily see how the coefficients were derped and what the correct coefficients were, and it’s a trivial matter to change the proof to a correct proof.
Analogously, the proper response to an argument that’s technically incorrect, but has an obvious correct argument that you know the author was making even if they phrased it poorly, is to replace the incorrect argument with the correct argument, not scream about the incorrect argument. Anyone who does anything differently should have their internet privileges revoked. It’s more than a trivial inconvenience to write (and read) “the overwhelming scientific consensus indicates that, for most individuals, increasing exercise increases lifespan, although there’s a few studies that may suggest the opposite, and there’s a few outliers for whom increased exercise reduces lifespan” instead of “exercise increases lifespan”.
So, now our argument looks like
Salviati: blah blah blah Exercise increases lifespan blah blah blah
Simplicio: THAT’S NOT TRUE THERE EXISTS AN EXCEPTION YOUR ENTIRE ARGUMENT IS INVALID
Salviati: Principle of charity, bro
Now, if Simplicio applies principle of charity, then they’ll never make arguments like that again, and we’ve resolved the problem. If they don’t, we discontinue debating with them, and we’ve resolved the problem.
There’s a few failure modes here. We create a new route down which debates about akrasia-fighting devices can be derailed. We give a superweapon to people who we probably shouldn’t trust with one. They may google it and find our community and we won’t be able to keep them out of our walled garden. (I jest. Well, maybe.) But introducing principle of charity to people who have clearly never heard of it feels like it should either improve the quality of discourse or identify places we don’t want to spend any time.
In regular English, “exercise increases lifespan” doesn’t mean ‘all exercise increases lifespan’ any more than “ducks lay eggs” means ‘all ducks [including males] lay eggs’.
Well, there’s a frustrating sort of ambiguity there: it’s able to pivot between the two in an uncomfortable way which leaves one vulnerable to exploits like the above.
Sure, and it’s also vulnerable to abuse from the other side:
“I have bogosthenia and can’t exercise because my organs will fall out if I do. How should I extend my lifespan?”
“You should exercise! Exercise increases lifespan!”
“But my organs!”
“Are you saying exercise doesn’t increase lifespan? All these studies say it does!”
“Did they study people with no organs?”
“Why are you bringing up organs again? Exercise increases lifespan. If you start telling people it doesn’t, you’re going to be responsible for N unnecessary deaths per year, you quack.”
“… organs?”
Totally right.
I was also struck by how weird it was that people were nitpicking totally incidental parts of my post, which, even if granted, didn’t actually detract from the essence of what I was saying.
I see this in lots of places where it’s clearly not an argument by attrition. There’s a sizable fraction of people on the Internet who are just over-literal.
I said that very few people are doing too much exercise (tacitly, relative to the number of people who are doing too little), to which someone replied that they had once overtrained to their detriment, as if this disproved my point.
There’s this issue though: what matters is not the fraction of people who exercise too much in the general population, it’s the fraction of people who exercise too much among the people you’re telling to exercise more.
Not even that. It’s the fraction of people who have known someone who thought they exercised too much at least once in their lives.
It’s a first contact situation. You need to establish basic things first, e.g. “do you recognize this is a sequence of primes,” “is there such a thing as ‘good’ and ‘bad’,” “how do you treat your enemies,” etc.
Simplicio: “Well I know someone who is X but isn’t Y, so there goes that theory”
“Aren’t you afraid of flying after that plane was shot down?” “No; flying is still much safer than driving, even taking terrorist attacks into account.” “But that plane was shot down!!!”
Simplicio: “But A isn’t true therefore your argument is invalid”
Sorry for being nit-picky, but that is partly linguistic illiteracy on Salviati’s part. Natural language conditionals are not assertible if their antecedent is false. Thus, by asserting “If A then B”, he implies that A is possible, with which Simplicio might reasonably disagree.
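A quick table may help here, since the two readings of “if” are doing all the work. This is just the textbook material conditional, which I take to be what Salviati has in mind:

```latex
% Truth table for the material conditional: A -> B is vacuously true
% whenever A is false, so "A isn't true" cannot make it false.
\begin{tabular}{cc|c}
  $A$ & $B$ & $A \rightarrow B$ \\ \hline
  T & T & T \\
  T & F & F \\
  F & T & T \\
  F & F & T \\
\end{tabular}
```

On this reading Simplicio’s “But A isn’t true” is simply irrelevant; on the natural-language reading, where asserting “If A then B” suggests A is a live possibility, it is a fair objection. The two parties may just be using different conditionals.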
Usually in these exchanges the truth value of A is under dispute. But it is nevertheless possible to make arguments with uncertain premises to see if the argument actually succeeds given its premises.
“But A isn’t true” is also a common response to counterfactual conditionals—especially in thought experiments.
Well, sometimes thought-experiments are dirty tricks and merit having their premises dismissed.
“If X, Y, and Z were all true, wouldn’t that mean we should kill all the coders?” “Well, hypothetically, but none of X, Y, and Z are true.” ”Aha! So you concede that there are certain circumstances under which we should kill all the coders!”
My preferred answer being:
“I can’t occupy the epistemic state that you suggest — namely, knowing that X, Y, and Z are true with sufficient confidence to kill all the coders. If I ended up believing X, Y, and Z, it’s more likely that I’d hallucinated the evidence or been fooled than that killing all the coders is actually a good idea. Therefore, regardless of whether X, Y, and Z seem true to me, I can’t conclude that we should kill all the coders.”
But that’s a lot more subtle than the thought-experiment, and probably constitutes being tedious as fuck in a lot of social contexts. The simplified version “But killing is wrong, and we shouldn’t do wrong things!” is alas not terribly convincing to people who don’t agree with the premise already.
The simplified version “But killing is wrong, and we shouldn’t do wrong things!” is alas not terribly convincing to people who don’t agree with the premise already.
There are other ways of saying it. I think Iain Banks said it pretty well.
The same thing can still happen with a subjunctive conditional, though.
A: If John came to the party, Mary would be happy. (So we could make Mary happy by making John come to the party.)
B: But John isn’t going to the party, no matter what we do. (So your argument is invalid.)
Also, pace George R. R. Martin, the name is still spelled John. Sorry, no offense, I just couldn’t resist. :)
Ah, thanks. I didn’t know that existed as a short form for Jonathan, and inferred that it was merely another instance of his distorting English spelling in names and titles.
Even with such a generic conditional (where t and t’ are, effectively, universally quantified), the response can make sense with the following implied point: So not “B(now+delta’)”, hence we can’t draw any presently relevant conclusions from your statement, so why are you saying this?
It may or may not be appropriate to dispute the relevance of the conditional in this way, depending on the conversational situation.
Create another nickname, pretending to be a Native American woman. Say that the idea of precommitment to exercise reminds you that in the ancient times the hunters of your tribe believed that it is spiritually important to be fit. (Then the white people came and ruined everything.) If anyone disagrees with you, act emotional and tell them to check their privilege.
The only problem is that winning in this way is a lost purpose. Unless you consider it expanding your communication skills.
I’ve actually seen an argument online in which some social justicers (with the same bad habits as in the story above) were convinced that it is acceptable to care about male circumcision on the grounds that it made SRS (sexual reassignment surgery) more difficult for trans women. Typically (in this community), if you thought male circumcision was an issue, you were quickly shouted down as a dreaded MRA (men’s rights activist).
Don’t think that’d work. Traditional practices and attitudes are a sacred category in this sort of discourse, but that doesn’t mean they’re unassailable—it just means that any sufficiently inconvenient ones get dismissed as outliers or distortions or fabrications rather than being attacked directly. It helps, of course, that in this case they’d actually be fabrications.
Focusing on feelings is the right way to go, though. This probably needs more refinement, but I think you should do something along the lines of saying that exercise makes you feel happier and more capable (which happens to be true, at least for me), and that bringing tangible consequences into the picture helps people escape middle-class patriarchal white Western consumer culture’s relentless focus on immediate short-term gratification (true from a certain point of view, although not a framing I’d normally use). After that you can talk about how traditional cultures are less sedentary, but don’t make membership claims and do not mention outcomes. You’re not torturing yourself to meet racist, sexist expectations of health and fitness; you’re meeting spiritual, mental, and incidentally physical needs that the establishment’s conditioned you to neglect. The shock is a reminder of what they’ve stolen from you.
You’ll probably still get accusations of internalized kyriarchy that way, but it ought to at least be controversial, and it won’t get you accused of mansplaining.
I think this is still too logical to work. Each step of an argument is another place that can be attacked. And because attacks are allowed to be illogical, even the most logical step has maybe 50% chance of breaking the chain. The shortest, and therefore the most powerful argument, is simply “X offends me!” (But to use this argument, you must belong to a group whose feelings are included in the social justice utility function.)
Now that I think about it, this probably explains why in this kind of debate you never get an explanation, only an angry “It’s not my job to educate you!” when you ask about something. Using arguments and explanations is a losing strategy. (Also, it is what the bad guys do. You don’t want to be pattern-matched to them.) Which is why people skilled in playing the game never provide explanations.
I hope your rationalist toucan is signed up for cryonics. :P
In the linked article the author mentions that there are multiple definitions of racism and people often aren’t clear about which one they use; and then decides to use the one without ”..., but only when white people do it” as a default. And says that it is okay if white authors decide to write only white characters, but if they write also non-white characters they should describe their experiences realistically. (Then in the comments someone asks whether saying that every human being is racist doesn’t render the word meaningless, and there is no outrage afterwards. Other people mention that calling someone racist is usually used just to silence or insult them.)
I am not sure whether this even should be called “social justice”. It just seems like a common sense to me. (This specific article; I haven’t read more from the same author yet.)
Somewhat related—writing this comment I realized that I am kinda judging the sanity of the author by how much I agree with her. When I put it this way, it seems horrible. (“You are sane if and only if you agree with me.”) But I admit it is a part of the algorithm I use. Is that a reason to worry? But then I remembered the parable that all correct maps of the same city are necessarily similar to each other, although finding a set of similar maps does not guarantee their correctness (they could be copies of the same original wrong map). So, if you spend some time trying to make a map that reflects the territory better, and you believe you are sane enough, you should expect the maps of other sane people to be similar to yours. Of course this shouldn’t be your only criterion. But, uhm, extraordinary maps require extraordinary evidence; or at least some evidence.
I am not sure whether this even should be called “social justice”. It just seems like common sense to me.
Perhaps social justice done right should just seem like common sense (to reasonable people). I mean, what’s the alternative? Social injustice?
It would be a pity to use the term “social justice” to describe only facepalming irrationality. I mean, you then get this No True Scotsman sort of thing (maybe we should call it No True Nazi or something) where you refuse to say that someone’s engaged in “social justice” even though what they’re doing is crusading against sexism, racism, patriarchy, etc., simply because no True Social Justice Warrior would engage in rational debate or respond to disagreement with sensible engagement rather than outrage.
(Minor vested interest disclosure: I happen to know some people who are both quite social-justice-y and quite rational, and I would find it unfortunate to be unable to say that on account of “social justice” and “rationality” getting gratuitously exclusive definitions.)
even though what they’re doing is crusading against sexism, racism, patriarchy, etc., simply because no True Social Justice Warrior would engage in rational debate or respond to disagreement with sensible engagement rather than outrage.
Slightly off topic, but can I ask why patriarchy is assumed to be obviously bad?
I can certainly see the negative aspects of even moderate patriarchy, and wouldn’t endorse extreme patriarchy or all forms of it, but its positive aspect seems to be civilization as we know it. It makes monogamy viable, reduces the time preferences of the people in a society, makes men invested in society by encouraging them to become fathers and husbands, boosts fertility rates to above replacement, likely makes the average man more attractive to the average woman improving many relationships, results in a political system of easily scalable hierarchy, etc.
So, like with “rationality” and “Hollywood rationality”, we could have “social justice” and, uhm, “tumblr social justice”? Maybe this would work.
My main objection would be that the words “social justice” already feel like a weird way to express “equality” or something like that. It’s already a word that meant something (“justice”) with an adjective that allows you to remove or redefine its parts, and make it a flexible applause light.
Historical note, as I understand things—the emotionally abusive power grab aspects didn’t happen by coincidence. A good many people said that if they were polite and reasonable, what they said got ignored, so they started dumping rage.
I propose an alternative explanation. Some people are just born psychopaths; they love to hurt other people.
Whatever nice cause you start, if it gains just a little power, sooner or later one of them will notice it and decide they like it. Then they will try to join it and optimize it for their own purposes. You will recognize that this happened when people around you start repeating memes that hurting other people is actually good for your cause. Now, in such environment people most skilled in hurting others can quickly rise to the top.
(Actually, both our explanations can be true at the same time. Maybe any movement that doesn’t open its doors to psychopaths is doomed in the long term, because other people simply don’t have enough power to change the society.)
I’d expect rage to be better at converting people already predisposed to belief into True Believers, but worse at making believers of the undecided, and much worse at winning over those predisposed to opposition.
The rage level actually drives away some of the people who would be inclined to help them, and has produced something that looks a lot like PTSD in some of the people in the movement who got hit by opposition from others who were somewhat on the same side.
Still, they’ve gained a certain amount of ground on the average. I have no idea what the outcome will be.
As far as I can tell, there’s very little in the way of physical threats, but (most) people are very vulnerable to emotional attacks.
As I understand it, that’s part of what’s powering SJWs—they felt (and I’d say rightly) that they were and are subject to pervasive emotional attack both from the culture and from individuals, and are trying to make a world they can be comfortable in.
That “as I understand it” is not boilerplate—I read a fair amount of SJ material and (obviously) spent a lot of time thinking and obsessing about it, but this is a huge subject (and isn’t the same in all times, places, and sub-cultures), and I’ve never been an insider.
That would be one option. Or (this is different because “Hollywood rationality” is not actually a variety of rationality) we could say that both those things really are varieties of social justice, but one of them is social justice plus a bunch of crazy ideas and attitudes that unfortunately happen to have come along for the ride in various social-justice-valuing venues.
I don’t think “social justice” is just a weirdly contorted way to say “equality”. The addition of an adjective is necessary because “justice” simpliciter covers things like imprisoning criminals rather than innocent bystanders, and not having kleptocratic laws; “social justice” means something like “justice in people’s social interactions”. In some cases that’s roughly the same thing as equality, but in others equality might be the wrong thing (because different groups want different things, or because some historical injustice is best dealt with by a temporary compensating inequality in the other direction). Whether such inequality ever is a good approach, and how often if so, is a separate matter; but unless it’s inconceivable, “equality” can’t be the right word.
Still, I’m not greatly enamoured of the term “social justice”. But it’s there, and it seems like it means something potentially useful, and it would be a shame if it ended up only being applicable where there’s a whole lot of craziness alongside the concern for allegedly marginalized groups.
I realized that I am kinda judging the sanity of the author by how much I agree with her.
That doesn’t seem horrible to me. There are many ways of being insane, but one of them is having a very wrong map (and you can express one of the standard criteria for clinical-grade mental illness—interferes with functioning in normal life—as “your map is so wrong you can’t traverse the territory well”).
I think the critical difference here is whether you disagree about facts (which are, hopefully, empirically observable and statements about them falsifiable) or whether you disagree about values, opinions, and forecasts. Major disagreement about facts is a good reason to doubt someone’s sanity, but about values and predictions is not.
Since I’d have to overcome a really strong ugh field to read it again, I’d like to check on whether my memory of it is correct—the one thing I didn’t like about it was Mohanraj saying (implying?) that if you behave decently you won’t be attacked. She was making promises about people who aren’t as rational as she is.
Why an ugh field? Those essays came out when racefail was going on, and came with the added info that it took Mohanraj two and a half weeks to write them, and (at least as I read it) I should feel really guilty that a woman of color had to do the work. I just couldn’t deal. I’m pretty sure the guilt trip wasn’t from Mohanraj.
I read them later, and thought they were good except for the caveat mentioned above.
That line was somewhat tongue-in-cheek. I wouldn’t go that far over the top in a real discussion, although I might throw in a bit of anti-*ist rhetoric as an expected shibboleth.
That being said, these people aren’t stupid. They don’t generally have the same priorities or epistemology that we do, and they’re very political, but that’s true of a lot of people outside the gates of our incestuous little nerd-ghetto. Winning, in the real world, implies dealing with these people, and that’s likely to go a lot better if we understand them.
Does that mean we should go out and pick fights with mainstream social justice advocates? No, of course not. But putting ourselves in their shoes every now and then can’t hurt.
This makes some sense. I think part of the reason my contribution was taken so badly was, as I said, that I was arguing in a style that was clearly different to that of the rest of those present, and as such I was (in Viliam Bur’s phrasing) pattern-matched as a bad guy. (In other words, I didn’t use the shibboleths.)
Significantly, no-one seemed to take issue with the actual thrust of my point.
“These people” are not homogenous and there are a lot of idiots among them. However what most of them are is mindkilled. They won’t update so why bother?
However what most of them are is mindkilled. They won’t update so why bother?
Because we occasionally might want to convince them of things, and we can’t do that without understanding what they want to see in an argument. Or, more generally, because it behooves us to get better at modeling people that don’t share our epistemology or our (at least, my) contempt for politics.
Because we occasionally might want to convince them of things, and we can’t do that without understanding what they want to see in an argument.
So, um, if you really let Jesus into your heart and accept Him as your personal savior you will see that He wants you to donate 50% of your salary to GiveWell’s top charities..?
it behooves us to get better at modeling people that don’t share our epistemology or our (at least, my) contempt for politics.
True, but you don’t do that by mimicking their rhetoric.
True, but you don’t do that by mimicking their rhetoric.
The point isn’t to blindly mimic their rhetoric, it’s to talk their language: not just the soundbites, but the motivations under them. To use your example, talking about letting Jesus into your heart isn’t going to convince anyone to donate a large chunk of their salary to GiveWell’s top charities. There’s a Christian argument for charity already, though, and talking effective altruism in those terms might well convince someone that accepts it to donate to real charity rather than some godawful sad puppies fund; or to support or create Christian charities that use EA methodology, which given comparative advantage might be even better. But you’re not going to get there without understanding what makes Christian charity tick, and it’s not the simple utilitarian arguments that we’re used to in an EA context.
The point isn’t to mimic their rhetoric, it’s to talk their language
There is a price: to talk in their language is to accept their framework. If you are making an argument in terms of fighting the oppression of white male patriarchy, you implicitly agree that the white male patriarchy is in the business of oppression and needs to be fought. If you’re using the Christian argument for charity to talk effective altruism, you are implicitly accepting the authority of Jesus.
If you’re using the Christian argument for charity to talk effective altruism, you are implicitly accepting the authority of Jesus.
Yes, you are. That’s a price you need to pay if you want to get something out of mindkilled people, which incidentally tends to be the first step in introducing outside ideas and thereby making them less mindkilled. Reject it in favor of some kind of radical honesty policy, and unless you’re very lucky and very charismatic you’ll find yourself with no allies and few friends. But hey, you’ll have the moral high ground! I hear that and $1.50 will get you a cup of coffee.
(My argument in the ancestor wasn’t really about fighting the white male patriarchy, though; the rhetoric about that is just gingerbread, like appending “peace be upon him” to the name of the Prophet. It’s about the importance of subjective experience and a more general contrarianism—which are also SJ themes, just less obvious ones.)
That’s a price you need to pay if you want to get something out of mindkilled people, which incidentally tends to be the first step in making them less mindkilled.
Maybe it’s the price you need to pay, but I don’t see how being able to get something out of mindkilled people is the first step in making them less mindkilled. You got what you wanted and paid for it by reinforcing their beliefs—why would they become more likely to change them?
some kind of radical honesty policy
I am not going for radical honesty. What I’m suspicious of is using arguments which you yourself believe are bullshit and at the same time pretending to be a bona fide member of a tribe to which you don’t belong.
And, by the way, there seems to be a difference between Jesus and SJ here. When talking to a Christian I can be “radically honest” and say something along the lines of “I myself am not a Christian but you are, and don’t you recall how Jesus said that …”. But that doesn’t work with SJWs—if I start by saying “I myself don’t believe in white male oppression but you do and therefore you should conclude that...”, I will be immediately crucified for the first part and no one will pay any attention to the second.
I don’t see how being able to get something out of mindkilled people is the first step in making them less mindkilled. You got what you wanted and paid for it by reinforcing their beliefs—why would they become more likely to change them?
You’re not substantially reinforcing their beliefs. Beliefs entangled with your identity don’t follow Bayesian rules: directly showing anything less than overpoweringly strong evidence against them (and even that isn’t a sure thing) tends to reinforce them by provoking rationalization, while accepting them is noise. If you don’t like Christianity, you wouldn’t want to use the Christian argument for charity with a weak or undecided Christian; but they aren’t going to be mindkilled in this regard, so it wouldn’t make a good argument anyway.
On the other hand, sneaking new ideas into someone’s internal memetic ecosystem tends to put stress on any totalizing identities they’ve adopted. For example, you might have to invoke God’s commandment to love thy neighbor as thyself to get a fundamentalist Christian to buy EA in the first place; but now they have an interest in EA, which could (e.g.) lead them to EA forums sharing secular humanist assumptions. Before, they’d have dismissed this as (e.g.) some kind of pathetic atheist attempt at constructing a morality in the absence of God. But now they have a shared assumption, a point of commonality. That’ll lead to cognitive dissonance, but only in the long run—timescales you can’t work on unless you’re very good friends with this person.
That cognitive dissonance won’t always resolve against Christianity, but sometimes it will. And when it doesn’t, you’ll usually still have left them with a more nuanced and less stereotypical Christianity.
You’re not substantially reinforcing their beliefs.
Well, yes, if we’re talking about a single conversation, especially over the ’net, you are not going to affect much of anything. Still, even if you do not reinforce, you confirm. And there are different ways to get mindkilled; entangling your identity with beliefs is only one of them...
On the other hand, sneaking new ideas into someone’s internal memetic ecosystem tends to put stress on any totalizing identities they’ve adopted.
True, but the same caveat applies—if we’re talking about one or two conversations you’re not going to produce much if any effect.
In any case, my line of thinking in this subthread wasn’t concerned so much with the effectiveness of deconversion, but rather was more about the willingness to employ arguments that you don’t believe but your discussion opponent might. I understand the need to talk to people in the language they understand, but there is a fine line to walk here.
Traditional practices and attitudes are a sacred category in this sort of discourse, but that doesn’t mean they’re unassailable—it just means that any sufficiently inconvenient ones get dismissed as outliers or distortions or fabrications rather than being attacked directly.
That works a lot less well arguing against someone who is claiming to be from that culture.
It helps, of course, that in this case they’d actually be fabrications.
So? Most of the “traditional practices” SJ types sanctify are fabrications. That doesn’t stop them from sanctifying them.
That works a lot less well arguing against someone who is claiming to be from that culture.
I’ve more than once seen people accused of not really being whatever they claim to be. “You’re wrong about your culture’s traditional practices” isn’t a legal move, but “you’re obviously an imposter” is.
A lot of people are pointing out that perhaps it wasn’t very wise for you to engage with such commenters. I mostly agree. But I also partially disagree. The negative effects of you commenting there, of course, are very clear. But, there are positive effects as well.
The outside world—i.e. outside the rationalist community and academia—shouldn’t get too isolated from us. While many people made stupid comments, I’m sure that there were many more people who looked at your argument and went, “Huh. Guess I didn’t think of that,” or at least registered some discomfort with their currently held worldview. Of course, none of them would’ve commented.
Also, I’m sure your way of argumentation appealed to many people, and they’ll be on the lookout for this kind of argumentation in the future. Maybe one of them will eventually stumble upon LW. Looking at the quality of argumentation was also how I selected which blogs to follow. I tried (and often failed) to avoid those blogs that employed rhetoric and emotional manipulation. One of the good blogs eventually linked to LW.
Thus, while the cost to you was probably great and perhaps wasn’t worth the effort, I don’t think it was entirely fruitless.
I was glad to at least disrupt the de facto consensus. I agree that it’s worth bearing in mind the silent majority of the audience as well as those who actually comment. The former probably outnumber the latter by an order of magnitude (or more?).
I suppose the meta-level point was also worth conveying. Ultimately, I don’t care a great deal about the object-level point (how one should feel about a silly motivational bracelet) but the tacit, meta-level point was perhaps: “There are other ways, perhaps more useful, to evaluate things than the amount of moral indignation one can generate in response.”
I don’t think it’s a good idea to get into a discussion on any forum where the term “mansplaining” is used to stifle dissent, even (or especially) if you have “a clear, concise, self-contained point”.
I don’t think it’s a good idea to get into a discussion on any forum where the term “mansplaining” is used to stifle dissent
True for a serious discussion, but such forums make for interesting ethnographic expeditions :-) And if you’re not above occasional trolling for teh lulz… :-D
Here’s an example from someone who believes strongly in cultivating internal motivation—the opposite of shocking yourself if you don’t do enough crudely monitored exercise.
The punishment approach to exercise arguably makes people less likely to exercise at all, and I think it increases the risk of injuries from exercise.
There really is a cultural problem—how popular is the approach from the link compared to The Biggest Loser and boot camps for civilians?
Sidetrack: I’m imagining a shock bracelet to discourage involvement in pointless internet arguments. How would it identify them? Would people use it?
A thing I would like is this. I would totally enable this on LW if it were an option. (And if someone volunteered to write a Firefox plugin to achieve the same client-side, they’d have all my gratitude.)
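For what it’s worth, the client-side half is not much code. Below is a minimal, hypothetical sketch of what a WebExtension content script could look like; every name and number in it (the ten-minute figure, the textarea heuristic) is my own invention, not an existing plugin:

```typescript
// content-script.ts: a hypothetical "cooling-off" script for argument-prone
// sites, declared in the extension's manifest to run on domains you choose.

const COOL_OFF_MS = 10 * 60 * 1000; // force a ten-minute wait before replying

function disarmCommentBoxes(): void {
  // Crude heuristic: most comment forms are <textarea> elements.
  document.querySelectorAll<HTMLTextAreaElement>("textarea").forEach((box) => {
    if (box.dataset.disarmed) return; // disarm each box at most once
    box.dataset.disarmed = "true";
    box.disabled = true;
    box.placeholder = "Is this argument worth the cognitive load? Wait ten minutes.";
    // Deliberate replies stay possible: the box unlocks after the cooling-off period.
    setTimeout(() => { box.disabled = false; }, COOL_OFF_MS);
  });
}

disarmCommentBoxes();
// Comment boxes injected later (e.g. by a "reply" button) need watching too.
new MutationObserver(disarmCommentBoxes).observe(document.body, {
  childList: true,
  subtree: true,
});
```

Detecting that a thread is a pointless argument, as opposed to merely a thread, is of course the hard part; this only rations the typing.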
The whole idea of optimisation is controversial among some people because they see it as the opposite of being yourself.
Someone else brought up the weird edge case that it’s possible to exercise too much, and that if you’re in such a position then more exercise will shorten, not lengthen, your life
It’s no weird edge case. If I remember right, a recent study that made the rounds in the media came to exactly that conclusion.
This seems like a good reason to avoid arguing on the internet in general: it will fire up all the wrong parts of your brain, and you’ll find it harder to disengage than you anticipated.
The shock bracelet intrigues me. I imagine it could be interfaced to an app that could give shocks under all manner of chosen conditions. Do you have any more details? Is it a real thing, or (like this) just clickbait that no-one intends actually making?
I think this has the same problem as any kind of self-conditioning. I watched the video, and the social community and gaming thing seem genuinely motivating, but I’m not sure about the punishment, because you can always take the wristband off. Maybe there’s a commitment and social pressure not to take the wristband off, but ultimately you yourself are responsible for keeping the wristband on your wrist, and this is basically self-conditioning. Yvain made a good post about it.
Suppose you have a big box of candy in the fridge. If you haven’t eaten it all already, that suggests your desire for candy isn’t even enough to reinforce the action of going to the fridge, getting a candy bar, and eating it, let alone the much more complicated task of doing homework. Yes, maybe there are good reasons why you don’t eat the candy – for example, you’re afraid of getting fat. But these issues don’t go away when you use the candy as a reward for homework completion. However little you want the candy bar you were barely even willing to take out of the fridge, that’s how much it’s motivating your homework.
If the zap had any kind of motivating effect, wouldn’t that effect first be directed towards taking the wristband off your wrist, rather than towards a much more distant and complex sequence of actions like going to the gym? I don’t think a small zap on its own could motivate me to do anything, even something simple like leaving the computer. Also, I agree with Yvain that rewards and punishments only seem to have a real effect when they happen unpredictably.
A more low-tech solution, which is recommended by countless self-help books/webpages of dubious authority, is to snap a rubber band against your own wrist when you have done something bad. It seems this should work roughly as well as the Pavlov? In theory it should suffer the same “can’t condition yourself” problem. On the other hand, if lots of people recommend it, then maybe it works?
I suspect that if electric zapping or snapping a rubber band work (I don’t know if they do), they do so by raising your level of attention to the problematic behaviour. A claim of Perceptual Control Theory is that reorganisation—learning to control something better—follows attention. Yanking your attention onto the situation whenever you’re contemplating or committing sinful things may enable you to stop wanting to commit them.
bash.org #4281:
<Zybl0re> get up
<Zybl0re> get on up
<Zybl0re> get up
<Zybl0re> get on up
<phxl|paper> and DANCE
* nmp3bot dances :D-<
* nmp3bot dances :D|-<
* nmp3bot dances :D/-<
<[SA]HatfulOfHollow> i'm going to become rich and famous after i invent a device that allows you to stab people in the face over the internet
I wonder what they think of Beeminder, that allows you to financially torture yourself over anything you want to. Not that I’m going to go over there, wherever it is, to ask.
I recently made a dissenting comment on a biggish, well-known-ish social-justice-y blog. The comment was on a post about a bracelet which one could wear and which would zap you with a painful (though presumably safe) electric shock at the end of a day if you hadn’t done enough exercise that day. The post was decrying this as an example of society’s rampant body-shaming and fat-shaming, which had reached such an insane pitch that people are now willing to torture themselves in order to be content with their body image.
I explained as best I could in a couple of shortish paragraphs some ideas about akrasia and precommitment in light of which this device made some sense. I also mentioned in passing that there were good reasons to want to exercise that had nothing to do with an unhealthy body image, such as that it’s good for you and improves your mood. For reasons I don’t fully understand, these latter turned out to be surprisingly controversial points. (For example, surreally enough, someone asked to see my trainer’s certificate and/or medical degree before they would let me get away with the outlandish claim that exercise makes you live longer. Someone else brought up the weird edge case that it’s possible to exercise too much, and that if you’re in such a position then more exercise will shorten, not lengthen, your life.)
Further to that, I was accused of mansplaining twice. and then was asked to leave by the blog owner on grounds of being “tedious as fuck”. (Granted, but it’s hard not to end up tedious as fuck when you’re picked up on and hence have to justify claims like “exercise is good for you”.)
This is admittedly minor, so why am I posting about it here? Just because it made me realize a few things:
It was an interesting case study in memeplex collision. I felt like not only did I hold a different position to the rest of those present, but we had entirely different background assumptions about how one makes a case for said position. There was a near-Kuhnian incommensurability between us.
I felt my otherwise-mostly-dormant tribal status-seeking circuits fire up—nay, go into overdrive. I had lost face and been publicly humiliated, and the only way to regain the lost status was to come up with the ultimate putdown and “win” the argument. (A losing battle if ever there was one.) It kept coming to the front of my mind when I was trying to get other things done and, at a time when I have plenty of more important things to worry about, I wasted a lot of cycles on running over and over the arguments and formulating optimal comebacks and responses. I had to actively choose to disengage (in spite of the temptation to keep posting) because I could see I had more invested in it and it was taking up a greater cognitive load than I’d ever intended. This seems like a good reason to avoid arguing on the internet in general: it will fire up all the wrong parts of your brain, and you’ll find it harder to disengage than you anticipated.
It made me realize that I am more deeply connected to lesswrong (or the LW-osphere) than I’d previously realized. Up ’til now, I’d thought of myself as an outsider, more or less on the periphery of this community. But evidently I’ve absorbed enough of its memeplex to be several steps of inference away from an intelligent non-rationalist-identifying community. It also made me more grateful for certain norms which exist here and which I had otherwise gotten to take for granted: curiosity and a genuine interest in learning the truth, and (usually) courtesy to those with dissenting views.
This is very frustrating and when I realize it is happening, I stop the engagement. In my experience, rationalists are not that different from smart science or philosophy types because we agree on very basic things like the structure of an argument and the probabilistic nature of evidence. But in my experience normal people are very difficult to have productive discussions with. Some glaring things that I notice happening are:
a) Different definitions of evidence. The Bayesian definition of evidence is anything that makes A more likely than not A. But for many people, evidence is anything that would happen given A. For example a conspiracy theorist might say “Well of course they would deny it if were true, this only proves that I’m right”.
b) Aristotelianism: the idea that every statement is either true or false and you can prove statements deductively via reasoning. If you’ve reasoned that something is true, then you’ve proved it so it must be true. Here is a gem from an Aristotelian friend of mine “The people in the US are big, it must be the food and they use growth hormones in livestock, therefore people in the US are big because of growth hormones”.
c) Arguments that aren’t actually arguments. Usually these are either insults or signals of tribal affiliation. For example “Good to know you’re better than everyone else” in response to a critical comment. But insults can be more subtle and they can masquerade as arguments. For example in response to a call for higher taxes someone might say “If you love taxes so much then why aren’t you sending extra money to the treasury?”.
d) Arguments that just have nothing to do with their conclusion. An institute called Heartmath stated this gem (rough paraphrase): “The heart sends more information to the brain than the brain does to the heart therefore the heart is more important that the brain”.
e) Statistical illiteracy. I want to grab a flamethrower every time the following exchange happens:
Salviati: “According to this study people who are X tend to be Y”
Simplicio: “Well I know someone who is X but isn’t Y, so there goes that theory”
f) Logical illiteracy:
Example 1:
Salviati: ” If A then B”
Simplicio: “But A isn’t true therefore your argument is invalid”
Example 2:
Simplicio: “X is A therefore X is B”
Salviati: “Let us apply a proof by contradiction. ‘A implies B’ is false because Y is A, but Y is not B”
Simplicio: “How dare you compare X to Y, they are totally different! Y is only not B because …”
Sorry if the symbolic statements are harder to read, I didn’t want to use too many object level issues.
Sightings:
Arguments that aren’t actually arguments: argument by tribal affiliation was certainly in full force, as well as a certain general condescension bordering on insult.
Statistical illiteracy: in an only minor variant of your hypothetical exchange, I said that very few people are doing too much exercise (tacitly, relative to the number of people who are doing too little), to which someone replied that they had once overtrained to their detriment, as if this disproved my point.
I was also struck by how weird it was that people were nitpicking totally incidental parts of my post, which, even if granted, didn’t actually deduct from the essence of what I was saying. This seemed like a sort of “argument by attrition”, or even just a way of saying “go away; we can tell you’re not one of us.”
A general pattern I’ve noticed: when processing an argument to which they are hostile, people often parse generalizations as unsympathetically as they can. General statements which would ordinarily pass without a second thought are taken as absolutes and then “disproved” by citations of noncentral examples and weird edge cases. I think this is pretty bad faith, and it seems common enough. Do we have a name for it? (I have to stop myself doing it sometimes.)
Your symbolic arguments made me laugh.
Social justice, apropos of the name, is largely an exercise in the manipulation of cultural assumptions and categorical boundaries- especially the manipulation of taboos like body weight. We probably shouldn’t expect the habits and standards of the social justice community to be well suited to factual discovery, if only because factual discovery is usually a poor way to convince whole cultures of things.
But the tricky thing about conversation in that style is that disagreement is rarely amicable. In a conversation where external realities are relevant, the ‘winner’ gets social respect and the ‘loser’ gets to learn things, so disagreement can be mutually beneficial happy event. But if external realities are not considered, debate becomes a zero-sum game of social influence. In that case, you start to see tactics pop up that might otherwise feel like ‘bad faith.’ For example, you win if the other person finds debate so unpleasant that they stop vocalizing their disagreement, leaving you free to make assertions unopposed. On a site like Less Wrong, this result is catastrophic- but if your focus is primarily on the spread of social influence, then it can be an acceptable cost (or outright free, if you’re of the postmodernist persuasion).
My general sense is that this is a fairly distinctive quality of social justice communities, so your feeling of alienation may have as much to do with the social justice community as it does with the LW memeplex. A random conversation about fat acceptance with culturally modal people might be a great deal less stressful. But then again, you probably shouldn’t trust somebody else on LW to say that.
(I upvoted Simplicio and Salviati, by the way.)
Upvoted for being a plausible, fully charitable explanation of Social Justice rhetorical norms, which I had been unthinkingly categorizing as “Dumb/Evil For No Reason” despite the many highly intelligent people involved.
I am very curious to what extent this is true, and would appreciate any evidence people have in either direction.
What is the cause of this? Is it just random fluctuation in culture that reinforce themselves? Perhaps I do not notice these problems in non social justice people just because they do not have an issue they care enough about to argue in this way. Perhaps, It is just availability bias as I spend too much time reading things social justice people say. Perhaps it is a function of the fact that the memes they are talking have this idea that they are being oppressed which makes them more fearful of outsiders.
I’d call it being uncharitable. Extremely so, in this case.
Salviati: blah blah blah Exercise increases lifespan blah blah blah
Simplicio: THAT’S NOT TRUE THERE EXISTS AN EXCEPTION YOUR ENTIRE ARGUMENT IS INVALID
Because we’re talking about being uncharitable, let’s be charitable for a moment. Simplicio, in fact, made the mathematically proper counterargument: he produced a counterexample to a for-all claim. And finding one flaw with a mathematical proof is, in fact, sufficient to disregard the entire thing.
Clearly, though, Simplicio’s argument is horrible and nobody should ever make it. If we check out the errata for Linear Algebra Done Right, we find that Dr. Axler derped some coefficients on page 81. His proof is incorrect, but any reasonable person can easily see how the coefficients were derped and what the correct coefficients were, and it’s a trivial matter to change the proof to a correct proof.
Analogously, the proper response to an argument that’s technically incorrect, but has an obvious correct argument that you know the author was making even if they phrased it poorly, is to replace the incorrect argument with the correct argument, not scream about the incorrect argument. Anyone who does anything differently should have their internet privileges revoked. It’s more than a trivial inconvenience to write (and read) “the overwhelming scientific consensus indicates that, for most individuals, increasing exercise increases lifespan, although there’s a few studies that may suggest the opposite, and there’s a few outliers for whom increased exercise reduces lifespan” instead of “exercise increases lifespan”.
So, now our argument looks like
Salviati: blah blah blah Exercise increases lifespan blah blah blah
Simplicio: THAT’S NOT TRUE THERE EXISTS AN EXCEPTION YOUR ENTIRE ARGUMENT IS INVALID
Salviati: Principle of charity, bro
Now, if Simplicio applies the principle of charity, then they’ll never make arguments like that again, and we’ve resolved the problem. If they don’t, we discontinue debating with them, and we’ve resolved the problem.
There’s a few failure modes here. We create a new route down which debates about akrasia-fighting devices can be derailed. We give a superweapon to people who we probably shouldn’t trust with one. They may google it and find our community and we won’t be able to keep them out of our walled garden. (I jest. Well, maybe.) But introducing principle of charity to people who have clearly never heard of it feels like it should either improve the quality of discourse or identify places we don’t want to spend any time.
In regular English, “exercise increases lifespan” doesn’t mean ‘all exercise increases lifespan’ any more than “ducks lay eggs” means ‘all ducks [including males] lay eggs’.
Well, there’s a frustrating sort of ambiguity there: it’s able to pivot between the two in an uncomfortable way which leaves one vulnerable to exploits like the above.
Sure, and it’s also vulnerable to abuse from the other side:
“I have bogosthenia and can’t exercise because my organs will fall out if I do. How should I extend my lifespan?”
“You should exercise! Exercise increases lifespan!”
“But my organs!”
“Are you saying exercise doesn’t increase lifespan? All these studies say it does!”
“Did they study people with no organs?”
“Why are you bringing up organs again? Exercise increases lifespan. If you start telling people it doesn’t, you’re going to be responsible for N unnecessary deaths per year, you quack.”
“… organs?”
Totally right.
I see this in lots of places where it’s clearly not an argument by attrition. There’s a sizable fraction of people on the Internet who are just over-literal.
There’s this issue though—what matters is not the fraction of people who exercise too much among the general population, but the fraction of people who exercise too much among the people you’re telling to exercise more.
Not even that. It’s the fraction of people who have known someone who thought they exercised too much at least once in their lives.
It’s a first contact situation. You need to establish basic things first, e.g. “do you recognize this is a sequence of primes,” “is there such a thing as ‘good’ and ‘bad’,” “how do you treat your enemies,” etc.
“Aren’t you afraid of flying after that plane was shot down?” “No; flying is still much safer than driving, even taking terrorist attacks into account.” “But that plane was shot down!!!”
Sorry for being nit-picky, but that is partly linguistic illiteracy on Salviati’s part. Natural language conditionals are not assertible if their antecedent is false. Thus, by asserting “If A then B”, he implies that A is possible, with which Simplicio might reasonably disagree.
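(Semi-formally, as I read the point: the material conditional A → B is vacuously true whenever ¬A, so “but A isn’t true” cannot be an objection to the conditional’s truth value. What it targets is the pragmatic implication of asserting the conditional, namely that A is a live possibility: ◇A, in modal notation.)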
Usually in these exchanges the truth value of A is under dispute. But it is nevertheless possible to make arguments with uncertain premises to see if the argument actually succeeds given its premises.
“But A isn’t true” is also a common response to counterfactual conditionals—especially in thought experiments.
Well, sometimes thought-experiments are dirty tricks and merit having their premises dismissed.
“If X, Y, and Z were all true, wouldn’t that mean we should kill all the coders?”
“Well, hypothetically, but none of X, Y, and Z are true.”
“Aha! So you concede that there are certain circumstances under which we should kill all the coders!”
My preferred answer being:
“I can’t occupy the epistemic state that you suggest — namely, knowing that X, Y, and Z are true with sufficient confidence to kill all the coders. If I ended up believing X, Y, and Z, it’s more likely that I’d hallucinated the evidence or been fooled than that killing all the coders is actually a good idea. Therefore, regardless of whether X, Y, and Z seem true to me, I can’t conclude that we should kill all the coders.”
But that’s a lot more subtle than the thought-experiment, and probably counts as fucking tedious in a lot of social contexts. The simplified version “But killing is wrong, and we shouldn’t do wrong things!” is, alas, not terribly convincing to people who don’t already agree with the premise.
There are other ways of saying it. I think Iain Banks said it pretty well.
Can you give a quick example with the blanks filled in? I’m interested, but I’m not sure I follow.
A: If John comes to the party, Mary will be happy. (So there is a chance that Mary will be happy.)
B: But John isn’t going to the party. (So your argument is invalid.)
That’s what the subjunctive is for. If A had said “If Jon came to the party, Mary would be happy”, …
The same thing can still happen with a subjunctive conditional, though.
A: If John came to the party, Mary would be happy. (So we could make Mary happy by making John come to the party.)
B: But John isn’t going to the party, no matter what we do. (So your argument is invalid.)
Also, pace George R. R. Martin, the name is still spelled John. Sorry, no offense, I just couldn’t resist. :)
Jon—short for Jonathan—was a perfectly good name long before George R R Martin.
Ah, thanks. I didn’t know that existed as a short form for Jonathan, and inferred that it was merely another instance of his distorting English spelling in names and titles.
It depends why Salviati is bringing it up.
“If X(t), then A(t+delta). If A(t’) then B(t’+delta’).”
“But, not A(now)!”
Even with such a generic conditional (where t and t’ are, effectively, universally quantified), the response can make sense with the following implied point: So not “B(now+delta’)”, hence we can’t draw any presently relevant conclusions from your statement, so why are you saying this?
It may or may not be appropriate to dispute the relevance of the conditional in this way, depending on the conversational situation.
Let me rephrase that with more words:
“If we do X, then A will happen. If A happens, then B happens.”
“But A isn’t happening.”
Here is how to win the argument:
Create another nickname, pretending to be a Native American woman. Say that the idea of precommitment to exercise reminds you that in the ancient times the hunters of your tribe believed that it is spiritually important to be fit. (Then the white people came and ruined everything.) If anyone disagrees with you, act emotional and tell them to check their privilege.
The only problem is that winning in this way is a lost purpose. Unless you consider it expanding your communication skills.
I’ve actually seen an argument online in which some social justicers (with the same bad habits as in the story above) were convinced that it is acceptable to care about male circumcision on the grounds that it makes SRS (sexual reassignment surgery) more difficult for trans women. Typically (in this community), if you thought male circumcision was an issue, you were quickly shouted down as a dreaded MRA (men’s rights activist).
Don’t think that’d work. Traditional practices and attitudes are a sacred category in this sort of discourse, but that doesn’t mean they’re unassailable—it just means that any sufficiently inconvenient ones get dismissed as outliers or distortions or fabrications rather than being attacked directly. It helps, of course, that in this case they’d actually be fabrications.
Focusing on feelings is the right way to go, though. This probably needs more refinement, but I think you should do something along the lines of saying that exercise makes you feel happier and more capable (which happens to be true, at least for me), and that bringing tangible consequences into the picture helps people escape middle-class patriarchal white Western consumer culture’s relentless focus on immediate short-term gratification (true from a certain point of view, although not a framing I’d normally use). After that you can talk about how traditional cultures are less sedentary, but don’t make membership claims and do not mention outcomes. You’re not torturing yourself to meet racist, sexist expectations of health and fitness; you’re meeting spiritual, mental, and incidentally physical needs that the establishment’s conditioned you to neglect. The shock is a reminder of what they’ve stolen from you.
You’ll probably still get accusations of internalized kyriarchy that way, but it ought to at least be controversial, and it won’t get you accused of mansplaining.
I think this is still too logical to work. Each step of an argument is another place that can be attacked. And because attacks are allowed to be illogical, even the most logical step has maybe a 50% chance of breaking the chain. The shortest, and therefore the most powerful, argument is simply “X offends me!” (But to use this argument, you must belong to a group whose feelings are included in the social justice utility function.)
Now that I think about it, this probably explains why in this kind of debate you never get an explanation, only an angry “It’s not my job to educate you!” when you ask about something. Using arguments and explanations is a losing strategy. (Also, it is what the bad guys do. You don’t want to be pattern-matched to them.) Which is why people skilled in playing the game never provide explanations.
I hope your rationalist toucan is signed up for cryonics. :P
I’m sure it depends on where you hang out, but I’ve seen plenty of explanations from social justice people. A sample:
Impressive.
In the linked article the author mentions that there are multiple definitions of racism and people often aren’t clear about which one they use; and then decides to use the one without “..., but only when white people do it” as a default. She also says that it is okay if white authors decide to write only white characters, but if they also write non-white characters, they should describe their experiences realistically. (Then in the comments someone asks whether saying that every human being is racist doesn’t render the word meaningless, and there is no outrage afterwards. Other people mention that calling someone racist is usually used just to silence or insult them.)
I am not sure whether this should even be called “social justice”. It just seems like common sense to me. (This specific article, that is; I haven’t read more from the same author yet.)
Somewhat related—writing this comment I realized that I am kinda judging the sanity of the author by how much I agree with her. When I put it this way, it seems horrible. (“You are sane if and only if you agree with me.”) But I admit it is a part of the algorithm I use. Is that a reason to worry? But then I remembered the parable that all correct maps of the same city are necessarily similar to each other, although finding a set of similar maps does not guarantee their correctness (they could be copies of the same original wrong map). So, if you spend some time trying to make a map that reflects the territory better, and you believe you are sane enough, you should expect the maps of other sane people to be similar to yours. Of course this shouldn’t be your only criterion. But, uhm, extraordinary maps require extraordinary evidence; or at least some evidence.
Perhaps social justice done right should just seem like common sense (to reasonable people). I mean, what’s the alternative? Social injustice?
It would be a pity to use the term “social justice” to describe only facepalming irrationality. I mean, you then get this No True Scotsman sort of thing (maybe we should call it No True Nazi or something) where you refuse to say that someone’s engaged in “social justice” even though what they’re doing is crusading against sexism, racism, patriarchy, etc., simply because no True Social Justice Warrior would engage in rational debate or respond to disagreement with sensible engagement rather than outrage.
(Minor vested interest disclosure: I happen to know some people who are both quite social-justice-y and quite rational, and I would find it unfortunate to be unable to say that on account of “social justice” and “rationality” getting gratuitously exclusive definitions.)
Slightly off topic, but can I ask why patriarchy is assumed to be obviously bad?
I can certainly see the negative aspects of even moderate patriarchy, and wouldn’t endorse extreme patriarchy or all forms of it, but its positive aspect seems to be civilization as we know it. It makes monogamy viable, reduces the time preferences of the people in a society, makes men invested in society by encouraging them to become fathers and husbands, boosts fertility rates to above replacement, likely makes the average man more attractive to the average woman improving many relationships, results in a political system of easily scalable hierarchy, etc.
I wasn’t assuming it’s obviously bad, I was describing it as a thing social-justice types characteristically crusade against.
As to whether moderate patriarchy is good or bad or mixed or neutral—I imagine it depends enormously on how you define the term.
The post reads very much like you are implying they are bad, but I’ll update on your response that you didn’t.
So, like with “rationality” and “Hollywood rationality”, we could have “social justice” and, uhm, “tumblr social justice”? Maybe this would work.
My main objection would be that the words “social justice” already feel like a weird way to express “equality” or something like that. It takes a word that already meant something (“justice”) and adds an adjective that allows you to remove or redefine its parts, making it a flexible applause light.
Historical note, as I understand things—the emotionally abusive power grab aspects didn’t happen by coincidence. A good many people said that if they were polite and reasonable, what they said got ignored, so they started dumping rage.
I propose an alternative explanation. Some people are just born psychopaths; they love to hurt other people.
Whatever nice cause you start, if it gains just a little power, sooner or later one of them will notice it and decide they like it. Then they will try to join it and optimize it for their own purposes. You will recognize that this has happened when people around you start repeating memes about how hurting other people is actually good for your cause. Now, in such an environment, the people most skilled in hurting others can quickly rise to the top.
(Actually, both our explanations can be true at the same time. Maybe any movement that doesn’t open its doors to psychopaths is doomed in the long term, because other people simply don’t have enough power to change the society.)
And then they complain when anybody else is ‘uncivil’.
I called it an emotionally abusive power grab because that’s how I see it.
Nonetheless, I still think they’re right about some of their issues.
I’d expect rage to be better at converting people already predisposed to belief into True Believers, but worse at making believers of the undecided, and much worse at winning over those predisposed to opposition.
The rage level actually drives away some of the people who would be inclined to help them, and has produced something that looks a lot like PTSD in some of the people in the movement who got hit by opposition from others who were somewhat on the same side.
Still, they’ve gained a certain amount of ground on average. I have no idea what the outcome will be.
Well, if you can vaguely imply that it might be physically dangerous to disagree, a little rage can work wonders.
As far as I can tell, there’s very little in the way of physical threats, but (most) people are very vulnerable to emotional attacks.
As I understand it, that’s part of what’s powering SJWs—they felt (and I’d say rightly) that they were and are subject to pervasive emotional attack both from the culture and from individuals, and are trying to make a world they can be comfortable in.
That “as I understand it” is not boilerplate—I read a fair amount of SJ material and (obviously) spent a lot of time thinking and obsessing about it, but this is a huge subject (and isn’t the same in all times, places, and sub-cultures), and I’ve never been an insider.
That would be one option. Or (this is different because “Hollywood rationality” is not actually a variety of rationality) we could say that both those things really are varieties of social justice, but one of them is social justice plus a bunch of crazy ideas and attitudes that unfortunately happen to have come along for the ride in various social-justice-valuing venues.
I don’t think “social justice” is just a weirdly contorted way to say “equality”. The addition of an adjective is necessary because “justice” simpliciter covers things like imprisoning criminals rather than innocent bystanders, and not having kleptocratic laws; “social justice” means something like “justice in people’s social interactions”. In some cases that’s roughly the same thing as equality, but in others equality might be the wrong thing (because different groups want different things, or because some historical injustice is best dealt with by a temporary compensating inequality in the other direction). Whether such inequality ever is a good approach, and how often if so, is a separate matter; but unless it’s inconceivable, “equality” can’t be the right word.
Still, I’m not greatly enamoured of the term “social justice”. But it’s there, and it seems like it means something potentially useful, and it would be a shame if it ended up only being applicable where there’s a whole lot of craziness alongside the concern for allegedly marginalized groups.
That doesn’t seem horrible to me. There are many ways of being insane, but one of them is having a very wrong map (and you can express one of the standard criteria for clinical-grade mental illness—interferes with functioning in normal life—as “your map is so wrong you can’t traverse the territory well”).
I think the critical difference here is whether you disagree about facts (which are, hopefully, empirically observable and statements about them falsifiable) or whether you disagree about values, opinions, and forecasts. Major disagreement about facts is a good reason to doubt someone’s sanity, but about values and predictions is not.
I’m glad you liked it.
Since I’d have to overcome a really strong ugh field to read it again, I’d like to check on whether my memory of it is correct—the one thing I didn’t like about it was Mohanraj saying (implying?) that if you behave decently you won’t be attacked. She was making promises about people who aren’t as rational as she is.
Why an ugh field? Those essays came out when racefail was going on, and came with the added info that it took Mohanraj two and a half weeks to write them, and (at least as I read it) I should feel really guilty that a woman of color had to do the work. I just couldn’t deal. I’m pretty sure the guilt trip wasn’t from Mohanraj.
I read them later, and thought they were good except for the caveat mentioned above.
I don’t think reinforcing stupidity is a good idea.
“Never argue with stupid people, they will drag you down to their level and then beat you with experience.” ― Mark Twain
This is that level:
That line was somewhat tongue-in-cheek. I wouldn’t go that far over the top in a real discussion, although I might throw in a bit of anti-*ist rhetoric as an expected shibboleth.
That being said, these people aren’t stupid. They don’t generally have the same priorities or epistemology that we do, and they’re very political, but that’s true of a lot of people outside the gates of our incestuous little nerd-ghetto. Winning, in the real world, implies dealing with these people, and that’s likely to go a lot better if we understand them.
Does that mean we should go out and pick fights with mainstream social justice advocates? No, of course not. But putting ourselves in their shoes every now and then can’t hurt.
This makes some sense. I think part of the reason my contribution was taken so badly was, as I said, that I was arguing in a style that was clearly different to that of the rest of those present, and as such I was (in Viliam Bur’s phrasing) pattern-matched as a bad guy. (In other words, I didn’t use the shibboleths.)
Significantly, no-one seemed to take issue with the actual thrust of my point.
Of course, but only somewhat :-)
“These people” are not homogenous and there are a lot of idiots among them. However, what most of them are is mindkilled. They won’t update, so why bother?
Because we occasionally might want to convince them of things, and we can’t do that without understanding what they want to see in an argument. Or, more generally, because it behooves us to get better at modeling people that don’t share our epistemology or our (at least, my) contempt for politics.
So, um, if you really let Jesus into your heart and accept Him as your personal savior you will see that He wants you to donate 50% of your salary to GiveWell’s top charities..?
True, but you don’t do that by mimicking their rhetoric.
The point isn’t to blindly mimic their rhetoric, it’s to talk their language: not just the soundbites, but the motivations under them. To use your example, talking about letting Jesus into your heart isn’t going to convince anyone to donate a large chunk of their salary to GiveWell’s top charities. There’s a Christian argument for charity already, though, and talking effective altruism in those terms might well convince someone that accepts it to donate to real charity rather than some godawful sad puppies fund; or to support or create Christian charities that use EA methodology, which given comparative advantage might be even better. But you’re not going to get there without understanding what makes Christian charity tick, and it’s not the simple utilitarian arguments that we’re used to in an EA context.
There is a price: to talk in their language is to accept their framework. If you are making an argument in terms of fighting the oppression of white male patriarchy, you implicitly agree that the white male patriarchy is in the business of oppression and needs to be fought. If you’re using the Christian argument for charity to talk effective altruism, you are implicitly accepting the authority of Jesus.
Yes, you are. That’s a price you need to pay if you want to get something out of mindkilled people, which incidentally tends to be the first step in introducing outside ideas and thereby making them less mindkilled. Reject it in favor of some kind of radical honesty policy, and unless you’re very lucky and very charismatic you’ll find yourself with no allies and few friends. But hey, you’ll have the moral high ground! I hear that and $1.50 will get you a cup of coffee.
(My argument in the ancestor wasn’t really about fighting the white male patriarchy, though; the rhetoric about that is just gingerbread, like appending “peace be upon him” to the name of the Prophet. It’s about the importance of subjective experience and a more general contrarianism—which are also SJ themes, just less obvious ones.)
Maybe it’s the price you need to pay, but I don’t see how being able to get something out of mindkilled people is the first step in making them less mindkilled. You got what you wanted and paid for it by reinforcing their beliefs—why would they become more likely to change them?
I am not going for radical honesty. What I’m suspicious of is using arguments which you yourself believe are bullshit and at the same time pretending to be a bona fide member of a tribe to which you don’t belong.
And, by the way, there seems to be a difference between Jesus and SJ here. When talking to a Christian I can be “radically honest” and say something along the lines of “I myself am not a Christian, but you are, and don’t you recall how Jesus said that …”. But that doesn’t work with SJWs—if I start by saying “I myself don’t believe in white male oppression, but you do, and therefore you should conclude that...”, I will be immediately crucified for the first part and no one will pay any attention to the second.
You’re not substantially reinforcing their beliefs. Beliefs entangled with your identity don’t follow Bayesian rules: directly showing anything less than overpoweringly strong evidence against them (and even that isn’t a sure thing) tends to reinforce them by provoking rationalization, while accepting them is noise. If you don’t like Christianity, you wouldn’t want to use the Christian argument for charity with a weak or undecided Christian; but they aren’t going to be mindkilled in this regard, so it wouldn’t make a good argument anyway.
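(Toy numbers, mine, to illustrate the asymmetry: an ideal reasoner at P(Christianity) = 0.9, i.e. 9:1 odds, who hears an argument twice as likely to be made if Christianity is false as if it is true should update to posterior odds of 9:2, i.e. P ≈ 0.82. The identity-entangled believer instead rationalizes and ends up at 0.9 or above, while your accepting the belief for argument’s sake carries a likelihood ratio of roughly 1: noise, as I said.)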
On the other hand, sneaking new ideas into someone’s internal memetic ecosystem tends to put stress on any totalizing identities they’ve adopted. For example, you might have to invoke God’s commandment to love thy neighbor as thyself to get a fundamentalist Christian to buy EA in the first place; but now they have an interest in EA, which could (e.g.) lead them to EA forums sharing secular humanist assumptions. Before, they’d have dismissed this as (e.g.) some kind of pathetic atheist attempt at constructing a morality in the absence of God. But now they have a shared assumption, a point of commonality. That’ll lead to cognitive dissonance, but only in the long run—timescales you can’t work on unless you’re very good friends with this person.
That cognitive dissonance won’t always resolve against Christianity, but sometimes it will. And when it doesn’t, you’ll usually still have left them with a more nuanced and less stereotypical Christianity.
Well, yes, if we’re talking about a single conversation, especially over the ’net, you are not going to affect much of anything. Still, even if you do not reinforce, you confirm. And there are different ways to get mindkilled; entangling your identity with beliefs is only one of them...
True, but the same caveat applies—if we’re talking about one or two conversations you’re not going to produce much if any effect.
In any case, my line of thinking in this subthread wasn’t concerned so much with the effectiveness of deconversion, but rather was more about the willingness to employ arguments that you don’t believe but your discussion opponent might. I understand the need to talk to people in the language they understand, but there is a fine line to walk here.
That works a lot less well arguing against someone who is claiming to be from that culture.
So? Most of the “traditional practices” SJ types sanctify are fabrications. That doesn’t stop them from sanctifying them.
I’ve more than once seen people accused of not really being whatever they claim to be. “You’re wrong about your culture’s traditional practices” isn’t a legal move, but “you’re obviously an imposter” is.
A lot of people are pointing out that perhaps it wasn’t very wise for you to engage with such commenters. I mostly agree. But I also partially disagree. The negative effects of you commenting there, of course, are very clear. But, there are positive effects as well.
The outside world—i.e. outside the rationalist community and academia—shouldn’t get too isolated from us. While many people made stupid comments, I’m sure that there were many more people who looked at your argument and went, “Huh. Guess I didn’t think of that,” or at least registered some discomfort with their currently held worldview. Of course, none of them would’ve commented.
Also, I’m sure your way of argumentation appealed to many people, and they’ll be on the lookout for this kind of argumentation in the future. Maybe one of them will eventually stumble upon LW. Looking at the quality of argumentation was also how I selected which blogs to follow: I tried (and often failed) to avoid those blogs that employed rhetoric and emotional manipulation. One of the good blogs eventually linked to LW.
Thus, while the cost to you was probably great and perhaps wasn’t worth the effort, I don’t think it was entirely fruitless.
You’re right.
I was glad to at least disrupt the de facto consensus. I agree that it’s worth bearing in mind the silent majority of the audience as well as those who actually comment. The former probably outnumber the latter by an order of magnitude (or more?).
I suppose the meta-level point was also worth conveying. Ultimately, I don’t care a great deal about the object-level point (how one should feel about a silly motivational bracelet) but the tacit, meta-level point was perhaps: “There are other ways, perhaps more useful, to evaluate things than the amount of moral indignation one can generate in response.”
I don’t think it’s a good idea to get into a discussion on any forum where the term “mansplaining” is used to stifle dissent, even (or especially) if you have “a clear, concise, self-contained point”.
True for a serious discussion, but such forums make for interesting ethnographic expeditions :-) And if you’re not above occasional trolling for teh lulz… :-D
Seems somehow related: r/drama
Um, why?
I mean, walking through a monkey house when all they’re going to do is fling shit everywhere isn’t something I would choose to do.
Only because I had a clear, concise, self-contained point to make and I figured I’d be able to walk away once I was done. I’ll know better next time.
I wasn’t sure about getting into discussion of the specific point, but other people are...
http://www.moveandbefree.com/blog/laziness-doesnt-exist
Here’s an example from someone who believes strongly in cultivating internal motivation—the opposite of shocking yourself if you don’t do enough crudely monitored exercise.
The punishment approach to exercise arguably makes people less likely to exercise at all, and I think it increases the risk of injuries from exercise.
There really is a cultural problem—how popular is the approach from the link compared to The Biggest Loser and boot camps for civilians?
Sidetrack: I’m imagining a shock bracelet to discourage involvement in pointless internet arguments. How would it identify them? Would people use it?
That’s probably a FAI-complete problem. See also: http://xkcd.com/810/
A thing I would like is this. I would totally enable this on LW if it was an option. (And if someone volunteered to write a Firefox plugin to achieve the same client-side, they’d have all my gratefulness.)
Done. Client-side version, that is.
The whole idea of optimisation is controversial among some people because they see it as the opposite of being yourself.
It’s no weird edge case. If I remember right, there was a recent study that came to that conclusion and went through the media.
True that.
The shock bracelet intrigues me. I imagine it could be interfaced to an app that could give shocks under all manner of chosen conditions. Do you have any more details? Is it a real thing, or (like this) just clickbait that no-one intends actually making?
It’s called the Pavlok. It seems to be able to monitor a variety of criteria, some fairly smart.
Wow, it is indeed a real thing! Thank you for posting this.
I think this has the same problem as any kind of self-conditioning. I watched the video, and the social community and gaming aspects actually seem motivating, but I’m not sure about the punishment, because you can always take the wristband off. Maybe there’s a commitment and social pressure not to take the wristband off, but ultimately you yourself are responsible for keeping the wristband on your wrist, and this is basically self-conditioning. Yvain made a good post about it.
If the zap had any kind of motivating effect, wouldn’t that effect first be directed towards taking the wristband off your wrist, rather than towards a much more distant and complex sequence of actions like going to the gym? I don’t think a small zap on its own could motivate me to do even something simple, like leaving the computer. Also, I agree with Yvain that rewards and punishments only seem to have a real effect when they happen unpredictably.
A more low-tech solution, which is recommended by countless self-help books/webpages of dubious authority, is to snap a rubber band against your own wrist when you have done something bad. It seems this should work roughly as well as the Pavlok? In theory it should suffer from the same “can’t condition yourself” problem. On the other hand, if lots of people recommend it, then maybe it works?
I suspect that if electric zapping or snapping a rubber band work (I don’t know if they do), they do so by raising your level of attention to the problematic behaviour. A claim of Perceptual Control Theory is that reorganisation—learning to control something better—follows attention. Yanking your attention onto the situation whenever you’re contemplating or committing sinful things may enable you to stop wanting to commit them.
See also the use of the cilice.
I’ve mostly seen that technique described as a way to cope with self-harm.
Classic bash.org :-D
I wonder what they think of Beeminder, which allows you to financially torture yourself over anything you want. Not that I’m going to go over there, wherever it is, to ask.
I mentioned beeminder and that I use it. Don’t think anyone picked up on that part, cash evidently being less triggering than electricity.